#### Sample records for energy interval approximation

1. Optimal Approximation of Quadratic Interval Functions

NASA Technical Reports Server (NTRS)

Koshelev, Misha; Taillibert, Patrick

1997-01-01

Measurements are never absolutely accurate. As a result, after each measurement we do not get the exact value of the measured quantity; at best, we get an interval of its possible values. For dynamically changing quantities x, the additional problem is that we cannot measure them continuously; we can only measure them at certain discrete moments of time t(sub 1), t(sub 2), ... If we know that the value x(t(sub j)) at the moment t(sub j) of the last measurement was in the interval [x-(t(sub j)), x+(t(sub j))], and if we know an upper bound D on the rate with which x changes, then, for any given moment of time t, we can conclude that x(t) belongs to the interval [x-(t(sub j)) - D (t - t(sub j)), x+(t(sub j)) + D (t - t(sub j))]. This interval changes linearly with time and is, therefore, called a linear interval function. When we process these intervals, we get expressions that are quadratic and higher order with respect to time t. Such "quadratic" intervals are difficult to process, and it is therefore necessary to approximate them by linear ones. In this paper, we describe an algorithm that gives the optimal approximation of quadratic interval functions by linear ones.
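
The linear interval bound described in this abstract is simple to state in code. A minimal sketch (the function name and the example numbers are ours, not from the paper):

```python
def linear_interval(x_lo, x_hi, D, t_j, t):
    """Bound x(t) given x(t_j) in [x_lo, x_hi] and a rate bound |dx/dt| <= D.

    The interval's width grows linearly with (t - t_j), hence the name
    'linear interval function'.
    """
    dt = t - t_j
    assert dt >= 0 and D >= 0
    return (x_lo - D * dt, x_hi + D * dt)

# Example: x(0) measured in [1.0, 1.2], |dx/dt| <= 0.5; bound x at t = 2.
lo, hi = linear_interval(1.0, 1.2, 0.5, 0.0, 2.0)
```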

2. Function approximation using adaptive and overlapping intervals

SciTech Connect

Patil, R.B.

1995-05-01

A problem common to many disciplines is to approximate a function given only its values at various points in the input variable space. A method is proposed for approximating a function of several variables to one variable. The model takes the form of a weighted average of overlapping basis functions defined over intervals. The number of such basis functions and their parameters (widths and centers) are determined automatically from the given training data by a learning algorithm. The proposed algorithm can be seen as placing a nonuniform multidimensional grid with overlapping cells in the input domain. The non-uniformity and overlap of the cells are achieved by a learning algorithm that optimizes a given objective function. This approach is motivated by the fuzzy modeling approach and by learning algorithms used for clustering and classification in pattern recognition. The basics of why and how the approach works are given. A few examples of nonlinear regression and classification are modeled. The relationship between the proposed technique and radial basis neural networks, kernel regression, probabilistic neural networks, and fuzzy modeling is explained. Finally, advantages and disadvantages are discussed.
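
The weighted-averaging model described here can be sketched with Gaussian bumps standing in for the overlapping interval basis functions (the function name, bump shape, and example values are our illustrative choices, not the paper's):

```python
import math

def overlap_approx(x, centers, widths, values):
    """Weighted average of overlapping basis functions (fuzzy-modeling style):
    each cell contributes its value in proportion to its activation at x."""
    act = [math.exp(-((x - c) / w) ** 2) for c, w in zip(centers, widths)]
    return sum(a * v for a, v in zip(act, values)) / sum(act)

# Three overlapping cells roughly approximating f(x) = x^2 on [0, 2]:
centers = [0.0, 1.0, 2.0]
widths = [0.7, 0.7, 0.7]
values = [0.0, 1.0, 4.0]
y_mid = overlap_approx(1.0, centers, widths, values)
```

In the full method, the centers and widths would themselves be tuned by the learning algorithm rather than fixed by hand as above.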

3. Improved cosmic ray ionization model for the system lower ionosphere-middle atmosphere. Determination of approximation energy interval characteristics for the particle penetration

Velinov, Peter; Mateev, Lachezar

The effects of galactic and solar cosmic rays (CRs) in the middle atmosphere are considered in this work. We take into account the modulation of CRs by the solar wind, as well as the anomalous CR component. In effect, CRs determine the electric conductivity in the middle atmosphere and thereby influence its electric processes. Because they are modulated by the solar wind, CRs introduce solar variability into the terrestrial atmosphere and ozonosphere. A new analytical approach for CR ionization by protons and nuclei with charge Z in the lower ionosphere and the middle atmosphere is developed in this paper. For this purpose, the ionization losses (dE/dh) of the energetic charged particles, given by the Bohr-Bethe-Bloch formula, are approximated in three different energy intervals. More accurate expressions for the CR energy decrease E(h) and for the electron production rate profiles q(h) are derived. The obtained formulas allow comparatively easy computer programming. q(h) is determined by the solution of a 3D integral that takes the geomagnetic cut-off rigidity into account. The integrand in q(h) permits the application of suitable numerical methods, in this case Gauss quadrature and Romberg extrapolation, for the solution of the mathematical problem. Computations of CR ionization in the middle atmosphere are made, and the contributions of the different approximation energy intervals are presented. In this way the interaction of CR particles with the upper and middle atmosphere is described much more realistically. The full CR composition is taken into account: protons, helium (alpha particles), and the light (L), medium (M), heavy (H), and very heavy (VH) groups of nuclei. The computations are made for different geomagnetic cut-off rigidities R in the altitude interval 35-120 km. The COSPAR International Reference Atmosphere CIRA'86 is applied in the computer program for the neutral density and scale height values. The proposed improved CR ionization model will contribute to the

4. A comparison of approximate interval estimators for the Bernoulli parameter

NASA Technical Reports Server (NTRS)

Leemis, Lawrence; Trivedi, Kishor S.

1993-01-01

The goal of this paper is to compare the accuracy of two approximate confidence interval estimators for the Bernoulli parameter p. The approximate confidence intervals are based on the normal and Poisson approximations to the binomial distribution. Charts are given to indicate which approximation is appropriate for certain sample sizes and point estimators.
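
The normal (Wald) approximation mentioned in this abstract can be sketched as follows; the sample-size rule of thumb in the comment is a common guideline, not taken from the paper:

```python
import math

def wald_ci(successes, n, z=1.96):
    """Normal-approximation (Wald) CI for the Bernoulli parameter p.

    A common rule of thumb: reasonable when n*p_hat and n*(1 - p_hat)
    are both fairly large (say, at least 5).
    """
    p = successes / n
    half = z * math.sqrt(p * (1.0 - p) / n)
    return max(0.0, p - half), min(1.0, p + half)

# 40 successes in 100 trials, 95% confidence:
lo, hi = wald_ci(40, 100)
```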

5. Constructing Approximate Confidence Intervals for Parameters with Structural Equation Models

ERIC Educational Resources Information Center

Cheung, Mike W.-L.

2009-01-01

Confidence intervals (CIs) for parameters are usually constructed based on the estimated standard errors. These are known as Wald CIs. This article argues that likelihood-based CIs (CIs based on likelihood ratio statistics) are often preferred to Wald CIs. It shows how the likelihood-based CIs and the Wald CIs for many statistics and psychometric…
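
A likelihood-based CI of the kind the article advocates can be illustrated for a simple binomial proportion, inverting the likelihood-ratio statistic by bisection (the implementation details are ours; the article itself concerns structural equation models):

```python
import math

def loglik(p, k, n):
    return k * math.log(p) + (n - k) * math.log(1.0 - p)

def lr_ci(k, n, crit=3.841, tol=1e-8):
    """Approximate 95% likelihood-ratio CI for binomial p: all p whose
    deviance 2*(loglik(p_hat) - loglik(p)) stays below the chi-square
    critical value. Unlike the Wald CI, it need not be symmetric about
    p_hat and always stays inside (0, 1). Assumes 0 < k < n."""
    p_hat = k / n
    l_max = loglik(p_hat, k, n)

    def endpoint(lo, hi, rising):
        # deviance falls toward p_hat on the left, rises on the right;
        # bisect for the p where it crosses `crit`
        while hi - lo > tol:
            mid = (lo + hi) / 2.0
            dev = 2.0 * (l_max - loglik(mid, k, n))
            if (dev > crit) == rising:
                hi = mid
            else:
                lo = mid
        return (lo + hi) / 2.0

    return endpoint(1e-12, p_hat, False), endpoint(p_hat, 1.0 - 1e-12, True)

lo, hi = lr_ci(40, 100)
```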

6. A Novel Method of the Generalized Interval-Valued Fuzzy Rough Approximation Operators

PubMed Central

Xue, Tianyu; Xue, Zhan'ao; Cheng, Huiru; Liu, Jie; Zhu, Tailong

2014-01-01

Rough set theory is a suitable tool for dealing with the imprecision, uncertainty, incompleteness, and vagueness of knowledge. In this paper, new lower and upper approximation operators for generalized fuzzy rough sets are constructed, and their definitions are expanded to the interval-valued environment. Furthermore, the properties of this type of rough sets are analyzed. These operators are shown to be equivalent to the generalized interval fuzzy rough approximation operators introduced by Dubois, which are determined by any interval-valued fuzzy binary relation expressed in a generalized approximation space. The main properties of these operators are discussed under different interval-valued fuzzy binary relations, and illustrative examples are given to demonstrate the main features of the proposed operators. PMID:25162065

7. Approximate representations of random intervals for hybrid uncertainty quantification in engineering modeling

SciTech Connect

Joslyn, C.

2004-01-01

We review our approach to the representation and propagation of hybrid uncertainties through high-complexity models, based on quantities known as random intervals. These structures have a variety of mathematical descriptions, for example as interval-valued random variables, statistical collections of intervals, or Dempster-Shafer bodies of evidence on the Borel field. But methods which provide simpler, albeit approximate, representations of random intervals are highly desirable, including p-boxes and traces. Each random interval, through its cumulative belief and plausibility measure functions, generates a unique p-box whose constituent CDFs are all of those consistent with the random interval. In turn, each p-box generates an equivalence class of random intervals consistent with it. Then, each p-box necessarily generates a unique trace which stands as the fuzzy set representation of the p-box or random interval. In turn, each trace generates an equivalence class of p-boxes. The heart of our approach is to try to understand the tradeoffs between error and simplicity introduced when p-boxes or traces are used to stand in for various random interval operations. For example, Joslyn has argued that for elicitation and representation tasks, traces can be the most appropriate structure, and has proposed a method for the generation of canonical random intervals from elicited traces. But alternatively, models built as algebraic equations of uncertainty-valued variables (in our case, random-interval-valued) propagate uncertainty through convolution operations on basic algebraic expressions, and while convolution operations are defined on all three structures, we have observed that the results of only some of these operations are preserved as one moves through these three levels of specificity. We report on the status and progress of this modeling approach concerning the relations between these mathematical structures within this overall framework.

8. An approximation of interval type-2 fuzzy controllers using fuzzy ratio switching type-1 fuzzy controllers.

PubMed

Tao, C W; Taur, Jinshiuh; Chuang, Chen-Chia; Chang, Chia-Wen; Chang, Yeong-Hwa

2011-06-01

In this paper, the interval type-2 fuzzy controllers (FC(IT2)s) are approximated using the fuzzy ratio switching type-1 FCs to avoid the complex type-reduction process required for the interval type-2 FCs. The fuzzy ratio switching type-1 FCs (FC(FRST1)s) are designed to be a fuzzy combination of the possible-leftmost and possible-rightmost type-1 FCs. The fuzzy ratio switching type-1 fuzzy control technique is applied with the sliding control technique to realize the hybrid fuzzy ratio switching type-1 fuzzy sliding controllers (HFSC(FRST1)s) for the double-pendulum-and-cart system. The simulation results and comparisons with other approaches are provided to demonstrate the effectiveness of the proposed HFSC(FRST1)s. PMID:21189244

9. Non-Gaussian distributions of melodic intervals in music: The Lévy-stable approximation

Niklasson, Gunnar A.; Niklasson, Maria H.

2015-11-01

The analysis of structural patterns in music is of interest in order to increase our fundamental understanding of music, as well as for devising algorithms for computer-generated music, so-called algorithmic composition. Musical melodies can be analyzed in terms of a "music walk" between the pitches of successive tones in a musical score, in analogy with the "random walk" model commonly used in physics. We find that the distribution of melodic intervals between tones can be approximated with a Lévy-stable distribution. Since music also exhibits self-affine scaling, we propose that the "music walk" should be modelled as a Lévy motion. We find that the Lévy motion model captures basic structural patterns in classical as well as in folk music.

10. Analysis of accuracy of approximate, simultaneous, nonlinear confidence intervals on hydraulic heads in analytical and numerical test cases

USGS Publications Warehouse

Hill, M.C.

1989-01-01

Inaccuracies in parameter values, parameterization, stresses, and boundary conditions of analytical solutions and numerical models of groundwater flow produce errors in simulated hydraulic heads. These errors can be quantified in terms of approximate, simultaneous, nonlinear confidence intervals presented in the literature. Approximate confidence intervals can be applied in both error and sensitivity analysis and can be used prior to calibration or when calibration was accomplished by trial and error. The method is expanded for use in numerical problems, and the accuracy of the approximate intervals is evaluated using Monte Carlo runs. Four test cases are reported. -from Author
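
The Monte Carlo evaluation of interval accuracy mentioned above can be sketched for a simple case, checking how often an approximate interval actually covers the true parameter (the Bernoulli setting and all names here are our illustrative choices, not from the paper):

```python
import math
import random

def wald_ci(k, n, z=1.96):
    """Normal-approximation interval, used here as the 'approximate CI'."""
    p = k / n
    h = z * math.sqrt(p * (1.0 - p) / n)
    return p - h, p + h

def mc_coverage(p_true, n, runs=2000, seed=1):
    """Monte Carlo accuracy check: fraction of simulated data sets whose
    interval contains the true parameter (nominally ~0.95 here)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(runs):
        k = sum(rng.random() < p_true for _ in range(n))
        lo, hi = wald_ci(k, n)
        hits += lo <= p_true <= hi
    return hits / runs

cov = mc_coverage(0.5, 100)
```

If the empirical coverage falls well below the nominal level, the approximate interval is too optimistic for that sample size and parameter value.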

11. Accuracy in Parameter Estimation for the Root Mean Square Error of Approximation: Sample Size Planning for Narrow Confidence Intervals

ERIC Educational Resources Information Center

Kelley, Ken; Lai, Keke

2011-01-01

The root mean square error of approximation (RMSEA) is one of the most widely reported measures of misfit/fit in applications of structural equation modeling. When the RMSEA is of interest, so too should be the accompanying confidence interval. A narrow confidence interval reveals that the plausible parameter values are confined to a relatively…

12. Kinetic energy density dependent approximations to the exchange energy

Ernzerhof, Matthias; Scuseria, Gustavo E.

1999-07-01

Two nonempirical kinetic energy density dependent approximations are introduced. First, the local τ approximation (LTA) is proposed, in which the exchange energy E_x depends only on the kinetic energy density τ. This LTA scheme appears to be complementary to the local spin density (LSD) approximation in the sense that its exchange contribution to the atomization energy, ΔE_x = E_x(atoms) - E_x(molecule), is fairly accurate for systems where LSD fails. On the other hand, in cases where LSD works well, LTA results for ΔE_x are worse. Secondly, the τPBE approximation to E_x is developed, which combines some of the advantages of LTA and of the Perdew-Burke-Ernzerhof (PBE) exchange functional. Like the PBE exchange functional, τPBE is free of empirical parameters. Furthermore, it yields improved atomization energies compared to the PBE approximation.

13. Energy Equation Approximation in Fluid Mechanics

NASA Technical Reports Server (NTRS)

Goldstein, Arthur W.

1959-01-01

There is some confusion in the literature of fluid mechanics in regard to the correct form of the energy equation for the study of the flow of nearly incompressible fluids. Several forms of the energy equation and their use are therefore discussed in this note.

14. Proportional damping approximation using the energy gain and simultaneous perturbation stochastic approximation

Sultan, Cornel

2010-10-01

The design of vector second-order linear systems for accurate proportional damping approximation is addressed. For this purpose an error system is defined using the difference between the generalized coordinates of the non-proportionally damped system and its proportionally damped approximation in modal space. The accuracy of the approximation is characterized using the energy gain of the error system and the design problem is formulated as selecting parameters of the non-proportionally damped system to ensure that this gain is sufficiently small. An efficient algorithm that combines linear matrix inequalities and simultaneous perturbation stochastic approximation is developed to solve the problem and examples of its application to tensegrity structures design are presented.

15. Energy flow: image correspondence approximation for motion analysis

Wang, Liangliang; Li, Ruifeng; Fang, Yajun

2016-04-01

We propose a correspondence approximation approach between temporally adjacent frames for motion analysis. First, an energy map is established to represent image spatial features on multiple scales using Gaussian convolution. On this basis, the energy flow at each layer is estimated using Gauss-Seidel iteration according to the energy invariance constraint. More specifically, at the core of the energy invariance constraint is an "energy conservation law" assuming that the spatial energy distribution of an image does not change significantly with time. Finally, the energy flow field at different layers is reconstructed by considering different smoothness degrees. Due to the multiresolution origin and energy-based implementation, our algorithm is able to quickly address correspondence-searching issues in spite of background noise or illumination variation. We apply our correspondence approximation method to motion analysis, and experimental results demonstrate its applicability.

16. Bethe free-energy approximations for disordered quantum systems

Biazzo, I.; Ramezanpour, A.

2014-06-01

Given a locally consistent set of reduced density matrices, we construct approximate density matrices which are globally consistent with the local density matrices we started from when the trial density matrix has a tree structure. We employ the cavity method of statistical physics to find the optimal density matrix representation by slowly decreasing the temperature in an annealing algorithm, or by minimizing an approximate Bethe free energy depending on the reduced density matrices and some cavity messages originated from the Bethe approximation of the entropy. We obtain the classical Bethe expression for the entropy within a naive (mean-field) approximation of the cavity messages, which is expected to work well at high temperatures. In the next order of the approximation, we obtain another expression for the Bethe entropy depending only on the diagonal elements of the reduced density matrices. In principle, we can improve the entropy approximation by considering more accurate cavity messages in the Bethe approximation of the entropy. We compare the annealing algorithm and the naive approximation of the Bethe entropy with exact and approximate numerical simulations for small and large samples of the random transverse Ising model on random regular graphs.

17. Restricted second random phase approximations and Tamm-Dancoff approximations for electronic excitation energy calculations.

PubMed

Peng, Degao; Yang, Yang; Zhang, Peng; Yang, Weitao

2014-12-01

In this article, we develop systematically second random phase approximations (RPA) and Tamm-Dancoff approximations (TDA) of particle-hole and particle-particle channels for calculating molecular excitation energies. The second particle-hole RPA/TDA can capture double excitations missed by the particle-hole RPA/TDA and time-dependent density-functional theory (TDDFT), while the second particle-particle RPA/TDA recovers non-highest-occupied-molecular-orbital excitations missed by the particle-particle RPA/TDA. With proper orbital restrictions, these restricted second RPAs and TDAs have a formal scaling of only O(N^4). The restricted versions of second RPAs and TDAs are tested with various small molecules to show some positive results. Data suggest that the restricted second particle-hole TDA (r2ph-TDA) has the best overall performance with a correlation coefficient similar to TDDFT, but with a larger negative bias. The negative bias of the r2ph-TDA may be induced by the unaccounted ground state correlation energy to be investigated further. Overall, the r2ph-TDA is recommended to study systems with both single and some low-lying double excitations with a moderate accuracy. Some expressions on excited state property evaluations, such as ⟨Ŝ^2⟩, are also developed and tested. PMID:25481124

18. Restricted second random phase approximations and Tamm-Dancoff approximations for electronic excitation energy calculations

SciTech Connect

Peng, Degao; Yang, Yang; Zhang, Peng; Yang, Weitao

2014-12-07

In this article, we develop systematically second random phase approximations (RPA) and Tamm-Dancoff approximations (TDA) of particle-hole and particle-particle channels for calculating molecular excitation energies. The second particle-hole RPA/TDA can capture double excitations missed by the particle-hole RPA/TDA and time-dependent density-functional theory (TDDFT), while the second particle-particle RPA/TDA recovers non-highest-occupied-molecular-orbital excitations missed by the particle-particle RPA/TDA. With proper orbital restrictions, these restricted second RPAs and TDAs have a formal scaling of only O(N^4). The restricted versions of second RPAs and TDAs are tested with various small molecules to show some positive results. Data suggest that the restricted second particle-hole TDA (r2ph-TDA) has the best overall performance with a correlation coefficient similar to TDDFT, but with a larger negative bias. The negative bias of the r2ph-TDA may be induced by the unaccounted ground state correlation energy to be investigated further. Overall, the r2ph-TDA is recommended to study systems with both single and some low-lying double excitations with a moderate accuracy. Some expressions on excited state property evaluations, such as ⟨Ŝ^2⟩, are also developed and tested.

19. Restricted second random phase approximations and Tamm-Dancoff approximations for electronic excitation energy calculations

Peng, Degao; Yang, Yang; Zhang, Peng; Yang, Weitao

2014-12-01

In this article, we develop systematically second random phase approximations (RPA) and Tamm-Dancoff approximations (TDA) of particle-hole and particle-particle channels for calculating molecular excitation energies. The second particle-hole RPA/TDA can capture double excitations missed by the particle-hole RPA/TDA and time-dependent density-functional theory (TDDFT), while the second particle-particle RPA/TDA recovers non-highest-occupied-molecular-orbital excitations missed by the particle-particle RPA/TDA. With proper orbital restrictions, these restricted second RPAs and TDAs have a formal scaling of only O(N^4). The restricted versions of second RPAs and TDAs are tested with various small molecules to show some positive results. Data suggest that the restricted second particle-hole TDA (r2ph-TDA) has the best overall performance with a correlation coefficient similar to TDDFT, but with a larger negative bias. The negative bias of the r2ph-TDA may be induced by the unaccounted ground state correlation energy to be investigated further. Overall, the r2ph-TDA is recommended to study systems with both single and some low-lying double excitations with a moderate accuracy. Some expressions on excited state property evaluations, such as ⟨Ŝ^2⟩, are also developed and tested.

20. Flux tube spectra from approximate integrability at low energies

SciTech Connect

Dubovsky, S.; Flauger, R.; Gorbenko, V.

2015-03-15

We provide a detailed introduction to a method we recently proposed for calculating the spectrum of excitations of effective strings such as QCD flux tubes. The method relies on the approximate integrability of the low-energy effective theory describing the flux tube excitations and is based on the thermodynamic Bethe ansatz. The approximate integrability is a consequence of the Lorentz symmetry of QCD. For excited states, the convergence of the thermodynamic Bethe ansatz technique is significantly better than that of the traditional perturbative approach. We apply the new technique to the lattice spectra for fundamental flux tubes in gluodynamics in D = 3 + 1 and D = 2 + 1, and to k-strings in gluodynamics in D = 2 + 1. We identify a massive pseudoscalar resonance on the worldsheet of the confining strings in SU(3) gluodynamics in D = 3 + 1, and massive scalar resonances on the worldsheet of k = 2, 3 strings in SU(6) gluodynamics in D = 2 + 1.

1. Approximate scaling properties of RNA free energy landscapes

NASA Technical Reports Server (NTRS)

1996-01-01

RNA free energy landscapes are analysed by means of "time series" that are obtained from random walks restricted to excursion sets. The power spectra, the scaling of the jump size distribution, and the scaling of the curve length measured with different yardstick lengths are used to describe the structure of these "time series". Although they are stationary by construction, we find that their local behavior is consistent with both AR(1) and self-affine processes. Random walks confined to excursion sets (i.e., with the restriction that the fitness value exceeds a certain threshold at each step) exhibit essentially the same statistics as free random walks. We find that an AR(1) time series is in general approximately self-affine on timescales up to approximately the correlation length. We present an empirical relation between the correlation parameter rho of the AR(1) model and the exponents characterizing self-affinity.
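
The AR(1) model referred to above is easy to simulate; a minimal sketch (function names ours) that generates such a series and recovers its correlation parameter rho from the lag-1 autocorrelation:

```python
import random

def ar1(rho, n, seed=0):
    """AR(1) series x[t] = rho * x[t-1] + Gaussian noise; its correlation
    time is roughly -1/ln(rho) steps."""
    rng = random.Random(seed)
    x, out = 0.0, []
    for _ in range(n):
        x = rho * x + rng.gauss(0.0, 1.0)
        out.append(x)
    return out

def lag1_autocorr(xs):
    """Empirical lag-1 autocorrelation; estimates rho for an AR(1) series."""
    m = sum(xs) / len(xs)
    num = sum((xs[i] - m) * (xs[i + 1] - m) for i in range(len(xs) - 1))
    den = sum((v - m) ** 2 for v in xs)
    return num / den

rho_hat = lag1_autocorr(ar1(0.9, 50000))
```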

2. Approximating ground and excited state energies on a quantum computer

2015-04-01

Approximating ground and a fixed number of excited state energies, or equivalently low-order Hamiltonian eigenvalues, is an important but computationally hard problem. Typically, the cost of classical deterministic algorithms grows exponentially with the number of degrees of freedom. Under general conditions, and using a perturbation approach, we provide a quantum algorithm that produces estimates of a constant number of different low-order eigenvalues. The algorithm relies on a set of trial eigenvectors, whose construction depends on the particular Hamiltonian properties. We illustrate our results by considering a special case of the time-independent Schrödinger equation with degrees of freedom. Our algorithm computes estimates of a constant number of different low-order eigenvalues with error and success probability at least , with cost polynomial in and . This extends our earlier results on algorithms for estimating the ground state energy. The technique we present is sufficiently general to apply to problems beyond the application studied in this paper.

3. Hadron Production in the Restricted Rapidity Intervals in Proton-Nucleus Interactions at High Energies

Data on 200 and 400 GeV proton interactions with nuclear emulsion have been analyzed. It is found that the multiplicity distributions of the shower particles in the restricted rapidity intervals are well described by the negative binomial distribution (NBD). The dependences of the NBD parameters on rapidity interval, energy and target size have been studied. The results have also been discussed in terms of Giovannini and Van Hove’s clan model of multiparticle production.

4. Excitation energies from extended random phase approximation employed with approximate one- and two-electron reduced density matrices

Chatterjee, Koushik; Pernal, Katarzyna

2012-11-01

Starting from Rowe's equation of motion we derive extended random phase approximation (ERPA) equations for excitation energies. The ERPA matrix elements are expressed in terms of the correlated ground state one- and two-electron reduced density matrices, 1- and 2-RDM, respectively. Three ways of obtaining approximate 2-RDM are considered: linearization of the ERPA equations, obtaining 2-RDM from density matrix functionals, and employing 2-RDM corresponding to an antisymmetrized product of strongly orthogonal geminals (APSG) ansatz. Applying the ERPA equations with the exact 2-RDM to a hydrogen molecule reveals that the resulting ^1Σ_g^+ excitation energies are not exact. A correction to the ERPA excitation operator involving some double excitations is proposed, leading to the ERPA2 approach, which employs the APSG one- and two-electron reduced density matrices. For two-electron systems ERPA2 satisfies a consistency condition and yields exact singlet excitations. It is shown that 2-RDM corresponding to the APSG theory employed in the ERPA2 equations yields excellent singlet excitation energies for Be and LiH systems, and for the N2 molecule the quality of the potential energy curves is at the coupled cluster singles and doubles level. ERPA2 nearly satisfies the consistency condition for small molecules, which partially explains its good performance.

5. Multimode approximation for {sup 238}U photofission at intermediate energies

SciTech Connect

Demekhina, N. A.; Karapetyan, G. S.

2008-01-15

The yields of products originating from {sup 238}U photofission are measured at the bremsstrahlung endpoint energies of 50 and 3500 MeV. Charge and mass distributions of fission fragments are obtained. Symmetric and asymmetric channels in {sup 238}U photofission are singled out on the basis of the model of multimode fission. This decomposition makes it possible to estimate the contributions of various fission components and to calculate the fissilities of {sup 238}U in the photon-energy regions under study.

6. Multimode approximation for {sup 238}U photofission at intermediate energies

SciTech Connect

Demekhina, N. A.; Karapetyan, G. S.

2008-01-15

The yields of products originating from {sup 238}U photofission are measured at the bremsstrahlung endpoint energies of 50 and 3500 MeV. Charge and mass distributions of fission fragments are obtained. Symmetric and asymmetric channels in {sup 238}U photofission are singled out on the basis of the model of multimode fission. This decomposition makes it possible to estimate the contributions of various fission components and to calculate the fissilities of {sup 238}U in the photon-energy regions under study.

7. Surface Segregation Energies of BCC Binaries from Ab Initio and Quantum Approximate Calculations

NASA Technical Reports Server (NTRS)

Good, Brian S.

2003-01-01

We compare dilute-limit segregation energies for selected BCC transition metal binaries computed using ab initio and quantum approximate energy methods. Ab initio calculations are carried out using the CASTEP plane-wave pseudopotential computer code, while quantum approximate results are computed using the Bozzolo-Ferrante-Smith (BFS) method with the most recent parameterization. Quantum approximate segregation energies are computed with and without atomistic relaxation. The ab initio calculations are performed without relaxation for the most part, but predicted relaxations from quantum approximate calculations are used in selected cases to compute approximate relaxed ab initio segregation energies. Results are discussed within the context of segregation models driven by strain and bond-breaking effects. We compare our results with other quantum approximate and ab initio theoretical work, and with available experimental results.

8. Dynamic Analyses of Result Quality in Energy-Aware Approximate Programs

Ringenburg, Michael F.

Energy efficiency is a key concern in the design of modern computer systems. One promising approach to energy-efficient computation, approximate computing, trades off output precision for energy efficiency. However, this tradeoff can have unexpected effects on computation quality. This thesis presents dynamic analysis tools to study, debug, and monitor the quality and energy efficiency of approximate computations. We propose three styles of tools: prototyping tools that allow developers to experiment with approximation in their applications, online tools that instrument code to determine the key sources of error, and online tools that monitor the quality of deployed applications in real time. Our prototyping tool is based on an extension to the functional language OCaml. We add approximation constructs to the language, an approximation simulator to the runtime, and profiling and auto-tuning tools for studying and experimenting with energy-quality tradeoffs. We also present two online debugging tools and three online monitoring tools. The first online tool identifies correlations between output quality and the total number of executions of, and errors in, individual approximate operations. The second tracks the number of approximate operations that flow into a particular value. Our online tools comprise three low-cost approaches to dynamic quality monitoring. They are designed to monitor quality in deployed applications without spending more energy than is saved by approximation. Online monitors can be used to perform real time adjustments to energy usage in order to meet specific quality goals. We present prototype implementations of all of these tools and describe their usage with several applications. Our prototyping, profiling, and autotuning tools allow us to experiment with approximation strategies and identify new strategies, our online tools succeed in providing new insights into the effects of approximation on output quality, and our monitors succeed in

9. Correlation energy for the homogeneous electron gas: Exact Bethe-Salpeter solution and an approximate evaluation

Maggio, Emanuele; Kresse, Georg

2016-06-01

The correlation energy of the homogeneous electron gas is evaluated by solving the Bethe-Salpeter equation (BSE) beyond the Tamm-Dancoff approximation for the electronic polarization propagator. The BSE is expected to improve on the random-phase approximation, owing to the inclusion of exchange diagrams. For instance, since the BSE reduces in second order to Møller-Plesset perturbation theory, it is self-interaction free in second order. Results for the correlation energy are compared with quantum Monte Carlo benchmarks and excellent agreement is observed. For low densities, however, we find imaginary eigenmodes in the polarization propagator. To avoid the occurrence of imaginary eigenmodes, an approximation to the BSE kernel is proposed that allows us to completely remove this issue in the low-electron-density region. We refer to this approximation as the random-phase approximation with screened exchange (RPAsX). We show that this approximation even slightly improves upon the standard BSE kernel.

10. Interval Data Analysis with the Energy Charting and Metrics Tool (ECAM)

SciTech Connect

Taasevigen, Danny J.; Katipamula, Srinivas; Koran, William

2011-07-07

Analyzing whole building interval data is an inexpensive but effective way to identify and improve building operations, and ultimately save money. Utilizing the Energy Charting and Metrics Tool (ECAM) add-in for Microsoft Excel, building operators and managers can begin implementing changes to their Building Automation System (BAS) after trending the interval data. The two data components needed for full analyses are whole building electricity consumption (kW or kWh) and outdoor air temperature (OAT). Using these two pieces of information, a series of plots and charts can be created in ECAM to monitor the building's performance over time, gain knowledge of how the building is operating, and make adjustments to the BAS to improve efficiency and start saving money.
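ECAM itself is an Excel add-in, but the core analysis it enables, relating whole-building load to outdoor air temperature, can be sketched in a few lines. The data values and bin width below are hypothetical:

```python
def oat_bin_profile(records, bin_width=5.0):
    """Average whole-building demand (kW) within outdoor-air-temperature
    bins: a text-mode stand-in for an ECAM load-vs-OAT scatter chart."""
    sums, counts = {}, {}
    for oat, kw in records:
        b = bin_width * (oat // bin_width)   # left edge of the OAT bin
        sums[b] = sums.get(b, 0.0) + kw
        counts[b] = counts.get(b, 0) + 1
    return {b: sums[b] / counts[b] for b in sums}

# Hypothetical 15-minute interval readings: (OAT in deg F, demand in kW).
data = [(52, 110.0), (54, 118.0), (61, 140.0), (63, 150.0), (64, 145.0)]
profile = oat_bin_profile(data, bin_width=5.0)   # {50.0: 114.0, 60.0: 145.0}
```

A rising load profile across OAT bins during unoccupied hours is the kind of signature an operator would then chase down in the BAS schedules.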

11. A pediatric correlational study of stride interval dynamics, energy expenditure and activity level.

PubMed

Ellis, Denine; Sejdic, Ervin; Zabjek, Karl; Chau, Tom

2014-08-01

The strength of time-dependent correlations known as stride interval (SI) dynamics has been proposed as an indicator of neurologically healthy gait. Most recently, it has been hypothesized that these dynamics may be necessary for gait efficiency although the supporting evidence to date is scant. The current study examines over-ground SI dynamics, and their relationship with the cost of walking and physical activity levels in neurologically healthy children aged nine to 15 years. Twenty participants completed a single experimental session consisting of three phases: 10 min resting, 15 min walking and 10 min recovery. The scaling exponent (α) was used to characterize SI dynamics while net energy cost was measured using a portable metabolic cart, and physical activity levels were determined based on a 7-day recall questionnaire. No significant linear relationships were found between α and the net energy cost measures (r < .07; p > .25) or between α and physical activity levels (r = .01, p = .62). However, there was a marked reduction in the variance of α as activity levels increased. Over-ground stride dynamics do not appear to directly reflect energy conservation of gait in neurologically healthy youth. However, the reduction in the variance of α with increasing physical activity suggests a potential exercise-moderated convergence toward a level of stride interval persistence for able-bodied youth reported in the literature. This latter finding warrants further investigation. PMID:24722770
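The abstract does not spell out how the scaling exponent α is computed; stride-interval studies typically use first-order detrended fluctuation analysis (DFA). The following is a generic DFA sketch, assuming evenly sampled stride intervals and a small set of window sizes; it is not the authors' exact pipeline.

```python
import numpy as np

def dfa_alpha(series, scales):
    """First-order detrended fluctuation analysis: returns the scaling
    exponent alpha of a 1-D series (0.5 = uncorrelated, ~1.0 = persistent)."""
    y = np.cumsum(np.asarray(series, dtype=float) - np.mean(series))
    flucts = []
    for n in scales:
        n_win = len(y) // n
        t = np.arange(n)
        ms = 0.0
        for i in range(n_win):
            seg = y[i * n:(i + 1) * n]
            coef = np.polyfit(t, seg, 1)           # linear detrend per window
            ms += np.mean((seg - np.polyval(coef, t)) ** 2)
        flucts.append(np.sqrt(ms / n_win))
    alpha, _ = np.polyfit(np.log(scales), np.log(flucts), 1)  # log-log slope
    return alpha

# Uncorrelated noise should give alpha near 0.5; healthy-gait stride
# intervals are typically reported with alpha closer to 1.
noise = np.random.RandomState(0).randn(2048)
alpha = dfa_alpha(noise, [8, 16, 32, 64, 128])
```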

12. Optimising sprint interval exercise to maximise energy expenditure and enjoyment in overweight boys.

PubMed

Crisp, Nicole A; Fournier, Paul A; Licari, Melissa K; Braham, Rebecca; Guelfi, Kym J

2012-12-01

The aim of this study was to identify the sprint frequency that when supplemented to continuous exercise at the intensity that maximises fat oxidation (Fat(max)), optimises energy expenditure, acute postexercise energy intake and enjoyment. Eleven overweight boys completed 30 min of either continuous cycling at Fat(max) (MOD), or sprint interval exercise that consisted of continuous cycling at Fat(max) interspersed with 4-s maximal sprints every 2 min (SI(120)), every 1 min (SI(60)), or every 30 s (SI(30)). Energy expenditure was assessed during exercise, after which participants completed a modified Physical Activity Enjoyment Scale (PACES) followed by a buffet-type breakfast to measure acute postexercise energy intake. Energy expenditure increased with increasing sprint frequency (p < 0.001), but the difference between SI(60) and SI(30) did not reach significance (p = 0.076), likely as a result of decreased sprint quality as indicated by a significant decline in peak power output from SI(60) to SI(30) (p = 0.034). Postexercise energy intake was similar for MOD, SI(120), and SI(30) (p > 0.05), but was significantly less for SI(60) compared with MOD (p = 0.025). PACES was similar for MOD, SI(120), and SI(60) (p > 0.05), but was less for SI(30) compared with MOD (p = 0.038), SI(120) (p = 0.009), and SI(60) (p = 0.052). In conclusion, SI(60) appears optimal for overweight boys given that it maximises energy expenditure (i.e., there was no additional increase in expenditure with a further increase in sprint frequency) without prompting increased energy intake. This, coupled with the fact that enjoyment was not compromised, may have important implications for increased adherence and long-term energy balance. PMID:23176528

13. Numerical integration for ab initio many-electron self energy calculations within the GW approximation

SciTech Connect

Liu, Fang; Lin, Lin; Vigil-Fowler, Derek; Lischner, Johannes; Kemper, Alexander F.; Sharifzadeh, Sahar; Jornada, Felipe H. da; Deslippe, Jack; Yang, Chao; and others

2015-04-01

We present a numerical integration scheme for evaluating the convolution of a Green's function with a screened Coulomb potential on the real axis in the GW approximation of the self energy. Our scheme takes the zero-broadening limit in the Green's function first, replaces the numerator of the integrand with a piecewise polynomial approximation, and performs principal value integration on subintervals analytically. We give the error bound of our numerical integration scheme and show by numerical examples that it is more reliable and accurate than standard quadrature rules such as the composite trapezoidal rule. We also discuss the benefit of using different self energy expressions to perform the numerical convolution at different frequencies.
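The paper's polynomial order and interfaces are not given here. As a minimal sketch of the idea, the following performs the principal value integration analytically on each subinterval, assuming a piecewise-linear numerator (the linear term integrates trivially and the singular part reduces to a logarithm):

```python
import math

def pv_integral_linear(xs, fs, x0):
    """Principal value of the integral of f(x)/(x - x0) over [xs[0], xs[-1]],
    with f given by piecewise-linear interpolation on the nodes xs.
    Each subinterval is integrated analytically, so the singularity at x0
    needs no special quadrature. Assumes x0 does not coincide with a node."""
    assert all(x != x0 for x in xs), "x0 must not coincide with a node"
    total = 0.0
    for a, b, fa, fb in zip(xs[:-1], xs[1:], fs[:-1], fs[1:]):
        m = (fb - fa) / (b - a)
        f_x0 = fa + m * (x0 - a)   # the linear piece evaluated (extended) at x0
        # integral over [a,b] of [f(x0) + m (x - x0)]/(x - x0) dx
        #   = m (b - a) + f(x0) * ln|(b - x0)/(a - x0)|   (PV if a < x0 < b)
        total += m * (b - a) + f_x0 * math.log(abs((b - x0) / (a - x0)))
    return total
```

For a constant numerator the log terms telescope to zero over a symmetric interval, and for f(x) = x the singularity cancels exactly, which makes the scheme easy to sanity-check.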

14. The performance of density functional approximations for the structures and relative energies of minimum energy crossing points

Abate, Bayileyegn A.; Peralta, Juan E.

2013-12-01

The structural parameters and relative energies of the minimum-energy crossing points (MECPs) of eight small molecules are calculated using five different representative density functional theory approximations as well as MP2, MP4, and CCSD(T) as a reference. Compared to high-level wavefunction methods, the main structural features of the MECPs of the systems included in this Letter are reproduced reasonably well by density functional approximations, in agreement with previous works. Our results show that when high-level wavefunction methods are computationally prohibitive, density functional approximations offer a good alternative for locating and characterizing the MECP in spin-forbidden chemical reactions.

15. Analysis of localized diabatic states beyond the condon approximation for excitation energy transfer processes.

PubMed

Alguire, Ethan C; Fatehi, Shervin; Shao, Yihan; Subotnik, Joseph E

2014-12-26

In a previous paper [Fatehi, S.; et al. J. Chem. Phys. 2013, 139, 124112], we demonstrated a practical method by which analytic derivative couplings of Boys-localized CIS states can be obtained. In this paper, we now apply that same method to the analysis of triplet-triplet energy transfer systems studied by Closs and collaborators [Closs, G. L.; et al. J. Am. Chem. Soc. 1988, 110, 2652]. For the systems examined, we are able to conclude that (i) the derivative coupling in the BoysOV basis is negligible, and (ii) the diabatic coupling will likely change little over the configuration space explored at room temperature. Furthermore, we propose and evaluate an approximation that allows for the inexpensive calculation of accurate diabatic energy gradients, called the "strictly diabatic" approximation. This work highlights the effectiveness of diabatic state analytic gradient theory in realistic systems and demonstrates that localized diabatic states can serve as an acceptable approximation to strictly diabatic states. PMID:24447246

16. Two-loop Bhabha scattering at high energy beyond leading power approximation

Penin, Alexander A.; Zerf, Nikolai

2016-09-01

We evaluate the two-loop O(m_e^2/s) contribution to the wide-angle high-energy electron-positron scattering in the double-logarithmic approximation. The origin and the general structure of the power-suppressed double logarithmic corrections are discussed in detail.

18. Equivalence of particle-particle random phase approximation correlation energy and ladder-coupled-cluster doubles.

PubMed

Peng, Degao; Steinmann, Stephan N; van Aggelen, Helen; Yang, Weitao

2013-09-14

The recent proposal to determine the (exact) correlation energy based on pairing matrix fluctuations by van Aggelen et al. ["Exchange-correlation energy from pairing matrix fluctuation and the particle-particle random phase approximation," preprint arXiv:1306.4957 (2013)] revived the interest in the simplest approximation along this path: the particle-particle random phase approximation (pp-RPA). In this paper, we present an analytical connection and numerical demonstrations of the equivalence of the correlation energy from pp-RPA and ladder-coupled-cluster doubles. These two theories reduce to identical algebraic matrix equations and correlation energy expressions. The numerical examples illustrate that the correlation energy missed by pp-RPA in comparison with coupled-cluster singles and doubles is largely canceled out when considering reaction energies. This theoretical connection will be beneficial to design density functionals with strong ties to coupled-cluster theories and to study molecular properties at the pp-RPA level relying on well established coupled cluster techniques. PMID:24050333

19. Dielectric Matrix Formulation of Correlation Energies in the Random Phase Approximation: Inclusion of Exchange Effects.

PubMed

Mussard, Bastien; Rocca, Dario; Jansen, Georg; Ángyán, János G

2016-05-10

Starting from the general expression for the ground state correlation energy in the adiabatic-connection fluctuation-dissipation theorem (ACFDT) framework, it is shown that the dielectric matrix formulation, which is usually applied to calculate the direct random phase approximation (dRPA) correlation energy, can be used for alternative RPA expressions including exchange effects. Within this framework, the ACFDT analog of the second order screened exchange (SOSEX) approximation leads to a logarithmic formula for the correlation energy similar to the direct RPA expression. Alternatively, the contribution of the exchange can be included in the kernel used to evaluate the response functions. In this case, the use of an approximate kernel is crucial to simplify the formalism and to obtain a correlation energy in logarithmic form. Technical details of the implementation of these methods are discussed, and it is shown that one can take advantage of density fitting or Cholesky decomposition techniques to improve the computational efficiency; a discussion on the numerical quadrature made on the frequency variable is also provided. A series of test calculations on atomic correlation energies and molecular reaction energies shows that exchange effects are instrumental for improvement over direct RPA results. PMID:26986444

20. A Different View of Solar Spectral Irradiance Variations: Modeling Total Energy over Six-Month Intervals

Woods, Thomas N.; Snow, Martin; Harder, Jerald; Chapman, Gary; Cookson, Angela

2015-10-01

A different approach to studying solar spectral irradiance (SSI) variations, without the need for long-term (multi-year) instrument degradation corrections, is examining the total energy of the irradiance variation during 6-month periods. This duration is selected because a solar active region typically appears suddenly and then takes 5 to 7 months to decay and disperse back into the quiet-Sun network. The solar outburst energy, which is defined as the irradiance integrated over the 6-month period and thus includes the energy from all phases of active region evolution, could be considered the primary cause for the irradiance variations. Because solar cycle variation is the consequence of multiple active region outbursts, understanding the energy spectral variation may provide a reasonable estimate of the variations for the 11-year solar activity cycle. The moderate-term (6-month) variations from the Solar Radiation and Climate Experiment (SORCE) instruments can be decomposed into positive (in-phase with solar cycle) and negative (out-of-phase) contributions by modeling the variations using the San Fernando Observatory (SFO) facular excess and sunspot deficit proxies, respectively. These excess and deficit variations are fit over 6-month intervals every 2 months over the mission, and these fitted variations are then integrated over time for the 6-month energy. The dominant component indicates which wavelengths are in-phase and which are out-of-phase with solar activity. The results from this study indicate out-of-phase variations for the 1400 - 1600 nm range, with all other wavelengths having in-phase variations.
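The fitting-and-integration step described above can be sketched with synthetic data; the proxy shapes and the coefficients 0.8 and -0.3 below are invented for illustration and do not come from SORCE or SFO:

```python
import numpy as np

# Synthetic daily data standing in for SSI variation and the two SFO proxies.
days = np.arange(182)                                  # ~6-month window
facular = 1.0 + 0.5 * np.sin(2 * np.pi * days / 27.0)  # facular-excess proxy
sunspot = np.exp(-days / 60.0)                         # sunspot-deficit proxy
variation = 0.8 * facular - 0.3 * sunspot              # "observed" variation

# Least-squares fit of the two proxy coefficients, as in the 6-month fits.
A = np.column_stack([facular, sunspot])
(c_fac, c_spot), *_ = np.linalg.lstsq(A, variation, rcond=None)

# Integrate each fitted component over the window (daily sampling, dx = 1 day)
# to get the 6-month "energy"; the sign of the total indicates whether this
# wavelength varies in phase (+) or out of phase (-) with solar activity.
energy_fac = c_fac * facular.sum()
energy_spot = c_spot * sunspot.sum()
in_phase = (energy_fac + energy_spot) > 0
```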

1. Exchange-correlation energy from pairing matrix fluctuation and the particle-particle random phase approximation

SciTech Connect

Aggelen, Helen van; Yang, Yang; Yang, Weitao

2014-05-14

Despite their unmatched success for many applications, commonly used local, semi-local, and hybrid density functionals still face challenges when it comes to describing long-range interactions, static correlation, and electron delocalization. Density functionals of both the occupied and virtual orbitals are able to address these problems. The particle-hole (ph-) Random Phase Approximation (RPA), a functional of occupied and virtual orbitals, has recently known a revival within the density functional theory community. Following up on an idea introduced in our recent communication [H. van Aggelen, Y. Yang, and W. Yang, Phys. Rev. A 88, 030501 (2013)], we formulate more general adiabatic connections for the correlation energy in terms of pairing matrix fluctuations described by the particle-particle (pp-) propagator. With numerical examples of the pp-RPA, the lowest-order approximation to the pp-propagator, we illustrate the potential of density functional approximations based on pairing matrix fluctuations. The pp-RPA is size-extensive, self-interaction free, fully anti-symmetric, describes the strong static correlation limit in H2, and eliminates delocalization errors in H2+ and other single-bond systems. It gives surprisingly good non-bonded interaction energies – competitive with the ph-RPA – with the correct R^-6 asymptotic decay as a function of the separation R, which we argue is mainly attributable to its correct second-order energy term. While the pp-RPA tends to underestimate absolute correlation energies, it gives good relative energies: much better atomization energies than the ph-RPA, as it has no tendency to underbind, and reaction energies of similar quality. The adiabatic connection in terms of pairing matrix fluctuation paves the way for promising new density functional approximations.

2. Energy intake over 2 days is unaffected by acute sprint interval exercise despite increased appetite and energy expenditure.

PubMed

Beaulieu, Kristine; Olver, T Dylan; Abbott, Kolten C; Lemon, Peter W R

2015-01-01

A cumulative effect of reduced energy intake, increased oxygen consumption, and/or increased lipid oxidation could explain the fat loss associated with sprint interval exercise training (SIT). This study assessed the effects of acute sprint interval exercise (SIE) on energy intake, subjective appetite, appetite-related peptides, oxygen consumption, and respiratory exchange ratio over 2 days. Eight men (25 ± 3 years, 79.6 ± 9.7 kg, body fat 13% ± 6%; mean ± SD) completed 2 experimental treatments: SIE and recovery (SIEx) and nonexercise control. Each 34-h treatment consisted of 2 consecutive 10-h test days. Between 0800-1800 h, participants remained in the laboratory for 8 breath-by-breath gas collections, 3 buffet-type meals, 14 appetite ratings, and 4 blood samples for appetite-related peptides. Treatment comparisons were made using 2-way repeated measures ANOVA or t tests. An immediate, albeit short-lived (<1 h), postexercise suppression of appetite and increase in peptide YY (PYY) were observed (P < 0.001). However, overall hunger and motivation to eat were greater during SIEx (P < 0.02) without affecting energy intake. Total 34-h oxygen consumption was greater during SIEx (P = 0.04), elicited by the 1491-kJ (22%) greater energy expenditure over the first 24 h (P = 0.01). Despite its effects on oxygen consumption, appetite, and PYY, acute SIE did not affect energy intake. Consequently, if these dietary responses to SIE are sustained with regular SIT, augmentations in oxygen consumption and/or a substrate shift toward increased fat use postexercise are most likely responsible for the observed body fat loss with this type of exercise training. PMID:25494974

3. Slope-dependent nuclear-symmetry energy within the effective-surface approximation

Blocki, J. P.; Magner, A. G.; Ring, P.

2015-12-01

The effective-surface approximation is extended taking into account derivatives of the symmetry-energy density per particle with respect to the mean particle density. The isoscalar and isovector particle densities in this extended effective-surface approximation are derived. The improved expressions of the surface symmetry energy, in particular, its surface tension coefficients in the sharp-edged proton-neutron asymmetric nuclei take into account important gradient terms of the energy density functional. For most Skyrme forces the surface symmetry-energy constants and the corresponding neutron skins and isovector stiffnesses are calculated as functions of the Swiatecki derivative of the nongradient term of the symmetry-energy density per particle with respect to the isoscalar density. Using the analytical isovector surface-energy constants in the framework of the Fermi-liquid droplet model we find energies and sum rules of the isovector giant dipole-resonance structure in a reasonable agreement with the experimental data, and they are compared with other theoretical approaches.

4. Molecular tests of the random phase approximation to the exchange-correlation energy functional

Furche, Filipp

2001-11-01

The exchange-correlation energy functional within the random phase approximation (RPA) is recast into an explicitly orbital-dependent form. A method to evaluate the functional in finite basis sets is introduced. The basis set dependence of the RPA correlation energy is analyzed. Extrapolation using large, correlation-consistent basis sets is essential for accurate estimates of RPA correlation energies. The potential energy curve of N2 is discussed. The RPA is found to recover most of the strong static correlation at large bond distance. Atomization energies of main-group molecules are rather uniformly underestimated by the RPA. The method performs better than generalized-gradient-type approximations (GGA's) only for some electron-rich systems. However, the RPA functional is free of error cancellation between exchange and correlation, and behaves qualitatively correct in the high-density limit, as is demonstrated by the coupling strength decomposition of the atomization energy of F2. The GGA short-range correlation correction to the RPA by Yan, Perdew, and Kurth [Phys. Rev. B 61, 16430 (2000)] does not seem to improve atomization energies consistently.

5. Nucleation theory - Is replacement free energy needed?. [error analysis of capillary approximation

NASA Technical Reports Server (NTRS)

Doremus, R. H.

1982-01-01

It has been suggested that the classical theory of nucleation of liquid from its vapor as developed by Volmer and Weber (1926) needs modification with a factor referred to as the replacement free energy and that the capillary approximation underlying the classical theory is in error. Here, the classical nucleation equation is derived from fluctuation theory, Gibbs' result for the reversible work to form a critical nucleus, and the rate of collision of gas molecules with a surface. The capillary approximation is not used in the derivation. The chemical potential of small drops is then considered, and it is shown that the capillary approximation can be derived from thermodynamic equations. The results show that no corrections to Volmer's equation are needed.

6. An evaluation of energy-independent heavy ion transport coefficient approximations.

PubMed

Townsend, L W; Wilson, J W

1988-04-01

Using a one-dimensional transport theory for laboratory heavy ion propagation, evaluations of typical energy-independent transport coefficient approximations are made by comparing theoretical depth-dose predictions to published experimental values for incident 670 MeV/nucleon 20Ne beams in water. Results are presented for cases where the input nuclear absorption cross sections, or input fragmentation parameters, or both, are fixed. PMID:3350661

7. Quantitative molecular orbital energies within a G0W0 approximation

Sharifzadeh, S.; Tamblyn, I.; Doak, P.; Darancet, P. T.; Neaton, J. B.

2012-09-01

Using many-body perturbation theory within a G0W0 approximation, with a plane wave basis set and using a starting point based on density functional theory within the generalized gradient approximation, we explore routes for computing the ionization potential (IP), electron affinity (EA), and fundamental gap of three gas-phase molecules — benzene, thiophene, and (1,4) diamino-benzene — and compare with experiments. We examine the dependence of the IP and fundamental gap on the number of unoccupied states used to represent the dielectric function and the self energy, as well as the dielectric function plane-wave cutoff. We find that with an effective completion strategy for approximating the unoccupied subspace, and a well converged dielectric function kinetic energy cutoff, the computed IPs and EAs are in excellent quantitative agreement with available experiment (within 0.2 eV), indicating that a one-shot G0W0 approach can be very accurate for calculating addition/removal energies of small organic molecules.

8. Introducing electron capture into the unitary-convolution-approximation energy-loss theory at low velocities

SciTech Connect

Schiwietz, G.; Grande, P. L.

2011-11-15

Recent developments in the theoretical treatment of electronic energy losses of bare and screened ions in gases are presented. Specifically, the unitary-convolution-approximation (UCA) stopping-power model has proven its strengths for the determination of nonequilibrium effects for light as well as heavy projectiles at intermediate to high projectile velocities. The focus of this contribution will be on the UCA and its extension to specific projectile energies far below 100 keV/u, by considering electron-capture contributions at charge-equilibrium conditions.

9. Numerical calculation of cosmic ray ionization rate profiles in the middle atmosphere and lower ionosphere with relation to characteristic energy intervals

Velinov, Peter; Asenovski, Simeon; Mateev, Lachezar

2013-04-01

Numerical calculations of galactic cosmic ray (GCR) ionization rate profiles are presented for the middle atmosphere and lower ionosphere altitudes (35-90 km) for the full GCR composition (protons, alpha particles, and groups of heavier nuclei: light L, medium M, heavy H, very heavy VH). This investigation is based on a model developed by Velinov et al. (1974) and Velinov and Mateev (2008), which is further improved in the present paper. Analytical expressions for energy interval contributions are provided. An approximation of the ionization function on three energy intervals is used and for the first time the charge decrease interval for electron capturing (Dorman 2004) is investigated quantitatively. Development in this field of research is important for better understanding the impact of space weather on the atmosphere. GCRs influence the ionization and electric parameters in the atmosphere and also the chemical processes (ozone creation and depletion in the stratosphere) in it. The model results show good agreement with experimental data (Brasseur and Solomon 1986, Rosenberg and Lanzerotti 1979, Van Allen 1952).

10. Correlation matrix renormalization approximation for total-energy calculations of correlated electron systems

SciTech Connect

Yao, Y. X.; Liu, Jun; Wang, Cai-Zhuang; Ho, Kai-Ming

2014-01-23

We generalized the commonly used Gutzwiller approximation for calculating the electronic structure and total energy of strongly correlated electron systems. In our method, the evaluation of one-body and two-body density matrix elements of the Hamiltonian is simplified using a renormalization approximation to achieve better scaling of the computational effort as a function of system size. To achieve a clear presentation of the concept and methodology, we describe the detailed formalism for a finite hydrogen system with minimal basis set. We applied the correlation matrix renormalization approximation approach to a H2 dimer and H8 cubic fragment with minimal basis sets, as well as a H2 molecule with a large basis set. The results compare favorably with sophisticated quantum chemical calculations. We believe our approach can serve as an alternative way to build up the exchange-correlation energy functional for an improved density functional theory description of systems with strong electron correlations.

11. Eikonal approximation in the theory of energy loss by fast charged particles

Matveev, V. I.; Makarov, D. N.; Gusarevich, E. S.

2011-05-01

Energy losses of fast charged particles in collisions with atoms are considered in the eikonal approximation. It is shown that the nonperturbative contribution to effective stopping in the range of intermediate impact parameters (comparable with the characteristic sizes of the electron shells of the target atoms) may turn out to be significant as compared to shell corrections to the Bethe-Bloch formula calculated in perturbation theory. The simplifying assumptions are formulated under which the Bethe-Bloch formula can be derived in the eikonal approximation. It is shown that allowance for nonperturbative effects may lead to considerable (up to 50%) corrections to the Bethe-Bloch formula. The applicability range of the Bethe-Bloch formula is analyzed. It is concluded that calculating the energy loss in the eikonal approximation (in the range of impact parameters for which the Bethe-Bloch formula is normally used) is much more advantageous than analysis based on the Bethe-Bloch formula and its modifications, because it not only includes the Bloch correction but also treats the range of intermediate impact parameters nonperturbatively; in addition, it generalizes directly to collisions of complex projectiles and targets.

12. Low-energy extensions of the eikonal approximation to heavy-ion scattering

SciTech Connect

Aguiar, C.E.; Aguiar, C.E.; Zardi, F.; Vitturi, A.

1997-09-01

We discuss different schemes devised to extend the eikonal approximation to the regime of low bombarding energies (below 50 MeV per nucleon) in heavy-ion collisions. From one side we consider the first- and second-order corrections derived from Wallace's expansion. As an alternative approach we examine the procedure of accounting for the distortion of the eikonal straight-line trajectory by shifting the impact parameter to the corresponding classical turning point. The two methods are tested for different combinations of colliding systems and bombarding energies, by comparing the angular distributions they provide with the exact solution of the scattering problem. We find that the best results are obtained with the shifted trajectories, the Wallace expansion showing a slow convergence at low energies, in particular for heavy systems characterized by a strong Coulomb field. © 1997 The American Physical Society.

13. Proximity force approximation for the Casimir energy as a derivative expansion

Fosco, César D.; Lombardo, Fernando C.; Mazzitelli, Francisco D.

2011-11-01

The proximity force approximation (PFA) has been widely used as a tool to evaluate the Casimir force between smooth objects at small distances. In spite of being intuitively easy to grasp, it is generally believed to be an uncontrolled approximation. Indeed, its validity has only been tested in particular examples, by confronting its predictions with the next-to-leading-order (NTLO) correction extracted from numerical or analytical solutions obtained without using the PFA. In this article we show that the PFA and its NTLO correction may be derived within a single framework, as the first two terms in a derivative expansion. To that effect, we consider the Casimir energy for a vacuum scalar field with Dirichlet conditions on a smooth curved surface described by a function ψ in front of a plane. By regarding the Casimir energy as a functional of ψ, we show that the PFA is the leading term in a derivative expansion of this functional. We also obtain the general form of the corresponding NTLO correction, which involves two derivatives of ψ. We show, by evaluating this correction term for particular geometries, that it properly reproduces the known corrections to PFA obtained from exact evaluations of the energy.
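As a concrete illustration of the functional described above, the leading (PFA) term and the schematic form of its NTLO correction can be written as follows. The parallel-plate energy density E_pp is the standard Dirichlet-scalar result; the NTLO coefficient β depends on the field and boundary conditions and is left symbolic here.

```latex
% Derivative expansion of the Casimir energy functional E[\psi]:
% leading PFA term plus a next-to-leading-order gradient correction.
E[\psi] \;\approx\; \int d^{2}x_{\parallel}\,
  E_{\mathrm{pp}}\bigl(\psi(x_{\parallel})\bigr)
  \Bigl[\, 1 + \beta\,\bigl(\nabla\psi\bigr)^{2} \Bigr],
\qquad
E_{\mathrm{pp}}(a) \;=\; -\frac{\pi^{2}\hbar c}{1440\,a^{3}} .
```

Setting β = 0 recovers the PFA exactly, which is why the abstract describes the PFA as the leading term of a controlled expansion rather than an uncontrolled approximation.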

14. Energy transfer in structured and unstructured environments: Master equations beyond the Born-Markov approximations.

PubMed

Iles-Smith, Jake; Dijkstra, Arend G; Lambert, Neill; Nazir, Ahsan

2016-01-28

We explore excitonic energy transfer dynamics in a molecular dimer system coupled to both structured and unstructured oscillator environments. By extending the reaction coordinate master equation technique developed by Iles-Smith et al. [Phys. Rev. A 90, 032114 (2014)], we go beyond the commonly used Born-Markov approximations to incorporate system-environment correlations and the resultant non-Markovian dynamical effects. We obtain energy transfer dynamics for both underdamped and overdamped oscillator environments that are in perfect agreement with the numerical hierarchical equations of motion over a wide range of parameters. Furthermore, we show that the Zusman equations, which may be obtained in a semiclassical limit of the reaction coordinate model, are often incapable of describing the correct dynamical behaviour. This demonstrates the necessity of properly accounting for quantum correlations generated between the system and its environment when the Born-Markov approximations no longer hold. Finally, we apply the reaction coordinate formalism to the case of a structured environment comprising both underdamped (i.e., sharply peaked) and overdamped (broad) components simultaneously. We find that though an enhancement of the dimer energy transfer rate can be obtained when compared to an unstructured environment, its magnitude is rather sensitive to both the dimer-peak resonance conditions and the relative strengths of the underdamped and overdamped contributions. PMID:26827205

15. An evaluation of energy-independent heavy ion transport coefficient approximations

NASA Technical Reports Server (NTRS)

Townsend, L. W.; Wilson, J. W.

1988-01-01

Utilizing a one-dimensional transport theory for heavy ion propagation, evaluations of typical energy-independent transport coefficient approximations are made by comparing theoretical depth-dose predictions to published experimental values for incident 670 MeV/nucleon Ne-20 beams in water. Results are presented for cases where the input nuclear absorption cross sections, or input fragmentation parameters, or both, are fixed. The lack of fragment charge and mass conservation resulting from the use of Silberberg-Tsao fragmentation parameters continues to be the main source of disagreement between theory and experiment.

16. The appearance of an interval of energies that contain the whole diamagnetic contribution to NMR magnetic shieldings.

PubMed

2007-10-21

Working within the relativistic polarization propagator approach, it was shown in a previous article that the electronic origin of the diamagnetic contribution to NMR nuclear magnetic shielding, σd, is mostly excitations that fit in a well defined interval of energies, 2mc² < ΔE < 4mc². This interval of energies does not have, in principle, any physical reason to be so well defined, and gives a large amount of the total contribution to σd, e.g., close to 98% of it. A further study is therefore given in this article, where we show some of the main characteristics of that interval of energies, such as its universal appearance and basis set independence. Our main result is the finding that σd is completely described by that interval of excitation energies, i.e., there is no contribution arising from outside of it. Most of the contributions belonging to that interval arise from virtual electronic energies larger than −3mc². For heavier atoms, there are a few contributions from states with virtual negative energies smaller than −3mc². The model systems under study were the noble gases, XH (X = Br, I, and At), XH₂ (X = O, S, Se, Te, and Po), XH₃ (X = N, P, As, Sb, and Bi), XH₄ (X = Sn and Pb), and SnXH₃ (X = Br and I). The pattern of contributions of occupied molecular orbitals (MOs) is also shown: the 1s1/2 MO is the most important for excitations ending in the bottom half of the above-mentioned interval, whereas the contributions of the other occupied MOs are more important than that of 1s1/2 in the remainder of the interval. We also show that σd is electron correlation independent within both the relativistic and nonrelativistic domains. In the case of σp, we find a clear dependence of electron correlation effects on relativistic effects, which is of the order of 30% for Pb in PbH₄. PMID:17949140

17. Proposal for determining the energy content of gravitational waves by using approximate symmetries of differential equations

SciTech Connect

Hussain, Ibrar; Qadir, Asghar; Mahomed, F. M.

2009-06-15

Since gravitational wave spacetimes are time-varying vacuum solutions of Einstein's field equations, there is no unambiguous means to define their energy content. However, Weber and Wheeler had demonstrated that they do impart energy to test particles. There have been various proposals to define the energy content, but they have not met with great success. Here we propose a definition using 'slightly broken' Noether symmetries. We check whether this definition is physically acceptable. The procedure adopted is to appeal to 'approximate symmetries' as defined in Lie analysis and use them in the limit of the exact symmetry holding. A problem is noted with the use of the proposal for plane-fronted gravitational waves. To attain a better understanding of the implications of this proposal we also use an artificially constructed time-varying nonvacuum metric and evaluate its Weyl and stress-energy tensors so as to obtain the gravitational and matter components separately and compare them with the energy content obtained by our proposal. The procedure is also used for cylindrical gravitational wave solutions. The usefulness of the definition is demonstrated by the fact that it leads to a result on whether gravitational waves suffer self-damping.

18. Low-energy parameters of neutron-neutron interaction in the effective-range approximation

SciTech Connect

Babenko, V. A.; Petrov, N. M.

2013-06-15

The effect of the mass difference between the charged and neutral pions on the low-energy parameters of nucleon-nucleon interaction in the ¹S₀ state is studied in the effective-range approximation. On the basis of experimental values of the singlet parameters of neutron-proton scattering and the experimental value of the virtual-state energy for the neutron-neutron system in the ¹S₀ state, the following values were obtained for the neutron-neutron scattering length and effective range: aₙₙ = −16.59(117) fm and rₙₙ = 2.83(11) fm. The calculated values agree well with present-day experimental results.
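As an illustration of the effective-range machinery behind these numbers (a sketch, not the paper's calculation; the pole condition and physical constants below are standard textbook values), the virtual-state energy implied by the quoted aₙₙ and rₙₙ can be recovered from the expansion k·cot δ₀ = −1/a + (r/2)k² by locating the S-matrix pole at k = −iκ:

```python
import math

# Illustrative sketch: recover the 1S0 virtual-state energy of the nn system
# from the quoted effective-range parameters.
# Effective-range expansion: k*cot(delta0) = -1/a + (r/2)*k^2.
# A virtual (antibound) state is an S-matrix pole at k = -i*kappa (kappa > 0),
# which gives kappa = (-1 + sqrt(1 - 2*r/a)) / r.

HBARC = 197.327      # hbar*c in MeV fm
M_N = 939.565        # neutron rest energy in MeV

def virtual_state_energy(a_fm, r_fm):
    """Virtual-state energy (MeV) from scattering length a and effective range r."""
    kappa = (-1.0 + math.sqrt(1.0 - 2.0 * r_fm / a_fm)) / r_fm   # fm^-1
    # E = hbar^2 kappa^2 / (2 mu) with mu = m_n/2  ->  E = (hbar*c*kappa)^2 / m_n
    return (HBARC * kappa) ** 2 / M_N

print(virtual_state_energy(-16.59, 2.83))  # ~0.13 MeV
```

The result, on the order of 0.1 MeV, is the kind of virtual-state energy used as input in such analyses.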

19. Lattice energies of molecular solids from the random phase approximation with singles corrections.

PubMed

Klimeš, Jiří

2016-09-01

We use the random phase approximation (RPA) method with the singles correlation energy contributions to calculate lattice energies of ten molecular solids. While RPA gives too weak binding, underestimating the reference data by 13.7% on average, much improved results are obtained when the singles are included at the GW singles excitations (GWSE) level, with average absolute difference to the reference data of only 3.7%. Consistently with previous results, we find a very good agreement with the reference data for hydrogen bonded systems, while the binding is too weak for systems where dispersion forces dominate. In fact, the overall accuracy of the RPA+GWSE method is similar to an estimated accuracy of the reference data. PMID:27609003

20. Electron energy spectrum in cylindrical quantum dots and rods: approximation of separation of variables

Nedzinskas, R.; Karpus, V.; Čechavičius, B.; Kavaliauskas, J.; Valušis, G.

2015-06-01

A simple analytical method for electron energy spectrum calculations of cylindrical quantum dots (QDs) and quantum rods (QRs) is presented. The method is based on a replacement of the actual QD or QR hamiltonian with an approximate one, which allows for a separation of variables. Though this approach is known in the literature, it is substantially extended in the present paper by taking into account the discontinuity of the effective mass, which is of importance in actual semiconductor heterostructures, e.g., InGaAs QDs or QRs embedded in a GaAs matrix. Several examples of InGaAs QDs and QRs are considered; their energy spectrum calculations show that the suggested method yields reliable results both for the ground and excited states. The proposed analytical model is verified by numerical calculations, which agree to within ∼1 meV.
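To make the separable structure concrete, the following sketch evaluates the spectrum in the simplest hard-wall limit (an assumption for illustration only; the paper's method keeps finite barriers and the effective-mass step, and the radius, height, and effective mass below are hypothetical):

```python
import math

# Hard-wall cylinder with separated variables:
#   E(m, n, l) = (hbar^2 / 2 m*) * [ (j_{m,n}/R)^2 + (l*pi/L)^2 ],
# where j_{m,n} is the n-th zero of the Bessel function J_m.
HBAR2_OVER_2M0 = 0.0380998   # hbar^2/(2 m0) in eV nm^2

# First zeros j_{m,n} of J_m, hard-coded to avoid a scipy dependency
BESSEL_ZEROS = {(0, 1): 2.404826, (1, 1): 3.831706, (0, 2): 5.520078}

def cylinder_level(R_nm, L_nm, m_eff, m=0, n=1, l=1):
    """Energy (eV) of state (m, n, l) in an infinite-barrier cylinder."""
    j = BESSEL_ZEROS[(m, n)]
    radial = (j / R_nm) ** 2            # in-plane confinement
    axial = (l * math.pi / L_nm) ** 2   # confinement along the axis
    return HBAR2_OVER_2M0 / m_eff * (radial + axial)

# Ground and first angularly excited state for an illustrative dot
e1 = cylinder_level(8.0, 5.0, 0.05)          # (m=0, n=1, l=1)
e2 = cylinder_level(8.0, 5.0, 0.05, m=1)     # (m=1, n=1, l=1)
print(e1, e2)
```

The separability is what makes the level ordering and degeneracy pattern transparent; the finite-barrier version replaces the hard-wall zeros with roots of a transcendental matching condition.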

1. Resonant Interaction, Approximate Symmetry, and Electromagnetic Interaction (EMI) in Low Energy Nuclear Reactions (LENR)

Chubb, Scott

2007-03-01

Only recently (talk by P.A. Mosier-Boss et al, in this session) has it become possible to trigger high energy particle emission and Excess Heat, on demand, in LENR involving PdD. Also, most nuclear physicists are bothered by the fact that the dominant reaction appears to be related to the least common deuteron (d) fusion reaction, d + d → α + γ. A clear consensus about the underlying effect has also been elusive. One reason for this involves confusion about the approximate (SU2) symmetry: the fact that all d-d fusion reactions conserve isospin has been widely assumed to mean the dynamics is driven by the strong force interaction (SFI), NOT EMI. Thus, most nuclear physicists assume: 1. EMI is static; 2. Dominant reactions have the smallest changes in incident kinetic energy (T); and (because of 2), d + d → α + γ is suppressed. But this assumes a stronger form of SU2 symmetry than is present; d + d → α + γ reactions are suppressed not because of large changes in T but because the interaction potential involves EMI and is dynamic (not static), the SFI is static, and the two incident deuterons must have approximate Bose exchange symmetry and vanishing spin. A generalization of this idea involves a resonant form of reaction, similar to the de-excitation of an atom. These and related (broken gauge) symmetry EMI effects on LENR are discussed.

2. Development of approximate method to analyze the characteristics of latent heat thermal energy storage system

SciTech Connect

Saitoh, T.S.; Hoshi, Akira

1999-07-01

The Third Conference of the Parties to the U.N. Framework Convention on Climate Change (COP3), held last December in Kyoto, urged the industrialized nations to reduce carbon dioxide (CO₂) emissions by 5.2 percent (on average) below the 1990 level during the period between 2008 and 2012 (the Kyoto Protocol). This implies that even for the most advanced countries, such as the US, Japan, and the EU, drastic policies and the overcoming of many market barriers will be necessary. One idea which leads to a path of low carbon intensity is to adopt an energy storage concept. One of the reasons that the efficiency of conventional energy systems has been relatively low is the lack of an energy storage subsystem. Most past energy systems, for example air-conditioning systems, do not have an energy storage component and usually operate with low energy efficiency. First, the effect of reducing CO₂ emissions was examined for the case in which latent heat thermal energy storage (LHTES) subsystems were incorporated into all residential and building air-conditioning systems. Another field of application of the LHTES is, of course, transportation. Future vehicles will be electric or hybrid; however, these vehicles will need considerable energy for air-conditioning. The LHTES system can provide enough energy for this purpose by storing nighttime electricity or heat rejected from the radiator or motor. Melting and solidification of a phase change material (PCM) in a capsule is of practical importance in LHTES systems, which are considered very promising for reducing the peak demand of electricity in the summer season and also reducing CO₂ emissions. Two melting modes are involved in melting in capsules: one is close-contact melting between the solid bulk and the capsule wall, and the other is natural convection melting in the liquid (melt) region. Close-contact melting processes for a single enclosure have been solved using several

3. An approximate model and empirical energy function for solute interactions with a water-phosphatidylcholine interface.

PubMed Central

Sanders, C R; Schwonek, J P

1993-01-01

An empirical model of a liquid crystalline (L alpha phase) phosphatidylcholine (PC) bilayer interface is presented along with a function which calculates the position-dependent energy of associated solutes. The model approximates the interface as a gradual two-step transition, the first step being from an aqueous phase to a phase of reduced polarity, but which maintains a high enough concentration of water and/or polar head group moieties to satisfy the hydrogen bond-forming potential of the solute. The second transition is from the hydrogen bonding/low polarity region to an effectively anhydrous hydrocarbon phase. The "interfacial energies" of solutes within this variable medium are calculated based upon atomic positions and atomic parameters describing general polarity and hydrogen bond donor/acceptor propensities. This function was tested for its ability to reproduce experimental water-solvent partitioning energies and water-bilayer partitioning data. In both cases, the experimental data was reproduced fairly well. Energy minimizations carried out on beta-hexyl glucopyranoside led to identification of a global minimum for the interface-associated glycolipid which exhibited glycosidic torsion angles in agreement with prior results (Hare, B.J., K.P. Howard, and J.H. Prestegard. 1993. Biophys. J. 64:392-398). Molecular dynamics simulations carried out upon this same molecule within the simulated interface led to results which were consistent with a number of experimentally based conclusions from previous work, but failed to quantitatively reproduce an available NMR quadrupolar/dipolar coupling data set (Sanders, C.R., and J.H. Prestegard. 1991. J. Am. Chem. Soc. 113:1987-1996). The proposed model and functions are readily incorporated into computational energy modeling algorithms and may prove useful in future studies of membrane-associated molecules. PMID:8241401

4. A method for establishing absolute full-energy peak efficiency and its confidence interval for HPGe detectors

Rizwan, U.; Chester, A.; Domingo, T.; Starosta, K.; Williams, J.; Voss, P.

2015-12-01

A method is proposed for establishing the absolute efficiency calibration of an HPGe detector, including the confidence interval, in the energy range of 79.6-3451.2 keV. The calibrations were accomplished with ¹³³Ba, ⁶⁰Co, ⁵⁶Co, and ¹⁵²Eu point-like radioactive sources, with only the ⁶⁰Co source being activity calibrated, to an accuracy of 2% at the 90% confidence level. All data sets measured from activity-calibrated and uncalibrated sources were fit simultaneously using the linearized least squares method. The proposed fit function accounts for scaling of the data taken with activity-uncalibrated sources to the data taken with the high-accuracy activity-calibrated source. The confidence interval for the fit was found analytically using the covariance matrix. The accuracy of the fit was better than 3.5% at the 90% confidence level in the 79.6-3451.2 keV energy range.
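The covariance-based confidence band can be sketched as follows on synthetic data (the polynomial model, energies, and uncertainties below are illustrative assumptions; the actual calibration fits several sources simultaneously with per-source scale factors):

```python
import numpy as np

# Sketch: fit ln(efficiency) as a polynomial in ln(E) by weighted linear
# least squares and derive the confidence band analytically from the
# parameter covariance matrix.

rng = np.random.default_rng(1)
E = np.array([79.6, 122., 245., 344., 411., 779., 964., 1408., 2615., 3451.2])
x = np.log(E / 1000.0)
eff_true = np.exp(-4.0 - 0.85 * x - 0.05 * x ** 2)   # toy efficiency curve
sigma = 0.02 * eff_true                              # 2% relative uncertainty
eff = eff_true + rng.normal(0.0, sigma)

# Design matrix for ln(eff) = p0 + p1*x + p2*x^2
A = np.vander(x, 3, increasing=True)
w = (eff / sigma) ** 2              # inverse variances of ln(eff) residuals
W = np.diag(w)
cov = np.linalg.inv(A.T @ W @ A)    # parameter covariance matrix
p = cov @ A.T @ W @ np.log(eff)     # weighted least-squares solution

# 90% confidence half-width in ln(eff): z * sqrt(diag(A cov A^T)), z ~ 1.645
band = 1.645 * np.sqrt(np.einsum('ij,jk,ik->i', A, cov, A))
fit = np.exp(A @ p)
print(float(np.max(band)))          # relative half-width of the band
```

Because the band follows analytically from the covariance matrix, no Monte Carlo resampling of the fit is needed.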

5. Generalized gradient approximation exchange energy functional with correct asymptotic behavior of the corresponding potential

SciTech Connect

Carmona-Espíndola, Javier; Gázquez, José L.; Vela, Alberto; Trickey, S. B.

2015-02-07

A new non-empirical exchange energy functional of the generalized gradient approximation (GGA) type, which gives an exchange potential with the correct asymptotic behavior, is developed and explored. In combination with the Perdew-Burke-Ernzerhof (PBE) correlation energy functional, the new CAP-PBE (CAP stands for correct asymptotic potential) exchange-correlation functional gives heats of formation, ionization potentials, electron affinities, proton affinities, binding energies of weakly interacting systems, barrier heights for hydrogen and non-hydrogen transfer reactions, bond distances, and harmonic frequencies on standard test sets that are fully competitive with those obtained from other GGA-type functionals that do not have the correct asymptotic exchange potential behavior. Distinct from them, the new functional provides important improvements in quantities dependent upon response functions, e.g., static and dynamic polarizabilities and hyperpolarizabilities. CAP combined with the Lee-Yang-Parr correlation functional gives roughly equivalent results. Consideration of the computed dynamical polarizabilities in the context of the broad spectrum of other properties considered tips the balance to the non-empirical CAP-PBE combination. Intriguingly, these improvements arise primarily from improvements in the highest occupied and lowest unoccupied molecular orbitals, and not from shifts in the associated eigenvalues. Those eigenvalues do not change dramatically with respect to eigenvalues from other GGA-type functionals that do not provide the correct asymptotic behavior of the potential. Unexpected behavior of the potential at intermediate distances from the nucleus explains this unexpected result and indicates a clear route for improvement.

6. Einstein-Maxwell Dirichlet walls, negative kinetic energies, and the adiabatic approximation for extreme black holes

Andrade, Tomás; Kelly, William R.; Marolf, Donald

2015-10-01

The gravitational Dirichlet problem—in which the induced metric is fixed on boundaries at finite distance from the bulk—is related to simple notions of UV cutoffs in gauge/gravity duality and appears in discussions relating the low-energy behavior of gravity to fluid dynamics. We study the Einstein-Maxwell version of this problem, in which the induced Maxwell potential on the wall is also fixed. For flat walls in otherwise asymptotically flat spacetimes, we identify a moduli space of Majumdar-Papapetrou-like static solutions parametrized by the location of an extreme black hole relative to the wall. Such solutions may be described as balancing gravitational repulsion from a negative-mass image source against electrostatic attraction to an oppositely signed image charge. Standard techniques for handling divergences yield a moduli space metric with an eigenvalue that becomes negative near the wall, indicating a region of negative kinetic energy and suggesting that the Hamiltonian may be unbounded below. One may also surround the black hole with an additional (roughly spherical) Dirichlet wall to impose a regulator whose physics is more clear. Negative kinetic energies remain, though new terms do appear in the moduli space metric. The regulator dependence indicates that the adiabatic approximation may be ill-defined for classical extreme black holes with Dirichlet walls.

7. High-intensity interval exercise induces 24-h energy expenditure similar to traditional endurance exercise despite reduced time commitment.

PubMed

Skelly, Lauren E; Andrews, Patricia C; Gillen, Jenna B; Martin, Brian J; Percival, Michael E; Gibala, Martin J

2014-07-01

Subjects performed high-intensity interval training (HIIT) and continuous moderate-intensity training (END) to evaluate 24-h oxygen consumption. Oxygen consumption during HIIT was lower versus END; however, total oxygen consumption over 24 h was similar. These data demonstrate that HIIT and END induce similar 24-h energy expenditure, which may explain the comparable changes in body composition reported despite lower total training volume and time commitment. PMID:24773393

8. Nuclear energy surfaces at high spin in the A ≈ 180 mass region

SciTech Connect

Chasman, R.R.; Egido, J.L.; Robledo, L.M.

1995-08-01

We are studying nuclear energy surfaces at high spin, with an emphasis on very deformed shapes using two complementary methods: (1) the Strutinsky method for making surveys of mass regions and (2) Hartree-Fock calculations using a Gogny interaction to study specific nuclei that appear to be particularly interesting from the Strutinsky method calculations. The great advantage of the Strutinsky method is that one can study the energy surfaces of many nuclides (≈300) with a single set of calculations. Although the Hartree-Fock calculations are quite time-consuming relative to the Strutinsky calculations, they determine the shape at a minimum without being limited to a few deformation modes. We completed a study of ¹⁸²Os using both approaches. In our cranked Strutinsky calculations, which incorporate a necking mode deformation in addition to quadrupole and hexadecapole deformations, we found three well-separated, deep, strongly deformed minima. The first is characterized by nuclear shapes with axis ratios of 1.5:1; the second by axis ratios of 2.2:1 and the third by axis ratios of 2.9:1. We also studied this nuclide with the density-dependent Gogny interaction at I = 60 using the Hartree-Fock method and found minima characterized by shapes with axis ratios of 1.5:1 and 2.2:1. A comparison of the shapes at these minima, generated in the two calculations, shows that the necking mode of deformation is extremely useful for generating nuclear shapes at large deformation that minimize the energy. The Hartree-Fock calculations are being extended to larger deformations in order to further explore the energy surface in the region of the 2.9:1 minimum.

9. Discrete Dipole Approximation for Low-Energy Photoelectron Emission from NaCl Nanoparticles

SciTech Connect

Berg, Matthew J.; Wilson, Kevin R.; Sorensen, Chris; Chakrabarti, Amit; Ahmed, Musahid

2011-09-22

This work presents a model for the photoemission of electrons from sodium chloride nanoparticles 50-500 nm in size, illuminated by vacuum ultraviolet light with energy ranging from 9.4 to 10.9 eV. The discrete dipole approximation is used to calculate the electromagnetic field inside the particles, from which the two-dimensional angular distribution of emitted electrons is simulated. The emission is found to favor the particle's geometrically illuminated side, and this asymmetry is compared to previous measurements performed at the Lawrence Berkeley National Laboratory. By modeling the nanoparticles as spheres, the Berkeley group is able to semi-quantitatively account for the observed asymmetry. Here however, the particles are modeled as cubes, which is closer to their actual shape, and the interaction of an emitted electron with the particle surface is also considered. The end result shows that the emission asymmetry for these low-energy electrons is more sensitive to the particle-surface interaction than to the specific particle shape, i.e., a sphere or cube.

10. Multi-term approximation to the Boltzmann transport equation for electron energy distribution functions in nitrogen

Feng, Yue

Plasma is currently a topic of intense research interest, with many significant applications owing to its composition of both positively and negatively charged particles. The energy distribution function is important in plasma science since it characterizes the ability of the plasma to affect chemical reactions, affect physical outcomes, and drive various applications. The Boltzmann Transport Equation is an important kinetic equation that provides an accurate basis for characterizing the distribution function, both in energy and space. This dissertation research proposes a multi-term approximation to solve the Boltzmann Transport Equation by treating the relaxation process using an expansion of the electron distribution function in Legendre polynomials. The elastic and 29 inelastic cross sections for electron collisions with nitrogen molecules (N₂) and singly ionized nitrogen molecules (N₂⁺) have been used in this application of the Boltzmann Transport Equation. Different numerical methods have been considered to compare the results. The numerical methods discussed in this thesis are the implicit time-independent method, the time-dependent Euler method, the time-dependent Runge-Kutta method, and finally the implicit time-dependent relaxation method by generating the 4-way grid with a matrix solver. The results show that the implicit time-dependent relaxation method is the most accurate and stable method for obtaining reliable results. The results were observed to match the published experimental data rather well.

11. Interval Training.

ERIC Educational Resources Information Center

President's Council on Physical Fitness and Sports, Washington, DC.

Regardless of the type of physical activity used, interval training is simply repeated periods of physical stress interspersed with recovery periods during which activity of a reduced intensity is performed. During the recovery periods, the individual usually keeps moving and does not completely recover before the next exercise interval (e.g.,…

12. Singlet-triplet energy gaps for diradicals from particle-particle random phase approximation.

PubMed

Yang, Yang; Peng, Degao; Davidson, Ernest R; Yang, Weitao

2015-05-21

13. Quick benefits of interval training versus continuous training on bone: a dual-energy X-ray absorptiometry comparative study.

PubMed

Boudenot, Arnaud; Maurel, Delphine B; Pallu, Stéphane; Ingrand, Isabelle; Boisseau, Nathalie; Jaffré, Christelle; Portier, Hugues

2015-12-01

To delay age-related bone loss, physical activity is recommended during growth. However, it is unknown whether interval training is more efficient than continuous training to increase bone mass both quickly and to a greater extent. The aim of this study was to compare the effects of a 10-week interval training regime with a 14-week continuous training regime on bone mineral density (BMD). Forty-four male Wistar rats (8 weeks old) were separated into four groups: control for 10 weeks (C10), control for 14 weeks (C14), moderate interval training for 10 weeks (IT) and moderate continuous training for 14 weeks (CT). Rats were exercised 1 h/day, 5 days/week. Body composition and BMD of the whole body and femur, respectively, were assessed by dual-energy X-ray absorptiometry at baseline and after training to determine raw gain and weight-normalized BMD gain. Both trained groups had lower weight and fat mass gain when compared to controls. Both trained groups gained more BMD compared to controls when normalized to body weight. Using a 30% shorter training period, the IT group showed more than 20% higher whole body and femur BMD gains compared to the CT group. Our data suggest that moderate IT was able to produce faster bone adaptations than moderate CT. PMID:26754273

14. Severity evaluation of the transverse crack in a cylindrical part using a PZT wafer based on an interval energy approach

Xiao, Han; Zheng, Jiajia; Song, Gangbing

2016-03-01

Transverse cracks in cylindrical parts can be detected by using the ultrasound based pulse-echo method, which has been widely used in industrial applications. However, it is still a challenge to identify the echoes reflected by a crack and by the bottom surfaces of a cylindrical part, due to multi-path propagation and wave mode conversion. In this paper, an interval energy approach is proposed to evaluate the severity of the transverse crack in a cylindrical part. Lead zirconate titanate patch transducers are used to generate the ultrasound pulse and to detect the echoes. The echo signals are preprocessed and divided into two zones, the normal reflection zone and the crack reflection zone. Two energy factors evaluating the severity of the crack are computed based on the interval energy. With this approach, it is not necessary to identify the echo sources, since all crack and boundary echoes are automatically taken into consideration. The experimental results indicate that the proposed approach is more suitable and sensitive for evaluating the transverse crack severity of a cylindrical part than the traditional method.
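The interval-energy bookkeeping can be sketched as follows (the signal, window boundaries, and factor definition below are hypothetical stand-ins; the paper derives the zones from the geometry of the part):

```python
import numpy as np

# The echo record is split into a "normal reflection" zone and a "crack
# reflection" zone, and the energy in each interval is summed directly,
# so individual echoes never need to be identified.

def interval_energy(signal, start, end):
    """Energy of the signal restricted to the sample interval [start, end)."""
    seg = signal[start:end]
    return float(np.sum(seg ** 2))

def crack_energy_factor(signal, crack_zone, normal_zone):
    """Ratio of crack-zone to normal-zone energy (larger -> more severe crack)."""
    ec = interval_energy(signal, *crack_zone)
    en = interval_energy(signal, *normal_zone)
    return ec / en

# Toy record: a bottom echo in the normal zone plus a weaker crack echo
t = np.arange(2000)
sig = np.exp(-((t - 1500) / 40.0) ** 2) + 0.3 * np.exp(-((t - 700) / 40.0) ** 2)
print(crack_energy_factor(sig, crack_zone=(500, 900), normal_zone=(1300, 1700)))
```

Because the factor integrates all energy inside each zone, multi-path echoes and mode-converted arrivals contribute automatically without echo-by-echo identification.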

15. Interval neural networks

SciTech Connect

Patil, R.B.

1995-05-01

Traditional neural networks like multi-layered perceptrons (MLPs) use example patterns, i.e., pairs of real-valued observation vectors (x, y), to approximate a function f(x) = y. To determine the parameters of the approximation, a special version of the gradient descent method called backpropagation is widely used. In many situations, observations of the input and output variables are not precise; instead, we usually have intervals of possible values. The imprecision could be due to the limited accuracy of the measuring instrument or could reflect genuine uncertainty in the observed variables. In such situations, input and output data consist of mixed data types: intervals and precise numbers. Function approximation in interval domains is considered in this paper. We discuss a modification of the classical backpropagation learning algorithm for interval domains. Results are presented with simple examples demonstrating a few properties of nonlinear interval mappings, such as noise resistance and finding sets of solutions to the function approximation problem.
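A minimal sketch of interval propagation through one network layer, the building block such interval networks rely on (this is generic interval arithmetic, not the paper's specific learning algorithm; weights and bounds below are made up):

```python
import numpy as np

# Propagating interval-valued inputs [lo, hi] through one MLP layer.
# For a linear map, the positive and negative parts of the weight matrix
# select which interval endpoint bounds each output; a monotone activation
# (tanh) is then applied endpoint-wise.

def interval_linear(W, b, lo, hi):
    Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
    out_lo = Wp @ lo + Wn @ hi + b
    out_hi = Wp @ hi + Wn @ lo + b
    return out_lo, out_hi

def interval_layer(W, b, lo, hi):
    out_lo, out_hi = interval_linear(W, b, lo, hi)
    return np.tanh(out_lo), np.tanh(out_hi)   # tanh is monotone increasing

W = np.array([[1.0, -2.0], [0.5, 0.3]])
b = np.zeros(2)
lo, hi = np.array([0.0, 0.1]), np.array([0.2, 0.4])
ylo, yhi = interval_layer(W, b, lo, hi)
# Any precise input inside the box [lo, hi] maps inside [ylo, yhi]
```

Precise numbers are handled as the degenerate case lo = hi, which is how mixed interval/precise training data fit into one forward pass.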

16. Excitation energies from particle-particle random phase approximation: Davidson algorithm and benchmark studies.

PubMed

Yang, Yang; Peng, Degao; Lu, Jianfeng; Yang, Weitao

2014-09-28

The particle-particle random phase approximation (pp-RPA) has been used to investigate excitation problems in our recent paper [Y. Yang, H. van Aggelen, and W. Yang, J. Chem. Phys. 139, 224105 (2013)]. It has been shown to be capable of describing double, Rydberg, and charge transfer excitations, which are challenging for conventional time-dependent density functional theory (TDDFT). However, its performance on larger molecules is unknown as a result of its expensive O(N⁶) scaling. In this article, we derive and implement a Davidson iterative algorithm for the pp-RPA to calculate the lowest few excitations for large systems. The formal scaling is reduced to O(N⁴), which is comparable with the commonly used configuration interaction singles (CIS) and TDDFT methods. With this iterative algorithm, we carried out benchmark tests on molecules that are significantly larger than the molecules in our previous paper with a reasonably large basis set. Despite some self-consistent field convergence problems with ground state calculations of (N - 2)-electron systems, we are able to accurately capture lowest few excitations for systems with converged calculations. Compared to CIS and TDDFT, there is no systematic bias for the pp-RPA with the mean signed error close to zero. The mean absolute error of pp-RPA with B3LYP or PBE references is similar to that of TDDFT, which suggests that the pp-RPA is a comparable method to TDDFT for large molecules. Moreover, excitations with relatively large non-HOMO excitation contributions are also well described in terms of excitation energies, as long as there is also a relatively large HOMO excitation contribution. These findings, in conjunction with the capability of pp-RPA for describing challenging excitations shown earlier, further demonstrate the potential of pp-RPA as a reliable and general method to describe excitations, and to be a good alternative to TDDFT methods. PMID:25273409
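For readers unfamiliar with the Davidson scheme, a generic dense-matrix sketch is given below (an illustration of the algorithm class only; the pp-RPA implementation works with matrix-vector contractions and a generalized eigenproblem rather than an explicit symmetric matrix):

```python
import numpy as np

# Davidson iteration for the lowest eigenpair of a symmetric matrix H:
# build a growing subspace V, solve the small projected problem, and expand
# V with a diagonally preconditioned correction vector until the residual
# is small.

def davidson_lowest(H, tol=1e-8, max_iter=100):
    n = H.shape[0]
    diag = np.diag(H)
    V = np.zeros((n, 1))
    V[np.argmin(diag), 0] = 1.0        # start on the smallest diagonal element
    theta, x = diag.min(), V[:, 0]
    for _ in range(max_iter):
        # Rayleigh-Ritz step in the current subspace
        S = V.T @ H @ V
        vals, vecs = np.linalg.eigh(S)
        theta, y = vals[0], vecs[:, 0]
        x = V @ y
        r = H @ x - theta * x
        if np.linalg.norm(r) < tol:
            return theta, x
        # Diagonal (Jacobi) preconditioner for the correction vector
        denom = theta - diag
        denom[np.abs(denom) < 1e-12] = 1e-12
        t = r / denom
        # Orthogonalize against the subspace and append
        t -= V @ (V.T @ t)
        t -= V @ (V.T @ t)             # second pass for numerical safety
        nt = np.linalg.norm(t)
        if nt < 1e-12:
            return theta, x
        V = np.hstack([V, (t / nt)[:, None]])
    return theta, x

# Diagonally dominant test matrix, the regime where Davidson converges fast
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 200))
H = (A + A.T) * 0.01 + np.diag(np.arange(200, dtype=float))
val, vec = davidson_lowest(H)
```

The payoff is that only matrix-vector products and a small dense diagonalization are needed per iteration, which is what brings the formal cost down for the lowest few roots.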

17. Calculation of intermediate-energy electron-impact ionization of molecular hydrogen and nitrogen using the paraxial approximation

SciTech Connect

2011-12-15

We have implemented the paraxial approximation followed by the time-dependent Hartree-Fock method with a frozen core for the single impact ionization of atoms and diatomic molecules. It reduces the original scattering problem to the solution of a five-dimensional time-dependent Schrödinger equation. Using this method, we calculated the multifold differential cross section of the impact single ionization of the helium atom, the hydrogen molecule, and the nitrogen molecule from the impact of intermediate-energy electrons. Our results for He and H₂ are quite close to the experimental data. Surprisingly, for N₂ the agreement is good for the paraxial approximation combined with the first Born approximation but worse for the pure paraxial approximation, apparently because of the insufficiency of the frozen-core approximation.

18. Vibrational coherence and energy transfer in two-dimensional spectra with the optimized mean-trajectory approximation

SciTech Connect

Alemi, Mallory; Loring, Roger F.

2015-06-07

The optimized mean-trajectory (OMT) approximation is a semiclassical method for computing vibrational response functions from action-quantized classical trajectories connected by discrete transitions that represent radiation-matter interactions. Here, we extend the OMT to include additional vibrational coherence and energy transfer processes. This generalized approximation is applied to a pair of anharmonic chromophores coupled to a bath. The resulting 2D spectra are shown to reflect coherence transfer between normal modes.

19. Vibrational coherence and energy transfer in two-dimensional spectra with the optimized mean-trajectory approximation

Alemi, Mallory; Loring, Roger F.

2015-06-01

The optimized mean-trajectory (OMT) approximation is a semiclassical method for computing vibrational response functions from action-quantized classical trajectories connected by discrete transitions that represent radiation-matter interactions. Here, we extend the OMT to include additional vibrational coherence and energy transfer processes. This generalized approximation is applied to a pair of anharmonic chromophores coupled to a bath. The resulting 2D spectra are shown to reflect coherence transfer between normal modes.

20. Vibrational coherence and energy transfer in two-dimensional spectra with the optimized mean-trajectory approximation

PubMed Central

Alemi, Mallory; Loring, Roger F.

2015-01-01

The optimized mean-trajectory (OMT) approximation is a semiclassical method for computing vibrational response functions from action-quantized classical trajectories connected by discrete transitions that represent radiation-matter interactions. Here, we extend the OMT to include additional vibrational coherence and energy transfer processes. This generalized approximation is applied to a pair of anharmonic chromophores coupled to a bath. The resulting 2D spectra are shown to reflect coherence transfer between normal modes. PMID:26049437

1. Effect of initial phase on error in electron energy obtained using paraxial approximation for a focused laser pulse in vacuum

SciTech Connect

Singh, Kunwar Pal; Arya, Rashmi; Malik, Anil K.

2015-09-14

We have investigated the effect of initial phase on the error in electron energy obtained using the paraxial approximation to study electron acceleration by a focused laser pulse in vacuum, using a three-dimensional test-particle simulation code. The error is obtained by comparing the energy of the electron for the paraxial approximation and the seventh-order correction description of the fields of a Gaussian laser. The paraxial approximation predicts wrong laser divergence and wrong electron escape time from the pulse, which leads to the prediction of higher energy. The error shows strong phase dependence for electrons lying along the axis of the laser for a linearly polarized laser pulse. The relative error may be significant for some specific values of initial phase even at moderate values of laser spot size. The error does not show initial-phase dependence for a circularly polarized laser pulse.

2. Rapid approximate calculation of water binding free energies in the whole hydration domain of (bio)macromolecules.

PubMed

Reif, Maria M; Zacharias, Martin

2016-07-01

The evaluation of water binding free energies around solute molecules is important for the thermodynamic characterization of hydration or association processes. Here, a rapid approximate method to estimate water binding free energies around (bio)macromolecules from a single molecular dynamics simulation is presented. The basic idea is that endpoint free-energy calculation methods are applied and the endpoint quantities are monitored on a three-dimensional grid around the solute. Thus, a gridded map of water binding free energies around the solute is obtained, that is, from a single short simulation, a map of favorable and unfavorable water binding sites can be constructed. Among the employed free-energy calculation methods, approaches involving endpoint information pertaining to actual thermodynamic integration calculations or endpoint information as exploited in the linear interaction energy method were examined. The accuracy of the approximate approaches was evaluated on the hydration of a cage-like molecule representing either a nonpolar, polar, or charged water binding site and on α- and β-cyclodextrin molecules. Among the tested approaches, the linear interaction energy method is considered the most viable approach. Applying the linear interaction energy method on the grid around the solute, a semi-quantitative thermodynamic characterization of hydration around the whole solute is obtained. Disadvantages are the approximate nature of the method and a limited flexibility of the solute. © 2016 Wiley Periodicals, Inc. PMID:27185199
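As a sketch of the gridded endpoint idea described above: pooled water-oxygen positions and their per-frame solute interaction energies are binned onto a 3D grid, and a linear interaction energy (LIE) estimate is applied per cell. The binning scheme and the coefficients `alpha` and `beta` below are illustrative assumptions, not the protocol or parameters of the paper.

```python
import numpy as np

def lie_grid(positions, e_vdw, e_el, box=(30.0, 30.0, 30.0), spacing=1.0,
             alpha=0.18, beta=0.5):
    """Bin per-frame water interaction energies onto a 3D grid and apply
    a linear-interaction-energy (LIE) estimate per cell.

    positions    : (N, 3) water-oxygen coordinates pooled over all frames
    e_vdw, e_el  : (N,) van der Waals / electrostatic water-solute energies
    alpha, beta  : illustrative LIE coefficients (assumed, not the paper's)
    """
    nbins = tuple(int(b / spacing) for b in box)
    # Map each water position to its grid cell index along x, y, z.
    idx = tuple((positions[:, k] / spacing).astype(int).clip(0, nbins[k] - 1)
                for k in range(3))
    counts = np.zeros(nbins)
    sum_vdw = np.zeros(nbins)
    sum_el = np.zeros(nbins)
    # Unbuffered accumulation handles repeated cell indices correctly.
    np.add.at(counts, idx, 1)
    np.add.at(sum_vdw, idx, e_vdw)
    np.add.at(sum_el, idx, e_el)
    with np.errstate(invalid="ignore", divide="ignore"):
        dg = alpha * sum_vdw / counts + beta * sum_el / counts
    return dg  # NaN where no water was observed
```

Cells never visited by water come out as NaN, marking regions where no binding estimate is available.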

3. Status of the Brueckner-Hartree-Fock approximation to the nuclear matter binding energy with the Paris potential

SciTech Connect

Schulze, H.; Cugnon, J.; Lejeune, A.; Baldo, M.; Lombardo, U.

1995-11-01

A new calculation of the binding energy of nuclear matter in the Brueckner-Hartree-Fock approximation with the Paris potential using the standard and continuous choices of single particle energies is presented, paying special attention to the numerical accuracy and higher partial waves. Comparison with other calculations is made and the accuracy of the state of the art for the Brueckner-Hartree-Fock method is assessed.

4. Casimir bag energy in the stochastic approximation to the pure QCD vacuum

SciTech Connect

Fosco, C. D.; Oxman, L. E.

2007-01-15

We study the Casimir contribution to the bag energy coming from gluon field fluctuations, within the context of the stochastic vacuum model of pure QCD. After formulating the problem in terms of the generating functional of field strength cumulants, we argue that the resulting predictions about the Casimir energy are compatible with the phenomenologically required bag energy term.

5. High-intensity interval training, solutions to the programming puzzle. Part II: anaerobic energy, neuromuscular load and practical applications.

PubMed

Buchheit, Martin; Laursen, Paul B

2013-10-01

High-intensity interval training (HIT) is a well-known, time-efficient training method for improving cardiorespiratory and metabolic function and, in turn, physical performance in athletes. HIT involves repeated short (<45 s) to long (2-4 min) bouts of rather high-intensity exercise interspersed with recovery periods (refer to the previously published first part of this review). While athletes have used 'classical' HIT formats for nearly a century (e.g. repetitions of 30 s of exercise interspersed with 30 s of rest, or 2-4-min interval repetitions ran at high but still submaximal intensities), there is today a surge of research interest focused on examining the effects of short sprints and all-out efforts, both in the field and in the laboratory. Prescription of HIT consists of the manipulation of at least nine variables (e.g. work interval intensity and duration, relief interval intensity and duration, exercise modality, number of repetitions, number of series, between-series recovery duration and intensity); any of which has a likely effect on the acute physiological response. Manipulating HIT appropriately is important, not only with respect to the expected middle- to long-term physiological and performance adaptations, but also to maximize daily and/or weekly training periodization. Cardiopulmonary responses are typically the first variables to consider when programming HIT (refer to Part I). However, anaerobic glycolytic energy contribution and neuromuscular load should also be considered to maximize the training outcome. Contrasting HIT formats that elicit similar (and maximal) cardiorespiratory responses have been associated with distinctly different anaerobic energy contributions. The high locomotor speed/power requirements of HIT (i.e. ≥95 % of the minimal velocity/power that elicits maximal oxygen uptake [v/p(·)VO(2max)] to 100 % of maximal sprinting speed or power) and the accumulation of high-training volumes at high-exercise intensity (runners can

6. Interbirth intervals

PubMed Central

Haig, David

2014-01-01

Background and objectives: Interbirth intervals (IBIs) mediate a trade-off between child number and child survival. Life history theory predicts that the evolutionarily optimal IBI differs for different individuals whose fitness is affected by how closely a mother spaces her children. The objective of the article is to clarify these conflicts and explore their implications for public health. Methodology: Simple models of inclusive fitness and kin conflict address the evolution of human birth-spacing. Results: Genes of infants generally favor longer intervals than genes of mothers, and infant genes of paternal origin generally favor longer IBIs than genes of maternal origin. Conclusions and implications: The colonization of maternal bodies by offspring cells (fetal microchimerism) raises the possibility that cells of older offspring could extend IBIs by interfering with the implantation of subsequent embryos. PMID:24480612

7. Development of generalized potential-energy surfaces using many-body expansions, neural networks, and moiety energy approximations

Malshe, M.; Narulkar, R.; Raff, L. M.; Hagan, M.; Bukkapatnam, S.; Agrawal, P. M.; Komanduri, R.

2009-05-01

A general method for the development of potential-energy hypersurfaces is presented. The method combines a many-body expansion to represent the potential-energy surface with two-layer neural networks (NN) for each M-body term in the summations. The total number of NNs required is significantly reduced by employing a moiety energy approximation. An algorithm is presented that efficiently adjusts all the coupled NN parameters to the database for the surface. Application of the method to four different systems of increasing complexity shows that the fitting accuracy of the method is good to excellent. For some cases, it exceeds that available by other methods currently in the literature. The method is illustrated by fitting large databases of ab initio energies for Si_n (n = 3,4,…,7) clusters obtained from density functional theory calculations and for vinyl bromide (C2H3Br) and all products for dissociation into six open reaction channels (12 if the reverse reactions are counted as separate open channels) that include C-H and C-Br bond scissions, three-center HBr dissociation, and three-center H2 dissociation. The vinyl bromide database comprises the ab initio energies of 71,969 configurations computed at the MP4(SDQ) level with a 6-31G(d,p) basis set for the carbon and hydrogen atoms and Huzinaga's (4333/433/4) basis set augmented with split outer s and p orbitals (43321/4321/4) and a polarization f orbital with an exponent of 0.5 for the bromine atom. It is found that an expansion truncated after the three-body terms is sufficient to fit the Si5 system with a mean absolute testing set error of 5.693×10⁻⁴ eV. Expansions truncated after the four-body terms for Si_n (n = 3,4,5) and Si_n (n = 3,4,…,7) provide fits whose mean absolute testing set errors are 0.0056 and 0.0212 eV, respectively. For vinyl bromide, a many-body expansion truncated after the four-body terms provides fitting accuracy with mean absolute testing set errors that range between 0.0782 and 0.0808 eV. These

8. Exact and approximate expressions of energy generation rates and their impact on the explosion properties of pair instability supernovae

Takahashi, Koh; Yoshida, Takashi; Umeda, Hideyuki; Sumiyoshi, Kohsuke; Yamada, Shoichi

2016-02-01

The energetics of nuclear reactions is fundamentally important for understanding the mechanism of pair instability supernovae (PISNe). Based on the hydrodynamic equations and thermodynamic relations, we derive exact expressions for energy conservation suitable to be solved in simulations. We also show that some formulae commonly used in the literature are obtained as approximations of the exact expressions. We simulate the evolution of very massive stars of ~100-320 M⊙ with zero and 1/10 Z⊙ metallicity, and calculate their further explosions as PISNe, applying each of the exact and approximate formulae. The calculations demonstrate that the explosion properties of PISNe, such as the mass range, the 56Ni yield, and the explosion energy, are significantly affected by applying the different energy generation rates. We discuss how these results affect the estimate of the PISN detection rate, which depends on the theoretical predictions of such explosion properties.

9. Interplay Between Condensation Energy, Pseudogap, and the Specific Heat of a Hubbard Model in an n-Pole Approximation

Lausmann, A. C.; Calegari, E. J.; Magalhaes, S. G.; Chaves, C. M.; Troper, A.

2015-04-01

The condensation energy and the specific heat jump of a two-dimensional Hubbard model, suitable to discuss high-Tc superconductors, are studied. In this work, the Hubbard model is investigated by the Green's function method within an n-pole approximation, which allows superconductivity with d-wave pairing to be considered. In the present scenario, the pseudogap regime emerges when the antiferromagnetic correlations become sufficiently strong to shift the region around the nodal point of the renormalized bands to lower energies. It is observed that above a given total occupation, the specific heat jump and also the condensation energy decrease, signaling the presence of the pseudogap.

10. Vibration energy harvesting from a nonlinear standing beam-mass system using a two-mode approximation

Lajimi, S. A. M.; Friswell, M. I.

2015-04-01

For a nonlinear beam-mass system used to harvest vibratory energy, the two-mode approximation of the response is computed and compared to the single-mode approximation of the response. To this end, the discretized equations of generalized coordinates are developed and studied using a computational method. By obtaining phase-portraits and time-histories of the displacement and voltage, it is shown that the strong nonlinearity of the system affects the system dynamics considerably. By comparing the results of single- and two-mode approximations, it is shown that the number of mode shapes affects the dynamics of the response. Varying the tip-mass results in different structural configurations namely linear, pre-buckled nonlinear, and post-buckled nonlinear configurations. The nonlinear dynamics of the system response are investigated for vibrations about static equilibrium points arising from the buckling of the beam. Furthermore, it is demonstrated that the harvested power is affected by the system configuration.

11. Post-mortem interval estimation of human skeletal remains by micro-computed tomography, mid-infrared microscopic imaging and energy dispersive X-ray mapping

PubMed Central

Hatzer-Grubwieser, P.; Bauer, C.; Parson, W.; Unterberger, S. H.; Kuhn, V.; Pemberger, N.; Pallua, Anton K.; Recheis, W.; Lackner, R.; Stalder, R.; Pallua, J. D.

2015-01-01

In this study different state-of-the-art visualization methods such as micro-computed tomography (micro-CT), mid-infrared (MIR) microscopic imaging and energy dispersive X-ray (EDS) mapping were evaluated to study human skeletal remains for the determination of the post-mortem interval (PMI). PMI specific features were identified and visualized by overlaying molecular imaging data and morphological tissue structures generated by radiological techniques and microscopic images gained from confocal microscopy (Infinite Focus (IFM)). In this way, a more distinct picture concerning processes during the PMI as well as a more realistic approximation of the PMI were achieved. It could be demonstrated that the gained result in combination with multivariate data analysis can be used to predict the Ca/C ratio and bone volume (BV) over total volume (TV) for PMI estimation. Statistical limitation of this study is the small sample size, and future work will be based on more specimens to develop a screening tool for PMI based on the outcome of this multidimensional approach. PMID:25878731

12. Differences among breed crosses of cattle in the conversion of food energy to calf weight during the preweaning interval.

PubMed

Jenkins, T G; Cundiff, L V; Ferrell, C L

1991-07-01

The objective of this study was to determine whether F1 cows that differ in genetic potential for weight at maturity and milk yield vary in the conversion of food energy to calf weight gain. Food intakes and weight change data were recorded by pen for cows and calves from approximately 45 d postpartum. Cows assigned to the study were 7- to 9-yr-old F1s produced by top-crossing Angus, Hereford, Brown Swiss, Chianina, Gelbvieh, Maine Anjou, and Red Poll sires to either Angus or Hereford dams. Calves were sired by Simmentals. Experimental units were pens (10 to 12 cow/calf pairs); pen was replicated within breed of sire in each of 2 yr (n = 24). Calf weight gain and energy consumed by the dams differed among the F1s, as did the ratio of calf weight gain to energy consumed by the calf and cow. Angus or Hereford (35.8), Red Poll (35.7), or Maine Anjou (35.6) F1s produced more calf weight per unit of energy consumed (g/Mcal) by the cow and calf than Chianina (33.1) or Gelbvieh (33.7) F1 females; Brown Swiss cows were intermediate (34.3). Differences in food conversion efficiency exist among breed crosses. These differences seem to be associated with breed cross differences in genetic potential for milk yield and mature weight; an exception to this trend was the Maine Anjou. PMID:1885388

13. Folding funnels and energy landscapes of larger proteins within the capillarity approximation

PubMed Central

Wolynes, Peter G.

1997-01-01

The characterization of protein-folding kinetics with increasing chain length under various thermodynamic conditions is addressed using the capillarity picture in which distinct spatial regions of the protein are imagined to be folded or trapped and separated by interfaces. The quantitative capillarity theory is based on the nucleation theory of first-order transitions and the droplet analysis of glasses and random magnets. The concepts of folding funnels and rugged energy landscapes are shown to be applicable in the large size limit just as for smaller proteins. An ideal asymptotic free-energy profile as a function of a reaction coordinate measuring progress down the funnel is shown to be quite broad. This renders traditional transition state theory generally inapplicable but allows a diffusive picture with a transition-state region to be used. The analysis unifies several scaling arguments proposed earlier. The importance of fluctuational fine structure both to the free-energy profile and to the glassy dynamics is highlighted. The fluctuation effects lead to a very broad trapping-time distribution. Considerations necessary for understanding the crossover between the mean field and capillarity pictures of the energy landscapes are discussed. A variety of mechanisms that may roughen the interfaces and may lead to a complex structure of the transition-state ensemble are proposed. PMID:9177189

14. Communication: Two-component ring-coupled-cluster computation of the correlation energy in the random-phase approximation

SciTech Connect

Krause, Katharina; Klopper, Wim

2013-11-21

Within the framework of density-functional theory, the correlation energy is computed in the random-phase approximation (RPA) using spinors obtained from a two-component relativistic Kohn–Sham calculation accounting for spin–orbit interactions. Ring-coupled-cluster equations are solved to obtain the two-component RPA correlation energy. Results are presented for the hydrides of the halogens Br, I, and At as well as of the coinage metals Cu, Ag, and Au, based on two-component relativistic exact-decoupling Kohn–Sham calculations.

16. Iterative and direct methods employing distributed approximating functionals for the reconstruction of a potential energy surface from its sampled values

Szalay, Viktor

1999-11-01

The reconstruction of a function from knowing only its values on a finite set of grid points, that is, the construction of an analytical approximation reproducing the function with good accuracy everywhere within the sampled volume, is an important problem in all branches of science. One such problem in chemical physics is the determination of an analytical representation of Born-Oppenheimer potential energy surfaces by ab initio calculations, which give the value of the potential at a finite set of grid points in configuration space. This article describes the rudiments of iterative and direct methods of potential surface reconstruction. The major new results are the derivation, numerical demonstration, and interpretation of a reconstruction formula. The reconstruction formula derived approximates the unknown function, say V, by a linear combination of functions obtained by discretizing the continuous distributed approximating functional (DAF) approximation of V over the grid of sampling. The simplest of contracted and ordinary Hermite-DAFs are shown to be sufficient for reconstruction. The linear combination coefficients can be obtained either iteratively or directly by finding the minimal-norm least-squares solution of a linear system of equations. Several numerical examples of reconstructing functions of one and two variables and of very different shapes are given. The examples demonstrate the robustness and high accuracy, as well as the caveats, of the proposed method. As to the mathematical foundation of the method, it is shown that the reconstruction formula can be interpreted as, and in fact is, a frame expansion. By recognizing the relevance of frames in determining analytical approximations to potential energy surfaces, an extremely rich and beautiful toolbox of mathematics is placed at our disposal. Thus, the simple reconstruction method derived in this paper can be refined, extended, and improved in numerous ways.
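The direct reconstruction route, solving for the minimal-norm least-squares coefficients of a linear combination of localized basis functions, can be sketched as follows. Plain Gaussians stand in for the Hermite-DAFs of the article, and `sigma` is an assumed width parameter.

```python
import numpy as np

def reconstruct(x_grid, v_grid, sigma=0.2):
    """Fit V(x) as a linear combination of Gaussians centered on the
    sample points, via the minimal-norm least-squares solution of the
    resulting linear system (Gaussians stand in for the Hermite-DAFs)."""
    # Design matrix: basis function j evaluated at grid point i.
    A = np.exp(-((x_grid[:, None] - x_grid[None, :]) ** 2) / (2 * sigma**2))
    coeffs, *_ = np.linalg.lstsq(A, v_grid, rcond=None)

    def v_approx(x):
        # Evaluate the fitted linear combination at arbitrary points x.
        B = np.exp(-((np.atleast_1d(x)[:, None] - x_grid[None, :]) ** 2)
                   / (2 * sigma**2))
        return B @ coeffs

    return v_approx
```

Because the Gaussian design matrix is typically ill-conditioned, `lstsq` with a singular-value cutoff returns the minimal-norm solution, mirroring the direct procedure mentioned in the abstract.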

17. Four-body corrected first Born approximation for single charge exchange at high impact energies

Mančev, Ivan

1995-06-01

Single electron capture is investigated by means of the four-body boundary-corrected first Born approximation (CB1-4B). The "post" form of the transition amplitude for a general heteronuclear case (Zp; e1) + (ZT; e2) → (Zp; e1, e2) + ZT is derived in the form of readily obtainable two-dimensional real integrals. We investigate the sensitivity of the total cross sections to the choice of ground-state wave function for helium-like atoms. Also, the influence of the non-captured electron on the final results is studied. As an illustration, the CB1-4B method is used to compute the total cross sections for the reactions H(1s) + H(1s) → H-(1s2) + H+, He+(1s) + H(1s) → He(1s2) + H+ and He+(1s) + He+(1s) → He(1s2) + α. The theoretical cross sections are found to be in good agreement with the available experimental data.

18. Analytical model for ion stopping power and range in the therapeutic energy interval for beams of hydrogen and heavier ions.

PubMed

Donahue, William; Newhauser, Wayne D; Ziegler, James F

2016-09-01

Many different approaches exist to calculate the stopping power and range of protons and heavy charged particles. These methods may be broadly categorized as physically complete theories (widely applicable and complex) or semi-empirical approaches (narrowly applicable and simple). However, little attention has been paid in the literature to approaches that are both widely applicable and simple. We developed simple analytical models of stopping power and range for ions of hydrogen, carbon, iron, and uranium that spanned intervals of ion energy from 351 keV u⁻¹ to 450 MeV u⁻¹ or wider. The analytical models typically reproduced the best-available evaluated stopping powers within 1% and ranges within 0.1 mm. The analytical stopping power model was 28% faster than a full theoretical approach. The calculation of range using the analytic range model was 945 times faster than a widely used numerical integration technique. The results of this study revealed that the new, simple analytical models are accurate, fast, and broadly applicable. The new models require just 6 parameters to calculate stopping power and range for a given ion and absorber. The proposed model may be useful as an alternative to traditional approaches, especially in applications that demand fast computation speed, small memory footprint, and simplicity. PMID:27530803
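The six-parameter model itself is not given in the abstract; as a minimal illustration of the same idea, the classic Bragg-Kleeman power-law rule yields analytic range and stopping power from just two parameters. The values of `alpha` and `p` below are approximate textbook values for protons in water, not the authors' fitted parameters.

```python
def bragg_kleeman_range(energy_mev, alpha=0.0022, p=1.77):
    """Continuous-slowing-down range in water (cm) from the Bragg-Kleeman
    rule R(E) = alpha * E**p (illustrative parameters, not the paper's)."""
    return alpha * energy_mev ** p


def bragg_kleeman_stopping_power(energy_mev, alpha=0.0022, p=1.77):
    """Stopping power -dE/dx (MeV/cm) implied by the same rule:
    dR/dE = alpha * p * E**(p - 1), hence S(E) = 1 / (dR/dE)."""
    return 1.0 / (alpha * p * energy_mev ** (p - 1))
```

For a 150 MeV proton this gives a range of roughly 15-16 cm in water, in line with evaluated data; being a closed-form expression, it needs no numerical integration of 1/S(E).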

19. One-parameter optimization of a nonempirical meta-generalized-gradient-approximation for the exchange-correlation energy

SciTech Connect

Perdew, John P.; Ruzsinszky, Adrienn; Tao, Jianmin; Csonka, Gabor I.; Scuseria, Gustavo E.

2007-10-15

The meta-generalized-gradient-approximation (meta-GGA) for the exchange-correlation energy, as constructed by Tao, Perdew, Staroverov, and Scuseria (TPSS) [Phys. Rev. Lett. 91, 146401 (2003)], has achieved usefully consistent accuracy for diverse systems and is the most reliable nonempirical density functional (and the most reliable nonhybrid) in common use. We present here an optimized version of this TPSS functional obtained by empirically fitting a single free parameter that controls the approach of the exchange enhancement factor to its rapidly-varying-density limit, while preserving all the exact constraints that the original TPSS functional satisfies. We find that molecular atomization energies are significantly improved with the optimized version and are even better than those obtained with the best hybrid functionals employing a fraction of exact exchange (e.g., the TPSS hybrid), while energy barrier heights are slightly improved; jellium surface energies remain accurate and almost unchanged. The one-parameter freedom of the TPSS functional may be useful even beyond the meta-GGA level, since the TPSS approximation is a natural starting point for the higher-level hyper-GGA.

20. Interfacial tension and wall energy of a Bose-Einstein condensate binary mixture: Triple-parabola approximation

Deng, Zehui; Schaeybroeck, Bert Van; Lin, Chang-You; Thu, Nguyen Van; Indekeu, Joseph O.

2016-02-01

Accurate and useful analytic approximations are developed for order parameter profiles and interfacial tensions of phase-separated binary mixtures of Bose-Einstein condensates. The pure condensates 1 and 2, each of which contains a particular species of atoms, feature healing lengths ξ1 and ξ2. The inter-atomic interactions are repulsive. In particular, the reduced inter-species repulsive interaction strength is K. A triple-parabola approximation (TPA) is proposed to represent closely the energy density featured in Gross-Pitaevskii (GP) theory. This TPA allows us to define a model which is a handy alternative to the full GP theory, while still possessing a simple analytic solution. The TPA offers a significant improvement over the recently introduced double-parabola approximation (DPA). In particular, a more accurate amplitude for the wall energy (of a single condensate) is derived and, importantly, a more correct expression for the interfacial tension (of two condensates) is obtained, which better describes its dependence on K in the strong segregation regime, while the interface profiles also undergo a qualitative improvement.

1. Lateral distribution of high energy muons in EAS of sizes N_e ≈ 10^5 and N_e ≈ 10^6

NASA Technical Reports Server (NTRS)

Bazhutov, Y. N.; Ermakov, G. G.; Fomin, G. G.; Isaev, V. I.; Jarochkina, Z. V.; Kalmykov, N. N.; Khrenov, B. A.; Khristiansen, G. B.; Kulikov, G. V.; Motova, M. V.

1985-01-01

Muon energy spectra and muon lateral distributions in EAS were investigated with an underground magnetic spectrometer working as part of the extensive air shower (EAS) array. For every registered muon the EAS data are analyzed and the following EAS parameters are obtained: size N_e, distance r from the shower axis to the muon, and age parameter s. The number I_reg of muons with energy above some threshold E associated with EAS of fixed parameters is measured. To obtain the traditional characteristics, muon flux densities as a function of the distance r and muon energy E, the muon lateral distribution and energy spectra are discussed for a hadron-nucleus interaction model and composition of primary cosmic rays.

2. Free Energy Contribution Analysis Using Response Kernel Approximation: Insights into the Acylation Reaction of a Beta-Lactamase.

PubMed

2016-09-01

A widely applicable free energy contribution analysis (FECA) method based on the quantum mechanical/molecular mechanical (QM/MM) approximation using response kernel approaches has been proposed to investigate the influences of environmental residues and/or atoms in the QM region on the free energy profile. This method can evaluate atomic contributions to the free energy along the reaction path including polarization effects on the QM region within a dramatically reduced computational time. The rate-limiting step in the deactivation of the β-lactam antibiotic cefalotin (CLS) by β-lactamase was studied using this method. The experimentally observed activation barrier was successfully reproduced by free energy perturbation calculations along the optimized reaction path that involved activation by the carboxylate moiety in CLS. It was found that the free energy profile in the QM region was slightly higher than the isolated energy and that two residues, Lys67 and Lys315, as well as water molecules deeply influenced the QM atoms associated with the bond alternation reaction in the acyl-enzyme intermediate. These facts suggested that the surrounding residues are favorable for the reactant complex and prevent the intermediate from being too stabilized to proceed to the following deacylation reaction. We have demonstrated that the free energy contribution analysis should be a useful method to investigate enzyme catalysis and to facilitate intelligent molecular design. PMID:27501066

3. Brownian motors in the low-energy approximation: Classification and properties

SciTech Connect

Rozenbaum, V. M.

2010-04-15

We classify Brownian motors based on the expansion of their velocity in terms of the reciprocal friction coefficient. The two main classes of motors (with dichotomic fluctuations in homogeneous force and periodic potential energy) are characterized by different analytical dependences of their mean velocity on the spatial and temporal asymmetry coefficients and by different adiabatic limits. The competition between the spatial and temporal asymmetries gives rise to stopping points. The transition through these points can be achieved by varying the asymmetry coefficients, temperature, and other motor parameters, which can be used, for example, for nanoparticle segregation. The proposed classification separates out a new type of motors based on synchronous fluctuations in symmetric potential and applied homogeneous force. As an example of this type of motors, we consider a near-surface motor whose two-dimensional motion (parallel and perpendicular to the substrate plane) results from fluctuations in external force inclined to the surface.

4. Rigorous and unifying physical interpretation of the exchange potential and energy in the local-density approximation

Slamet, Marlina; Sahni, Viraht

1992-02-01

In this paper we explain that the exchange potential and energy in the local-density approximation (LDA) of density-functional theory have a rigorous and unified physical interpretation founded in the work of Harbola and Sahni. Accordingly, the source charge distribution that gives rise to both the LDA exchange (path-independent) potential and energy is the Fermi hole as derived in the gradient-expansion approximation (GEA) to O(∇). Thus, the LDA exchange potential, or equivalently the functional derivative of the LDA exchange-energy functional of the density, is the work required to bring an electron from infinity to its position at r against the force field of this charge distribution. The LDA exchange energy in turn is the energy of interaction between the electronic density and this charge. However, it is the non-spherically-symmetric component of the source charge that gives rise to the potential but its spherically symmetric component that contributes to the energy. Since the underlying physics of the LDA for exchange lies in its source charge, we next determine the structure of the GEA Fermi hole to O(∇) for the nonuniform electronic system in atoms and at metallic surfaces. A study of this structure as a function of electron position shows that the errors in the LDA arise because the source charge does not in general reproduce accurately the structure of the exact Fermi hole, that it violates the quantum-mechanical requirement of positivity, and further that it oscillates, albeit with decaying amplitude, far into the classically forbidden region.

5. Detectability of auditory signals presented without defined observation intervals

NASA Technical Reports Server (NTRS)

Watson, C. S.; Nichols, T. L.

1976-01-01

Ability to detect tones in noise was measured without defined observation intervals. Latency density functions were estimated for the first response following a signal and, separately, for the first response following randomly distributed instances of background noise. Detection performance was measured by the maximum separation between the cumulative latency density functions for signal-plus-noise and for noise alone. Values of the index of detectability, estimated by this procedure, were approximately those obtained with a 2-dB weaker signal and defined observation intervals. Simulation of defined- and non-defined-interval tasks with an energy detector showed that this device performs very similarly to the human listener in both cases.

6. High-Intensity Interval Resistance Training (HIRT) influences resting energy expenditure and respiratory ratio in non-dieting individuals

PubMed Central

2012-01-01

Background The benefits of exercise are well established but one major barrier for many is time. It has been proposed that short period resistance training (RT) could play a role in weight control by increasing resting energy expenditure (REE) but the effects of different kinds of RT have not been widely reported. Methods We tested the acute effects of high-intensity interval resistance training (HIRT) vs. traditional resistance training (TT) on REE and respiratory ratio (RR) at 22 hours post-exercise. In two separate sessions, seventeen trained males carried out HIRT and TT protocols. The HIRT technique consists of: 6 repetitions, 20 seconds rest, 2/3 repetitions, 20 secs rest, 2/3 repetitions with 2′30″ rest between sets, three exercises for a total of 7 sets. TT consisted of eight exercises of 4 sets of 8–12 repetitions with one/two minutes rest with a total amount of 32 sets. We measured basal REE and RR (TT0 and HIRT0) and 22 hours after the training session (TT22 and HIRT22). Results HIRT showed a significantly greater increase (p < 0.001) in REE at 22 hours compared to TT (HIRT22 2362 ± 118 Kcal/d vs TT22 1999 ± 88 Kcal/d). RR at HIRT22 was significantly lower (0.798 ± 0.010) compared to both HIRT0 (0.827 ± 0.006) and TT22 (0.822 ± 0.008). Conclusions Our data suggest that shorter HIRT sessions may increase REE after exercise to a greater extent than TT and may reduce RR hence improving fat oxidation. The shorter exercise time commitment may help to reduce one major barrier to exercise. PMID:23176325

7. Exchange energy gradients with respect to atomic positions and cell parameters within the Hartree-Fock Gamma-point approximation.

PubMed

Weber, Valéry; Daul, Claude; Challacombe, Matt

2006-06-01

Recently, linear scaling construction of the periodic exact Hartree-Fock exchange matrix within the Gamma-point approximation has been introduced [J. Chem. Phys. 122, 124105 (2005)]. In this article, a formalism for evaluation of analytical Hartree-Fock exchange energy gradients with respect to atomic positions and cell parameters at the Gamma-point approximation is presented. While the evaluation of exchange gradients with respect to atomic positions is similar to those in the gas phase limit, the gradients with respect to cell parameters involve the accumulation of atomic gradients multiplied by appropriate factors and a modified electron repulsion integral (ERI). This latter integral arises from use of the minimum image convention in the definition of the Gamma-point Hartree-Fock approximation. We demonstrate how this new ERI can be computed with the help of a modified vertical recurrence relation in the frame of the Obara-Saika and Head-Gordon-Pople algorithm. As an illustration, the analytical gradients have been used in conjunction with the QUICCA algorithm [K. Nemeth and M. Challacombe, J. Chem. Phys. 121, 2877 (2004)] to optimize periodic systems at the Hartree-Fock level of theory. PMID:16774396

8. Exchange energy gradients with respect to atomic positions and cell parameters within the Hartree-Fock Γ-point approximation

Weber, Valéry; Daul, Claude; Challacombe, Matt

2006-06-01

Recently, linear scaling construction of the periodic exact Hartree-Fock exchange matrix within the Γ-point approximation has been introduced [J. Chem. Phys. 122, 124105 (2005)]. In this article, a formalism for evaluation of analytical Hartree-Fock exchange energy gradients with respect to atomic positions and cell parameters at the Γ-point approximation is presented. While the evaluation of exchange gradients with respect to atomic positions is similar to those in the gas phase limit, the gradients with respect to cell parameters involve the accumulation of atomic gradients multiplied by appropriate factors and a modified electron repulsion integral (ERI). This latter integral arises from use of the minimum image convention in the definition of the Γ-point Hartree-Fock approximation. We demonstrate how this new ERI can be computed with the help of a modified vertical recurrence relation in the frame of the Obara-Saika and Head-Gordon-Pople algorithm. As an illustration, the analytical gradients have been used in conjunction with the QUICCA algorithm [K. Németh and M. Challacombe, J. Chem. Phys. 121, 2877 (2004)] to optimize periodic systems at the Hartree-Fock level of theory.

9. Analytic evaluation of the electronic self-energy in the GW approximation for two electrons on a sphere

Schindlmayr, Arno

2013-02-01

The GW approximation for the electronic self-energy is an important tool for the quantitative prediction of excited states in solids, but its mathematical exploration is hampered by the fact that it must, in general, be evaluated numerically even for very simple systems. In this paper I describe a nontrivial model consisting of two electrons on the surface of a sphere, interacting with the normal long-range Coulomb potential, and show that the GW self-energy, in the absence of self-consistency, can in fact be derived completely analytically in this case. The resulting expression is subsequently used to analyze the convergence of the energy gap between the highest occupied and the lowest unoccupied quasiparticle orbital with respect to the total number of states included in the spectral summations. The asymptotic formula for the truncation error obtained in this way, whose dominant contribution is proportional to the cutoff energy to the power -3/2, may be adapted to extrapolate energy gaps in other systems.
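The cutoff extrapolation described above can be sketched as an ordinary least-squares fit in the variable x = E_c^(-3/2). This is a generic illustration with made-up numbers, not the paper's data or code:

```python
# Sketch: extrapolating a quasiparticle gap to infinite cutoff, assuming the
# truncation error is dominated by a term proportional to E_c**(-3/2).
# All numerical values are illustrative, not taken from the paper.

def extrapolate_gap(cutoffs, gaps):
    """Least-squares fit of gap = E_inf + a * E_c**(-3/2); returns (E_inf, a)."""
    xs = [ec ** -1.5 for ec in cutoffs]
    n = len(xs)
    mx = sum(xs) / n
    mg = sum(gaps) / n
    a = (sum((x - mx) * (g - mg) for x, g in zip(xs, gaps))
         / sum((x - mx) ** 2 for x in xs))
    return mg - a * mx, a

if __name__ == "__main__":
    # Synthetic data: a "converged" gap of 4.0 plus the assumed error term.
    cutoffs = [10.0, 20.0, 40.0, 80.0]
    gaps = [4.0 + 2.5 * ec ** -1.5 for ec in cutoffs]
    e_inf, a = extrapolate_gap(cutoffs, gaps)
    print(round(e_inf, 6), round(a, 6))  # recovers 4.0 and 2.5
```

Because the synthetic data follow the assumed asymptotic form exactly, the fit recovers the infinite-cutoff gap; with real spectral-summation data one would inspect the residuals before trusting the extrapolation.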

10. Calculation of Electrochemical Energy Levels in Water Using the Random Phase Approximation and a Double Hybrid Functional

Cheng, Jun; VandeVondele, Joost

2016-02-01

Understanding charge transfer at electrochemical interfaces requires consistent treatment of electronic energy levels in solids and in water at the same level of the electronic structure theory. Using density-functional-theory-based molecular dynamics and thermodynamic integration, the free energy levels of six redox couples in water are calculated at the level of the random phase approximation and a double hybrid density functional. The redox levels, together with the water band positions, are aligned against a computational standard hydrogen electrode, allowing for critical analysis of errors compared to the experiment. It is encouraging that both methods offer a good description of the electronic structures of the solutes and water, showing promise for a full treatment of electrochemical interfaces.

11. Approximate constants of motion for classically chaotic vibrational dynamics - Vague tori, semiclassical quantization, and classical intramolecular energy flow

NASA Technical Reports Server (NTRS)

Shirts, R. B.; Reinhardt, W. P.

1982-01-01

Substantial short time regularity, even in the chaotic regions of phase space, is found for what is seen as a large class of systems. This regularity manifests itself through the behavior of approximate constants of motion calculated by Pade summation of the Birkhoff-Gustavson normal form expansion; it is attributed to remnants of destroyed invariant tori in phase space. The remnant torus-like manifold structures are used to justify Einstein-Brillouin-Keller semiclassical quantization procedures for obtaining quantum energy levels, even in the absence of complete tori. They also provide a theoretical basis for the calculation of rate constants for intramolecular mode-mode energy transfer. These results are illustrated by means of a thorough analysis of the Henon-Heiles oscillator problem. Possible generality of the analysis is demonstrated by brief consideration of classical dynamics for the Barbanis Hamiltonian, Zeeman effect in hydrogen and recent results of Wolf and Hase (1980) for the H-C-C fragment.

12. Calculation of Electrochemical Energy Levels in Water Using the Random Phase Approximation and a Double Hybrid Functional.

PubMed

Cheng, Jun; VandeVondele, Joost

2016-02-26

Understanding charge transfer at electrochemical interfaces requires consistent treatment of electronic energy levels in solids and in water at the same level of the electronic structure theory. Using density-functional-theory-based molecular dynamics and thermodynamic integration, the free energy levels of six redox couples in water are calculated at the level of the random phase approximation and a double hybrid density functional. The redox levels, together with the water band positions, are aligned against a computational standard hydrogen electrode, allowing for critical analysis of errors compared to the experiment. It is encouraging that both methods offer a good description of the electronic structures of the solutes and water, showing promise for a full treatment of electrochemical interfaces. PMID:26967430

13. Approximation of properties of hyperelastic materials with use of energy-based models and biaxial tension data

Jamróz, Weronika

2016-06-01

The paper shows how energy-based models approximate the mechanical properties of hyperelastic materials. The main goal of the research was to create a method for finding the set of material constants included in the strain energy function that constitutes the heart of an energy-based model. The optimal set of material constants gives the best fit of the theoretical stress-strain relation to the experimental one, and such a fit enables better prediction of the behaviour of a chosen material. To obtain a more precise solution, the approximation was made using data obtained in a modern experiment described in detail in [1]. To save computation time, the main algorithm is based on genetic algorithms.
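As an illustration of the fitting problem the abstract describes, the sketch below adjusts the two constants of an incompressible Mooney-Rivlin strain-energy model to uniaxial stress-stretch data. The model choice, the crude random-search optimizer (a stand-in for the paper's genetic algorithm), and all numbers are assumptions for illustration only:

```python
import random

# Sketch: fitting Mooney-Rivlin constants (C1, C2) to uniaxial stress-stretch
# data by random search. The data are synthetic, generated from known constants.

def nominal_stress(lam, c1, c2):
    # Incompressible Mooney-Rivlin, uniaxial tension, nominal (engineering) stress.
    return 2.0 * (lam - lam ** -2) * (c1 + c2 / lam)

def sse(data, c1, c2):
    # Sum of squared errors between model and measured stresses.
    return sum((nominal_stress(l, c1, c2) - p) ** 2 for l, p in data)

def fit(data, iters=20000, seed=0):
    # Crude global search over [0, 1]^2; a GA would refine this iteratively.
    rng = random.Random(seed)
    best_err, best = float("inf"), (0.0, 0.0)
    for _ in range(iters):
        c1, c2 = rng.uniform(0.0, 1.0), rng.uniform(0.0, 1.0)
        err = sse(data, c1, c2)
        if err < best_err:
            best_err, best = err, (c1, c2)
    return best

if __name__ == "__main__":
    true_c1, true_c2 = 0.3, 0.1  # illustrative constants (e.g. MPa)
    data = [(l, nominal_stress(l, true_c1, true_c2))
            for l in [1.1, 1.3, 1.6, 2.0, 2.5, 3.0]]
    c1, c2 = fit(data)
    print(round(c1, 2), round(c2, 2))
```

The same objective (the SSE between theoretical and experimental stress) is what a genetic algorithm would minimize; only the search strategy differs.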

14. Molecular Excitation Energies from Time-Dependent Density Functional Theory Employing Random-Phase Approximation Hessians with Exact Exchange.

PubMed

Heßelmann, Andreas

2015-04-14

Molecular excitation energies have been calculated with time-dependent density-functional theory (TDDFT) using random-phase approximation Hessians augmented with exact exchange contributions in various orders. It has been observed that this approach yields fairly accurate local valence excitations if combined with accurate asymptotically corrected exchange-correlation potentials used in the ground-state Kohn-Sham calculations. The inclusion of long-range particle-particle with hole-hole interactions in the kernel leads to errors of 0.14 eV only for the lowest excitations of a selection of three alkene, three carbonyl, and five azabenzene molecules, thus surpassing the accuracy of a number of common TDDFT and even some wave function correlation methods. In the case of long-range charge-transfer excitations, the method typically underestimates accurate reference excitation energies by 8% on average, which is better than with standard hybrid-GGA functionals but worse compared to range-separated functional approximations. PMID:26574370

15. Interval estimates and their precision

Marek, Luboš; Vrabec, Michal

2015-06-01

A task very often met in practice is the computation of confidence interval bounds for the relative frequency under sampling without replacement. A typical situation includes pre-election estimates and similar tasks. In other words, we build the confidence interval for the parameter value M in the parent population of size N on the basis of a random sample of size n. There are many ways to build this interval. We can use a normal or binomial approximation. More accurate values can be looked up in tables. We consider one more method, based on MS Excel calculations. In our paper we compare these different methods for specific values of M and we discuss when the considered methods are suitable. The aim of the article is not the publication of new theoretical methods. This article aims to show that there is a very simple way to compute the confidence interval bounds without approximations, without tables, and without other software costs.
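A minimal sketch of the normal-approximation interval mentioned above, with the finite-population correction appropriate to sampling without replacement; the sample numbers are invented:

```python
import math

# Sketch: normal-approximation confidence interval for a proportion under
# sampling without replacement (finite-population correction included).
# z = 1.96 gives an approximate 95% interval; the inputs are illustrative.

def proportion_ci(m, n, N, z=1.96):
    """m successes in a sample of n drawn without replacement from N units."""
    p = m / n
    fpc = (N - n) / (N - 1)                      # finite-population correction
    half = z * math.sqrt(p * (1.0 - p) / n * fpc)
    return max(0.0, p - half), min(1.0, p + half)

if __name__ == "__main__":
    lo, hi = proportion_ci(m=520, n=1000, N=1_000_000)
    print(round(lo, 4), round(hi, 4))
```

This shows only the classical approximation the authors compare against; their point is that the exact bounds can also be computed directly in a spreadsheet, without this approximation.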

16. Spin-unrestricted random-phase approximation with range separation: Benchmark on atomization energies and reaction barrier heights

SciTech Connect

Mussard, Bastien; Reinhardt, Peter; Toulouse, Julien; Ángyán, János G.

2015-04-21

We consider several spin-unrestricted random-phase approximation (RPA) variants for calculating correlation energies, with and without range separation, and test them on datasets of atomization energies and reaction barrier heights. We show that range separation greatly improves the accuracy of all RPA variants for these properties. Moreover, we show that an RPA variant with exchange, hereafter referred to as RPAx-SO2, first proposed by Szabo and Ostlund [J. Chem. Phys. 67, 4351 (1977)] in a spin-restricted closed-shell formalism, and extended here to a spin-unrestricted formalism, provides on average the most accurate range-separated RPA variant for atomization energies and reaction barrier heights. Since this range-separated RPAx-SO2 method had already been shown to be among the most accurate range-separated RPA variants for weak intermolecular interactions [J. Toulouse et al., J. Chem. Phys. 135, 084119 (2011)], this work confirms range-separated RPAx-SO2 as a promising method for general chemical applications.

17. Validity of the relativistic impulse approximation for elastic proton-nucleus scattering at energies lower than 200 MeV

SciTech Connect

Li, Z. P.; Hillhouse, G. C.; Meng, J.

2008-07-15

We present the first study to examine the validity of the relativistic impulse approximation (RIA) for describing elastic proton-nucleus scattering at incident laboratory kinetic energies lower than 200 MeV. For simplicity we choose a {sup 208}Pb target, which is a spin-saturated spherical nucleus for which reliable nuclear structure models exist. Microscopic scalar and vector optical potentials are generated by folding invariant scalar and vector scattering nucleon-nucleon (NN) amplitudes, based on our recently developed relativistic meson-exchange model, with Lorentz scalar and vector densities resulting from the accurately calibrated PK1 relativistic mean field model of nuclear structure. It is seen that phenomenological Pauli blocking (PB) effects and density-dependent corrections to {sigma}N and {omega}N meson-nucleon coupling constants modify the RIA microscopic scalar and vector optical potentials so as to provide a consistent and quantitative description of all elastic scattering observables, namely, total reaction cross sections, differential cross sections, analyzing powers and spin rotation functions. In particular, the effect of PB becomes more significant at energies lower than 200 MeV, whereas phenomenological density-dependent corrections to the NN interaction also play an increasingly important role at energies lower than 100 MeV.

18. Computing the band structure and energy gap of penta-graphene by using DFT and G0W0 approximations

Einollahzadeh, H.; Dariani, R. S.; Fazeli, S. M.

2016-03-01

In this paper, we consider the optimum coordinates of penta-graphene. Penta-graphene is a new stable carbon allotrope which is stronger than graphene. Here, we compare the band gap of penta-graphene obtained with various density functional theory (DFT) methods. We plot the band structure of penta-graphene, calculated with the generalized gradient approximation functional HCTH407, around the Fermi energy. Then, a one-shot GW (G0W0) correction is applied for precise computation of the band structure. A quasi-direct band gap of penta-graphene of around 4.1-4.3 eV is obtained with the G0W0 correction. Penta-graphene is an insulator and can be expected to have broad applications in the future, especially in nanoelectronics and nanomechanics.

19. Enforcing the linear behavior of the total energy with hybrid functionals: Implications for charge transfer, interaction energies, and the random-phase approximation

Atalla, Viktor; Zhang, Igor Ying; Hofmann, Oliver T.; Ren, Xinguo; Rinke, Patrick; Scheffler, Matthias

2016-07-01

We obtain the exchange parameter of hybrid functionals by imposing the fundamental condition of a piecewise linear total energy with respect to electron number. For the Perdew-Burke-Ernzerhof (PBE) hybrid family of exchange-correlation functionals (i.e., for an approximate generalized Kohn-Sham theory) this implies that (i) the highest occupied molecular orbital corresponds to the ionization potential (I ), (ii) the energy of the lowest unoccupied molecular orbital corresponds to the electron affinity (A ), and (iii) the energies of the frontier orbitals are constant as a function of their occupation. In agreement with a previous study [N. Sai et al., Phys. Rev. Lett. 106, 226403 (2011), 10.1103/PhysRevLett.106.226403], we find that these conditions are met for high values of the exact exchange admixture α and illustrate their importance for the tetrathiafulvalene-tetracyanoquinodimethane complex for which standard density functional theory functionals predict artificial electron transfer. We further assess the performance for atomization energies and weak interaction energies. We find that atomization energies are significantly underestimated compared to PBE or PBE0, whereas the description of weak interaction energies improves significantly if a 1 /R6 van der Waals correction scheme is employed.

20. Electron-Phonon Coupling and Energy Flow in a Simple Metal beyond the Two-Temperature Approximation

Waldecker, Lutz; Bertoni, Roman; Ernstorfer, Ralph; Vorberger, Jan

2016-04-01

The electron-phonon coupling and the corresponding energy exchange are investigated experimentally and by ab initio theory in nonequilibrium states of the free-electron metal aluminium. The temporal evolution of the atomic mean-squared displacement in laser-excited thin freestanding films is monitored by femtosecond electron diffraction. The electron-phonon coupling strength is obtained for a range of electronic and lattice temperatures from density functional theory molecular dynamics simulations. The electron-phonon coupling parameter extracted from the experimental data in the framework of a two-temperature model (TTM) deviates significantly from the ab initio values. We introduce a nonthermal lattice model (NLM) for describing nonthermal phonon distributions as a sum of thermal distributions of the three phonon branches. The contributions of individual phonon branches to the electron-phonon coupling are considered independently and found to be dominated by longitudinal acoustic phonons. Using all material parameters from first-principles calculations except the phonon-phonon coupling strength, the prediction of the energy transfer from electrons to phonons by the NLM is in excellent agreement with time-resolved diffraction data. Our results suggest that the TTM is insufficient for describing the microscopic energy flow even for simple metals like aluminium and that the determination of the electron-phonon coupling constant from time-resolved experiments by means of the TTM leads to incorrect values. In contrast, the NLM describing transient phonon populations by three parameters appears to be a sufficient model for quantitatively describing electron-lattice equilibration in aluminium. We discuss the general applicability of the NLM and provide a criterion for the suitability of the two-temperature approximation for other metals.
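For reference, the two-temperature model (TTM) that the abstract argues is insufficient consists of two coupled rate equations, C_e(T_e) dT_e/dt = -g (T_e - T_l) and C_l dT_l/dt = +g (T_e - T_l). The sketch below integrates them with forward Euler; the material parameters are illustrative orders of magnitude, not the paper's fitted values for aluminium:

```python
# Sketch: the two-temperature model (TTM) integrated with forward Euler.
# C_e(T_e) = gamma * T_e (free-electron heat capacity), C_l constant,
# g = electron-phonon coupling constant. Parameters are illustrative only.

def ttm(te0, tl0, gamma=135.0, cl=2.4e6, g=3e17, dt=1e-15, steps=20000):
    """Evolve electron (te) and lattice (tl) temperatures in kelvin.

    gamma [J m^-3 K^-2], cl [J m^-3 K^-1], g [W m^-3 K^-1], dt [s].
    """
    te, tl = te0, tl0
    for _ in range(steps):
        flow = g * (te - tl)               # energy flow, electrons -> lattice
        te -= dt * flow / (gamma * te)     # C_e(T_e) dTe/dt = -g (Te - Tl)
        tl += dt * flow / cl               # C_l      dTl/dt = +g (Te - Tl)
    return te, tl

if __name__ == "__main__":
    te, tl = ttm(te0=6000.0, tl0=300.0)
    print(round(te), round(tl))  # the two temperatures have relaxed together
```

The nonthermal lattice model proposed in the paper replaces the single lattice temperature with separate thermal distributions for the three phonon branches; the TTM above is the baseline it improves upon.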

1. The use of two-stream approximations for the parameterization of solar radiative energy fluxes through vegetation

SciTech Connect

Joseph, J.H.; Iaquinta, J.; Pinty, B.

1996-10-01

Two-stream approximations have been used widely and for a long time in the field of radiative transfer through vegetation in various contexts and in the last 10 years also to model the hemispheric reflectance of vegetated surfaces in numerical models of the earth-atmosphere system. For a plane-parallel and turbid vegetation medium, the existence of rotational invariance allows the application of a conventional two-stream approximation to the phase function, based on an expansion in Legendre Polynomials. Three conditions have to be fulfilled to make this reduction possible in the case of vegetation. The scattering function of single leaves must be bi-Lambertian, the azimuthal distribution of leaf normals must be uniform, and the azimuthally averaged Leaf Area Normal Distribution (LAND) must be either uniform or planophile. The first and second assumptions have been shown to be acceptable by other researchers and, in fact, are usually assumed explicitly or implicitly when dealing with radiative transfer through canopies. The third one, on the shape of the azimuthally averaged LAND, although investigated before, is subjected to a detailed sensitivity test in this study, using a set of synthetic LANDs as well as experimental data for 17 plant canopies. It is shown that the radiative energy flux equations are relatively insensitive to the exact form of the LAND. The experimental Ross functions and hemispheric reflectances lie between those for the synthetic cases of planophile and erectophile LANDs. However, only the uniform and planophile LANDs lead to canopy hemispheric reflectances, which are markedly different from one another. The analytical two-stream solutions for either the planophile or the uniform LAND cases may be used to model the radiative fluxes through plant canopies in the solar spectral range. The choice between the two for any particular case must be made on the basis of experimental data. 30 refs., 5 figs.

2. The Use of Two-Stream Approximations for the Parameterization of Solar Radiative Energy Fluxes through Vegetation.

Joseph, Joachim H.; Iaquinta, Jean; Pinty, Bernard

1996-10-01

Two-stream approximations have been used widely and for a long time in the field of radiative transfer through vegetation in various contexts and in the last 10 years also to model the hemispheric reflectance of vegetated surfaces in numerical models of the earth-atmosphere system. For a plane-parallel and turbid vegetation medium, the existence of rotational invariance allows the application of a conventional two-stream approximation to the phase function, based on an expansion in Legendre Polynomials. Three conditions have to be fulfilled to make this reduction possible in the case of vegetation. The scattering function of single leaves must be bi-Lambertian, the azimuthal distribution of leaf normals must be uniform, and the azimuthally averaged Leaf Area Normal Distribution (LAND) must be either uniform or planophile. The first and second assumptions have been shown to be acceptable by other researchers and, in fact, are usually assumed explicitly or implicitly when dealing with radiative transfer through canopies. The third one, on the shape of the azimuthally averaged LAND, although investigated before, is subjected to a detailed sensitivity test in this study, using a set of synthetic LANDs as well as experimental data for 17 plant canopies. It is shown that the radiative energy flux equations are relatively insensitive to the exact form of the LAND. The experimental Ross functions and hemispheric reflectances lie between those for the synthetic cases of planophile and erectophile LANDs. However, only the uniform and planophile LANDs lead to canopy hemispheric reflectances, which are markedly different from one another. The analytical two-stream solutions for either the planophile or the uniform LAND cases may be used to model the radiative fluxes through plant canopies in the solar spectral range. The choice between the two for any particular case must be made on the basis of experimental data.

3. Second-Order Approximate Symmetries of the Geodesic Equations for the Reissner-Nordström Metric and Re-Scaling of Energy of a Test Particle

Hussain, Ibrar; Mahomed, Fazal M.; Qadir, Asghar

2007-12-01

Following the use of approximate symmetries for the Schwarzschild spacetime by A.H. Kara, F.M. Mahomed and A. Qadir (Nonlinear Dynam., to appear), we have investigated the exact and approximate symmetries of the system of geodesic equations for the Reissner-Nordström spacetime (RN). For this purpose we are forced to use second order approximate symmetries. It is shown that in the second-order approximation, energy must be rescaled for the RN metric. The implications of this rescaling are discussed.

4. Theory of strongly correlated electron systems. I. Intersite Coulomb interaction and the approximation of renormalized fermions in total energy calculations

Sandalov, I.; Lundin, U.; Eriksson, O.

The diagrammatic strong-coupling perturbation theory (SCPT) for correlated electron systems is developed for intersite Coulomb interaction and for a nonorthogonal basis set. The construction is based on iterations of exact closed equations for many-electron Green functions (GFs) for Hubbard operators in terms of functional derivatives with respect to external sources. The graphs, which do not contain the contributions from the fluctuations of the local population numbers of the ion states, play a special role: a one-to-one correspondence is found between the subset of such graphs for the many-electron GFs and the complete set of Feynman graphs of weak-coupling perturbation theory (WCPT) for single-electron GFs. This fact is used for formulation of the approximation of renormalized fermions (ARF), in which the many-electron quasi-particles behave analogously to normal fermions. Then, by analyzing (a) Sham's equation, which connects the self-energy and the exchange-correlation potential in density functional theory (DFT), and (b) the Galitskii and Migdal expressions for the total energy, written within WCPT and within ARF SCPT, we suggest a method to improve the description of systems with correlated electrons within the local density approximation (LDA) to DFT. The formulation, in terms of renormalized-fermion LDA (RF LDA), is obtained by introducing the spectral weights of the many-electron GFs into the definitions of the charge density, the overlap matrices, and the effective mixing and hopping matrix elements in existing electronic structure codes, whereas the weights themselves have to be found from an additional set of equations. Compared with LDA+U and self-interaction correction (SIC) methods, RF LDA has the advantage of taking into account the transfer of spectral weights, and, when formulated in terms of GFs, also allows for consideration of excitations and nonzero temperature. Going beyond the ARF SCPT, as well as RF LDA, and taking into account the

5. Forward dijets in high-energy collisions: Evolution of QCD n-point functions beyond the dipole approximation

SciTech Connect

2010-10-01

Present knowledge of QCD n-point functions of Wilson lines at high energies is rather limited. In practical applications, it is therefore customary to factorize higher n-point functions into products of two-point functions (dipoles) which satisfy the Balitsky-Kovchegov-evolution equation. We employ the Jalilian-Marian-Iancu-McLerran-Weigert-Leonidov-Kovner formalism to derive explicit evolution equations for the 4- and 6-point functions of fundamental Wilson lines and show that if the Gaussian approximation is carried out before the rapidity evolution step is taken, then many leading order N{sub c} contributions are missed. Our evolution equations could specifically be used to improve calculations of forward dijet angular correlations, recently measured by the STAR Collaboration in deuteron-gold collisions at the RHIC collider. Forward dijets in proton-proton collisions at the LHC probe QCD evolution at even smaller light-cone momentum fractions. Such correlations may provide insight into genuine differences between the Jalilian-Marian-Iancu-McLerran-Weigert-Leonidov-Kovner and Balitsky-Kovchegov approaches.

6. Polarization corrections to single-particle energies studied within the energy-density-functional and quasiparticle random-phase approximation approaches

Tarpanov, D.; Toivanen, J.; Dobaczewski, J.; Carlsson, B. G.

2014-01-01

Background: Models based on using perturbative polarization corrections and mean-field blocking approximation give conflicting results for masses of odd nuclei. Purpose: We systematically investigate the polarization and mean-field models, implemented within self-consistent approaches that use identical interactions and model spaces, to find reasons for the conflicts between them. Methods: For density-dependent interactions and with pairing correlations included, we derive and study links between the mean-field and polarization results obtained for energies of odd nuclei. We also identify and discuss differences between the polarization-correction and full particle-vibration-coupling (PVC) models. Numerical calculations are performed for the mean-field ground-state properties of deformed odd nuclei and then compared to the polarization corrections determined using the approach that conserves spherical symmetry. Results: We have identified and numerically evaluated self-interaction (SI) energies that are at the origin of different results obtained within the mean-field and polarization-correction approaches. Conclusions: Mean-field energies of odd nuclei are polluted by the SI energies, and this makes them different from those obtained using polarization-correction methods. A comparison of both approaches allows for the identification and determination of the SI terms, which then can be calculated and removed from the mean-field results, giving the self-interaction-free energies. The simplest deformed mean-field approach that does not break parity symmetry is unable to reproduce full PVC effects.

7. Bounds on the overlap of the Hartree-Fock, optimized effective potential, and density functional approximations with the exact energy eigenstates.

PubMed

Thanos, S; Theophilou, A K

2006-05-28

In this paper, we examine the limits of accuracy of the single determinant approximations (Hartree-Fock, optimized effective potential, and density functional theory) to the exact energy eigenstates of many electron systems. We show that an approximate Slater determinant of S(z)=M gives maximum accuracy for states with S=M, provided that perturbation theory for the spin up minus spin down potential is applicable. The overlap with the exact energy eigenstates with S not equal M is much smaller. Therefore, for the case that the emphasis is on wave functions, one must use symmetry preserving theories, although this is at the expense of accuracy in energy. PMID:16774321

8. Estimating the Gibbs energy of hydration from molecular dynamics trajectories obtained by integral equations of the theory of liquids in the RISM approximation

Tikhonov, D. A.; Sobolev, E. V.

2011-04-01

A method of integral equations of the theory of liquids in the reference interaction site model (RISM) approximation is used to estimate the Gibbs energy averaged over equilibrium trajectories computed by molecular mechanics. Peptide oxytocin is selected as the object of interest. The Gibbs energy is calculated using all chemical potential formulas introduced in the RISM approach for the excess chemical potential of solvation and is compared with estimates by the generalized Born model. Some formulas are shown to give the wrong sign of Gibbs energy changes when peptide passes from the gas phase into water environment; the other formulas give overestimated Gibbs energy changes with the right sign. Note that allowance for the repulsive correction in the approximate analytical expressions for the Gibbs energy derived by thermodynamic perturbation theory is not a remedy.

9. Discussion on the energy content of the galactic dark matter Bose-Einstein condensate halo in the Thomas-Fermi approximation

SciTech Connect

De Souza, J.C.C.; Pires, M.O.C. E-mail: marcelo.pires@ufabc.edu.br

2014-03-01

We show that the galactic dark matter halo, considered to be composed of an axionlike-particle Bose-Einstein condensate [6] trapped by a self-gravitating potential [5], may be stable in the Thomas-Fermi approximation, provided appropriate choices for the dark matter particle mass and scattering length are made. The demonstration is performed by means of the calculation of the potential, kinetic, and self-interaction energy terms of a galactic halo described by a Boehmer-Harko density profile. We discuss the validity of the Thomas-Fermi approximation for the halo system, and show that the kinetic energy contribution is indeed negligible.
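The Boehmer-Harko Thomas-Fermi profile referred to above has, to my recollection, the closed form ρ(r) = ρ_c sin(πr/R)/(πr/R) inside the halo radius R, which integrates to a total mass M = 4ρ_cR³/π. The sketch below checks that integral numerically; the normalization values are arbitrary and the profile should be verified against the cited references:

```python
import math

# Sketch: Boehmer-Harko Thomas-Fermi density profile for a BEC halo,
# rho(r) = rho_c * sin(pi r / R) / (pi r / R), with a numerical check of the
# analytic enclosed mass M = 4 rho_c R^3 / pi. rho_c and R are arbitrary here.

def density(r, rho_c, R):
    x = math.pi * r / R
    return rho_c if x == 0.0 else rho_c * math.sin(x) / x

def halo_mass(rho_c, R, n=100_000):
    # Trapezoidal integration of 4 pi r^2 rho(r) from r = 0 to r = R.
    h = R / n
    total = 0.0
    for i in range(n + 1):
        r = i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w * 4.0 * math.pi * r * r * density(r, rho_c, R)
    return total * h

if __name__ == "__main__":
    rho_c, R = 1.0, 1.0
    numeric = halo_mass(rho_c, R)
    analytic = 4.0 * rho_c * R ** 3 / math.pi  # ≈ 1.2732
    print(round(numeric, 6), round(analytic, 6))
```

The paper's potential, kinetic, and self-interaction energy terms are radial integrals of the same kind over this profile.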

10. Shortening the retention interval of 24-hour dietary recalls increases fourth-grade children’s accuracy for reporting energy and macronutrient intake at school meals

PubMed Central

Guinn, Caroline H.; Royer, Julie A.; Hardin, James W.; Mackelprang, Alyssa J.; Smith, Albert F.

2010-01-01

Background Accurate information about children’s intake is crucial for national nutrition policy and for research and clinical activities. To analyze accuracy for reporting energy and nutrients, most validation studies utilize the conventional approach which was not designed to capture errors of reported foods and amounts. The reporting-error-sensitive approach captures errors of reported foods and amounts. Objective To extend results to energy and macronutrients for a validation study concerning retention interval (elapsed time between to-be-reported meals and the interview) and accuracy for reporting school-meal intake, the conventional and reporting-error-sensitive approaches were compared. Design and participants/setting Fourth-grade children (n=374) were observed eating two school meals, and interviewed to obtain a 24-hour recall using one of six interview conditions from crossing two target periods (prior-24-hours; previous-day) with three interview times (morning; afternoon; evening). Data were collected in one district during three school years (2004–2005; 2005–2006; 2006–2007). Main outcome measures Report rates (reported/observed), correspondence rates (correctly reported/observed), and inflation ratios (intruded/observed) were calculated for energy and macronutrients. Statistical analyses performed For each outcome measure, mixed-model analysis of variance was conducted with target period, interview time, their interaction, and sex in the model; results were adjusted for school year and interviewer. Results Conventional approach — Report rates for energy and macronutrients did not differ by target period, interview time, their interaction, or sex. Reporting-error-sensitive approach — Correspondence rates for energy and macronutrients differed by target period (four P-values<0.0001) and the target-period by interview-time interaction (four P-values<0.0001); inflation ratios for energy and macronutrients differed by target period (four P
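The three outcome measures defined above (report rate = reported/observed, correspondence rate = correctly reported/observed, inflation ratio = intruded/observed) are simple ratios; a minimal sketch with hypothetical intake values (not data from the study) might look like:

```python
def dietary_recall_outcomes(observed, reported, correct, intruded):
    """Outcome measures from the reporting-error-sensitive approach.

    observed: amount (e.g., kcal) observed eaten at school meals
    reported: total amount reported in the 24-hour recall
    correct:  portion of the report matching observed foods/amounts
    intruded: amount attributed to foods not actually eaten
    """
    return {
        "report_rate": reported / observed,
        "correspondence_rate": correct / observed,
        "inflation_ratio": intruded / observed,
    }

# Hypothetical energy values (kcal) for one child:
m = dietary_recall_outcomes(observed=650, reported=600, correct=450, intruded=150)
```

Note how a report rate near 1 can mask compensating errors: here the child reports about 92% of observed energy, yet only about 69% of it corresponds to foods actually eaten, which is exactly the distinction between the conventional and reporting-error-sensitive approaches.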

11. Dual quantum electrodynamics: Dyon-dyon and charge-monopole scattering in a high-energy approximation

SciTech Connect

Gamberg, Leonard; Milton, Kimball A.

2000-04-01

We develop the quantum field theory of electron-point magnetic monopole interactions and, more generally, dyon-dyon interactions, based on the original string-dependent "nonlocal" action of Dirac and Schwinger. We demonstrate that a viable nonperturbative quantum field theoretic formulation can be constructed that results in a string independent cross section for monopole-electron and dyon-dyon scattering. Such calculations can be done only by using nonperturbative approximations such as the eikonal approximation and not by some mutilation of lowest-order perturbation theory. (c) 2000 The American Physical Society.

12. Dual quantum electrodynamics: Dyon-dyon and charge-monopole scattering in a high-energy approximation

Gamberg, Leonard; Milton, Kimball A.

2000-04-01

We develop the quantum field theory of electron-point magnetic monopole interactions and, more generally, dyon-dyon interactions, based on the original string-dependent "nonlocal" action of Dirac and Schwinger. We demonstrate that a viable nonperturbative quantum field theoretic formulation can be constructed that results in a string independent cross section for monopole-electron and dyon-dyon scattering. Such calculations can be done only by using nonperturbative approximations such as the eikonal approximation and not by some mutilation of lowest-order perturbation theory.

13. ¹⁸⁷Os + n resonance parameters in the interval 27-500 eV neutron energies

SciTech Connect

Winters, R.R.; Carlton, R.F.; Harvey, J.A.; Hill, N.W.

1982-01-01

The neutron total cross section for ¹⁸⁷Os, in the energy range 27 eV to 500 eV, has been measured at the ORELA facility by the neutron time-of-flight technique, utilizing a 2.0 g osmium sample (n = 0.008401 Os nuclei/barn) enriched to 70.38% ¹⁸⁷Os. Measurements were performed at an 80 m flight station with an energy resolution, ΔE/E, of 0.1% using a ⁶Li glass scintillator. Resolved resonances have been analyzed by a Reich-Moore multilevel code (SAMMY) to obtain parameters for 85 resonances up to 500 eV. Preliminary determinations of the level spacing (5 eV) and s-wave strength function (3.9 × 10⁻⁴) for ¹⁸⁷Os are in agreement with recent analyses of the osmium isotopes, made in connection with the use of the Re/Os chronometer for estimating the duration of stellar nucleosynthesis.

14. Calculation of the Energy-Band Structure of the Kronig-Penney Model Using the Nearly-Free and Tightly-Bound-Electron Approximations

ERIC Educational Resources Information Center

Wetsel, Grover C., Jr.

1978-01-01

Calculates the energy-band structure of noninteracting electrons in a one-dimensional crystal using exact and approximate methods for a rectangular-well atomic potential. A comparison of the two solutions as a function of potential-well depth and ratio of lattice spacing to well width is presented. (Author/GA)

15. Photofission Cross Sections for ²³⁷Np in the Energy Interval from 5.27 to 10.83 MeV

SciTech Connect

Geraldo, L.P.; Semmler, R.; Goncalez, O. L.; Arruda-Neto, J.D.T.; Garcia, F.; Rodriguez, O.

2000-11-15

Photofission cross sections for ²³⁷Np have been measured as a function of energy, in the interval from 5.27 to 10.83 MeV. The gamma-ray spectra were those produced by thermal neutron capture, in 30 different target materials, at a tangential beam hole of the Instituto de Pesquisas Energeticas e Nucleares IEA-R1 2-MW research reactor. The set of experimental data has been unfolded employing least-squares methods and the covariance matrix methodology. The determined photofission cross sections for ²³⁷Np, together with the complete correlation matrix for the involved errors, are presented and are compared with previous measurements reported in the literature. A statistical calculation for the ²³⁷Np photofission cross sections was performed, and the results are compared with the experimental data.

16. Neutrinos of energy ~10¹⁶ eV from gamma-ray bursts in pulsar wind bubbles.

PubMed

Guetta, Dafne; Granot, Jonathan

2003-05-23

The supranova model for gamma-ray bursts (GRBs) is becoming increasingly popular. In this scenario the GRB occurs weeks to years after a supernova explosion, and is located inside a pulsar wind bubble (PWB). Protons accelerated in the internal shocks that emit the GRB may interact with the external PWB photons, producing pions which decay into ~10¹⁶ eV neutrinos. A km² neutrino detector would observe several events per year correlated with the GRBs. PMID:12785881

17. Calculation of delayed-neutron energy spectra in a quasiparticle random-phase approximation-Hauser-Feshbach model

SciTech Connect

Kawano, T.; Moeller, P.; Wilson, W. B.

2008-11-15

Theoretical β-delayed-neutron spectra are calculated based on the Quasiparticle Random-Phase Approximation (QRPA) and the Hauser-Feshbach statistical model. Neutron emissions from an excited daughter nucleus after β decay to the granddaughter residual are more accurately calculated than in previous evaluations, including all the microscopic nuclear structure information, such as a Gamow-Teller strength distribution and discrete states in the granddaughter. The calculated delayed-neutron spectra agree reasonably well with those evaluations in the ENDF decay library, which are based on experimental data. The model was adopted to generate the delayed-neutron spectra for all 271 precursors.

18. On the errors of local density (LDA) and generalized gradient (GGA) approximations to the Kohn-Sham potential and orbital energies.

PubMed

Gritsenko, O V; Mentel, Ł M; Baerends, E J

2016-05-28

In spite of the high quality of exchange-correlation energies E_xc obtained with the generalized gradient approximations (GGAs) of density functional theory, their xc potentials v_xc are strongly deficient, yielding upshifts of ca. 5 eV in the orbital energy spectrum (on the order of 50% of high-lying valence orbital energies). The GGAs share this deficiency with the local density approximation (LDA). We argue that this error is not caused by the incorrect long-range asymptotics of v_xc or by self-interaction error. It arises from incorrect density dependencies of LDA and GGA exchange functionals leading to incorrect (too repulsive) functional derivatives (i.e., response parts of the potentials). The v_xc potential is partitioned into the potential of the xc hole v_xc^hole (twice the xc energy density ε_xc), which determines E_xc, and the response potential v_resp, which does not contribute to E_xc explicitly. The substantial upshift of LDA/GGA orbital energies is due to a too repulsive LDA exchange response potential v_x,resp^LDA in the bulk region. Retaining the LDA exchange hole potential plus the B88 gradient correction to it, but replacing the response parts of these potentials by the model orbital-dependent response potential v_x,resp^GLLB of Gritsenko et al. [Phys. Rev. A 51, 1944 (1995)], which has the proper step-wise form, improves the orbital energies by more than an order of magnitude. Examples are given for the prototype molecules: dihydrogen, dinitrogen, carbon monoxide, ethylene, formaldehyde, and formic acid. PMID:27250286

19. Ensemble v-representable ab initio density-functional calculation of energy and spin in atoms: A test of exchange-correlation approximations

SciTech Connect

Kraisler, Eli; Makov, Guy; Kelson, Itzhak

2010-10-15

The total energies and the spin states for atoms and their first ions with Z=1-86 are calculated within the local spin-density approximation (LSDA) and the generalized-gradient approximation (GGA) to the exchange-correlation (xc) energy in density-functional theory. Atoms and ions for which the ground-state density is not pure-state v-representable are treated as ensemble v-representable with fractional occupations of the Kohn-Sham system. A recently developed algorithm which searches over ensemble v-representable densities [E. Kraisler et al., Phys. Rev. A 80, 032115 (2009)] is employed in calculations. It is found that for many atoms, the ionization energies obtained with the GGA are only modestly improved with respect to experimental data, as compared to the LSDA. However, even in those groups of atoms where the improvement is systematic, there remains a non-negligible difference with respect to the experiment. The ab initio electronic configuration in the Kohn-Sham reference system does not always equal the configuration obtained from the spectroscopic term within the independent-electron approximation. It was shown that use of the latter configuration can prevent the energy-minimization process from converging to the global minimum, e.g., in lanthanides. The spin values calculated ab initio fit the experiment for most atoms and are almost unaffected by the choice of the xc functional. Among the systems with incorrectly obtained spin, there exist some cases (e.g., V, Pt) for which the result is found to be stable with respect to small variations in the xc approximation. These findings suggest a necessity for a significant modification of the exchange-correlation functional, probably of a nonlocal nature, to accurately describe such systems.

20. Comparative assessment of density functional methods for evaluating essential parameters to simulate SERS spectra within the excited state energy gradient approximation

2016-05-01

The prospect of challenges in reproducing and interpreting resonance Raman properties of molecules interacting with metal clusters has prompted the present research initiative. Resonance Raman spectra based on the time-dependent gradient approximation are examined in the framework of density functional theory using different methods for representing the exchange-correlation functional. In this work the performance of different XC functionals in the prediction of ground state properties, excited state energies, and gradients is compared and discussed. Resonance Raman properties based on the time-dependent gradient approximation for the strongly low-lying charge transfer states are calculated and compared for different methods. We draw the following conclusions: (1) for calculating the binding energy and ground state geometry, dispersion-corrected functionals give the best performance in comparison to ab initio calculations, (2) GGA and meta-GGA functionals give good accuracy in calculating vibrational frequencies, (3) excited state energies determined by hybrid and range-separated hybrid functionals are in good agreement with EOM-CCSD calculations, and (4) in calculating resonance Raman properties GGA functionals give reasonably good performance in comparison to experiment; however, calculating the excited state gradient by using the hybrid functional on the Hessian of the GGA improves the results of the hybrid functional significantly. Finally, we conclude that the agreement of charge-transfer surface-enhanced resonance Raman spectra with experiment is improved significantly by using the excited state gradient approximation.

1. Comparative assessment of density functional methods for evaluating essential parameters to simulate SERS spectra within the excited state energy gradient approximation.

PubMed

2016-05-21

The prospect of challenges in reproducing and interpreting resonance Raman properties of molecules interacting with metal clusters has prompted the present research initiative. Resonance Raman spectra based on the time-dependent gradient approximation are examined in the framework of density functional theory using different methods for representing the exchange-correlation functional. In this work the performance of different XC functionals in the prediction of ground state properties, excited state energies, and gradients is compared and discussed. Resonance Raman properties based on the time-dependent gradient approximation for the strongly low-lying charge transfer states are calculated and compared for different methods. We draw the following conclusions: (1) for calculating the binding energy and ground state geometry, dispersion-corrected functionals give the best performance in comparison to ab initio calculations, (2) GGA and meta-GGA functionals give good accuracy in calculating vibrational frequencies, (3) excited state energies determined by hybrid and range-separated hybrid functionals are in good agreement with EOM-CCSD calculations, and (4) in calculating resonance Raman properties GGA functionals give reasonably good performance in comparison to experiment; however, calculating the excited state gradient by using the hybrid functional on the Hessian of the GGA improves the results of the hybrid functional significantly. Finally, we conclude that the agreement of charge-transfer surface-enhanced resonance Raman spectra with experiment is improved significantly by using the excited state gradient approximation. PMID:27208944

2. Convergent sum of gradient expansion of the kinetic-energy density functional up to the sixth order term using Padé approximant

Sergeev, A.; Alharbi, F. H.; Jovanovic, R.; Kais, S.

2016-04-01

The gradient expansion of the kinetic energy density functional, when applied to atoms or finite systems, usually grossly overestimates the energy in the fourth order and generally diverges in the sixth order. We avoid the divergence of the integral by replacing the asymptotic series including the sixth order term in the integrand by a rational function. Padé approximants show moderate improvements in accuracy in comparison with partial sums of the series. The results are discussed for atoms and Hooke’s law model for two-electron atoms.
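The idea of replacing a divergent partial sum by a rational function can be illustrated with the standard textbook Padé construction (a generic sketch, not the authors' code): the denominator coefficients come from a small linear system, and the numerator follows by convolution with the Taylor coefficients.

```python
import numpy as np

def pade(c, m, n):
    """[m/n] Padé approximant from Taylor coefficients c[0..m+n].
    Returns numerator a and denominator b (ascending powers, b[0] = 1)."""
    c = np.asarray(c, dtype=float)
    # Denominator b_1..b_n solves: sum_j b_j c_{m+k-j} = -c_{m+k}, k = 1..n
    A = np.array([[c[m + k - j] if m + k - j >= 0 else 0.0
                   for j in range(1, n + 1)] for k in range(1, n + 1)])
    b = np.concatenate(([1.0], np.linalg.solve(A, -c[m + 1:m + n + 1])))
    # Numerator a_i = sum_j b_j c_{i-j} (Cauchy product through order m)
    a = np.array([sum(b[j] * c[i - j] for j in range(min(i, n) + 1))
                  for i in range(m + 1)])
    return a, b

# Example: [2/2] Padé of exp(x) from its first five Taylor coefficients.
coeffs = [1, 1, 1/2, 1/6, 1/24]
a, b = pade(coeffs, 2, 2)
x = 1.0
approx = np.polyval(a[::-1], x) / np.polyval(b[::-1], x)  # 19/7 ≈ 2.7143
```

At x = 1 the [2/2] approximant gives 19/7 ≈ 2.7143 versus e ≈ 2.7183, while the order-2 partial sum gives only 2.5, which mirrors the abstract's point that a rational function can tame an inaccurate or divergent truncated series.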

3. Programming with Intervals

Matsakis, Nicholas D.; Gross, Thomas R.

Intervals are a new, higher-level primitive for parallel programming with which programmers directly construct the program schedule. Programs using intervals can be statically analyzed to ensure that they do not deadlock or contain data races. In this paper, we demonstrate the flexibility of intervals by showing how to use them to emulate common parallel control-flow constructs like barriers and signals, as well as higher-level patterns such as bounded-buffer producer-consumer. We have implemented intervals as a publicly available library for Java and Scala.

4. Few-particle generation channels in inelastic hadron-nuclear interactions at energy ≈ 400 GeV

NASA Technical Reports Server (NTRS)

Tsomaya, P. V.

1985-01-01

The behavior of the few-particle generation channels in interactions of hadrons with nuclei of CH2, Al, Cu, and Pb at a mean energy of 400 GeV was investigated. The values of the coherent production cross sections β_coh at the investigated nuclei are given. The A-dependence of coherent and noncoherent events is investigated. The results are compared with simulations based on the additive quark model (AQM).

5. Analytic energy gradients for the coupled-cluster singles and doubles method with the density-fitting approximation.

PubMed

Bozkaya, Uğur; Sherrill, C David

2016-05-01

An efficient implementation is presented for analytic gradients of the coupled-cluster singles and doubles (CCSD) method with the density-fitting approximation, denoted DF-CCSD. Frozen core terms are also included. When applied to a set of alkanes, the DF-CCSD analytic gradients are significantly accelerated compared to conventional CCSD for larger molecules. The efficiency of our DF-CCSD algorithm arises from the acceleration of several different terms, which are designated as the "gradient terms": computation of particle density matrices (PDMs), generalized Fock-matrix (GFM), solution of the Z-vector equation, formation of the relaxed PDMs and GFM, back-transformation of PDMs and GFM to the atomic orbital (AO) basis, and evaluation of gradients in the AO basis. For the largest member of the alkane set (C10H22), the computational times for the gradient terms (with the cc-pVTZ basis set) are 2582.6 (CCSD) and 310.7 (DF-CCSD) min, respectively, a speedup of more than 8-fold. For gradient-related terms, the DF approach avoids the use of four-index electron repulsion integrals. Based on our previous study [U. Bozkaya, J. Chem. Phys. 141, 124108 (2014)], our formalism completely avoids construction or storage of the 4-index two-particle density matrix (TPDM), using instead 2- and 3-index TPDMs. The DF approach introduces negligible errors for equilibrium bond lengths and harmonic vibrational frequencies. PMID:27155621

6. Analytic energy gradients for the coupled-cluster singles and doubles method with the density-fitting approximation

Bozkaya, Uǧur; Sherrill, C. David

2016-05-01

An efficient implementation is presented for analytic gradients of the coupled-cluster singles and doubles (CCSD) method with the density-fitting approximation, denoted DF-CCSD. Frozen core terms are also included. When applied to a set of alkanes, the DF-CCSD analytic gradients are significantly accelerated compared to conventional CCSD for larger molecules. The efficiency of our DF-CCSD algorithm arises from the acceleration of several different terms, which are designated as the "gradient terms": computation of particle density matrices (PDMs), generalized Fock-matrix (GFM), solution of the Z-vector equation, formation of the relaxed PDMs and GFM, back-transformation of PDMs and GFM to the atomic orbital (AO) basis, and evaluation of gradients in the AO basis. For the largest member of the alkane set (C10H22), the computational times for the gradient terms (with the cc-pVTZ basis set) are 2582.6 (CCSD) and 310.7 (DF-CCSD) min, respectively, a speedup of more than 8-fold. For gradient-related terms, the DF approach avoids the use of four-index electron repulsion integrals. Based on our previous study [U. Bozkaya, J. Chem. Phys. 141, 124108 (2014)], our formalism completely avoids construction or storage of the 4-index two-particle density matrix (TPDM), using instead 2- and 3-index TPDMs. The DF approach introduces negligible errors for equilibrium bond lengths and harmonic vibrational frequencies.

7. Piecewise linear approximation for hereditary control problems

NASA Technical Reports Server (NTRS)

Propst, Georg

1990-01-01

This paper presents finite-dimensional approximations for linear retarded functional differential equations by use of discontinuous piecewise linear functions. The approximation scheme is applied to optimal control problems, when a quadratic cost integral must be minimized subject to the controlled retarded system. It is shown that the approximate optimal feedback operators converge to the true ones both in the case where the cost integral ranges over a finite time interval, as well as in the case where it ranges over an infinite time interval. The arguments in the last case rely on the fact that the piecewise linear approximations to stable systems are stable in a uniform sense.

8. Efficient implementation of the analytic second derivatives of Hartree-Fock and hybrid DFT energies: a detailed analysis of different approximations

Bykov, Dmytro; Petrenko, Taras; Izsák, Róbert; Kossmann, Simone; Becker, Ute; Valeev, Edward; Neese, Frank

2015-07-01

In this paper, various implementations of the analytic Hartree-Fock and hybrid density functional energy second derivatives are studied. An approximation-free four-centre implementation is presented, and its accuracy is rigorously analysed in terms of self-consistent field (SCF), coupled-perturbed SCF (CP-SCF) convergence and prescreening criteria. The CP-SCF residual norm convergence threshold turns out to be the most important of these. Final choices of convergence thresholds are made such that an accuracy of the vibrational frequencies of better than 5 cm⁻¹ compared to the numerical noise-free results is obtained, even for the highly sensitive low frequencies (<100-200 cm⁻¹). The effects of the choice of numerical grid for density functional exchange-correlation integrations are studied and various weight derivative schemes are analysed in detail. In the second step of the work, approximations are introduced in order to speed up the computation without compromising its accuracy. To this end, the accuracy and efficiency of the resolution of identity approximation for the Coulomb terms and the semi-numerical chain of spheres approximation to the exchange terms are carefully analysed. It is shown that the largest performance improvements are realised if either Hartree-Fock exchange is absent (pure density functionals) or, otherwise, if the exchange terms in the CP-SCF step of the calculation are approximated by the COSX method in conjunction with a small integration grid. Default values for all the involved truncation parameters are suggested. For vancomycine (176 atoms and 3593 basis functions), the RIJCOSX Hessian calculation with the B3LYP functional and the def2-TZVP basis set takes ∼3 days using 16 Intel® Xeon® 2.60 GHz processors, with the COSX algorithm achieving a net parallelisation scaling of 11.9; this is at least ∼20 times faster than the calculation without the RIJCOSX approximation.

9. The Vertical-current Approximation Nonlinear Force-free Field Code—Description, Performance Tests, and Measurements of Magnetic Energies Dissipated in Solar Flares

Aschwanden, Markus J.

2016-06-01

In this work we provide an updated description of the Vertical-Current Approximation Nonlinear Force-Free Field (VCA-NLFFF) code, which is designed to measure the evolution of the potential, non-potential, and free energies, and the dissipated magnetic energies during solar flares. This code provides a complementary and alternative method to existing traditional NLFFF codes. The chief advantages of the VCA-NLFFF code over traditional NLFFF codes are the circumvention of the unrealistic assumption of a force-free photosphere in the magnetic field extrapolation method, the capability to minimize the misalignment angles between observed coronal loops (or chromospheric fibril structures) and theoretical model field lines, as well as computational speed. In performance tests of the VCA-NLFFF code, by comparing with the NLFFF code of Wiegelmann, we find agreement in the potential, non-potential, and free energy within a factor of ≲ 1.3, but the Wiegelmann code yields on average a factor of 2 lower flare energies. The VCA-NLFFF code is found to detect decreases in flare energies in most X-, M-, and C-class flares. The successful detection of energy decreases during a variety of flares with the VCA-NLFFF code indicates that current-driven twisting and untwisting of the magnetic field is an adequate model to quantify the storage of magnetic energies in active regions and their dissipation during flares. The VCA-NLFFF code is also publicly available in Solar SoftWare.

10. Analysis of regression confidence intervals and Bayesian credible intervals for uncertainty quantification

Lu, Dan; Ye, Ming; Hill, Mary C.

2012-09-01

Confidence intervals based on classical regression theories augmented to include prior information and credible intervals based on Bayesian theories are conceptually different ways to quantify parametric and predictive uncertainties. Because both confidence and credible intervals are used in environmental modeling, we seek to understand their differences and similarities. This is of interest in part because calculating confidence intervals typically requires tens to thousands of model runs, while Bayesian credible intervals typically require tens of thousands to millions of model runs. Given multi-Gaussian distributed observation errors, our theoretical analysis shows that, for linear or linearized-nonlinear models, confidence and credible intervals are always numerically identical when consistent prior information is used. For nonlinear models, nonlinear confidence and credible intervals can be numerically identical if parameter confidence regions defined using the approximate likelihood method and parameter credible regions estimated using Markov chain Monte Carlo realizations are numerically identical and predictions are a smooth, monotonic function of the parameters. Both occur if intrinsic model nonlinearity is small. While the conditions of Gaussian errors and small intrinsic model nonlinearity are violated by many environmental models, heuristic tests using analytical and numerical models suggest that linear and nonlinear confidence intervals can be useful approximations of uncertainty even under significantly nonideal conditions. In the context of epistemic model error for a complex synthetic nonlinear groundwater problem, the linear and nonlinear confidence and credible intervals for individual models performed similarly enough to indicate that the computationally frugal confidence intervals can be useful in many circumstances. Experiences with these groundwater models are expected to be broadly applicable to many environmental models. We suggest that for
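The claimed equivalence for linear models with Gaussian errors can be checked in closed form. The sketch below is illustrative (a one-parameter no-intercept regression, with a very diffuse Gaussian prior standing in for "consistent prior information"): the classical 95% confidence interval and the Bayesian 95% credible interval coincide to numerical precision.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(1.0, 10.0, 50)
sigma = 1.0                         # known observation-error std dev
y = 2.0 * x + rng.normal(0.0, sigma, x.size)

# Classical regression for the no-intercept model y = beta*x + e
sxx = x @ x
beta_hat = (x @ y) / sxx
se = sigma / np.sqrt(sxx)
conf = (beta_hat - 1.96 * se, beta_hat + 1.96 * se)    # 95% confidence interval

# Bayesian analysis with a very diffuse Gaussian prior beta ~ N(0, tau^2);
# the posterior is Gaussian, so the credible interval is available in closed form.
tau = 1.0e6
post_var = 1.0 / (sxx / sigma**2 + 1.0 / tau**2)
post_mean = post_var * (x @ y) / sigma**2
cred = (post_mean - 1.96 * np.sqrt(post_var),
        post_mean + 1.96 * np.sqrt(post_var))          # 95% credible interval
```

As the prior variance grows, the posterior collapses onto the sampling distribution of the least-squares estimator, which is the linear-Gaussian special case of the numerical identity discussed in the abstract; the interesting (and computationally expensive) comparisons arise only for nonlinear models.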

11. On the errors of local density (LDA) and generalized gradient (GGA) approximations to the Kohn-Sham potential and orbital energies

Gritsenko, O. V.; Mentel, Ł. M.; Baerends, E. J.

2016-05-01

In spite of the high quality of exchange-correlation energies E_xc obtained with the generalized gradient approximations (GGAs) of density functional theory, their xc potentials v_xc are strongly deficient, yielding upshifts of ca. 5 eV in the orbital energy spectrum (on the order of 50% of high-lying valence orbital energies). The GGAs share this deficiency with the local density approximation (LDA). We argue that this error is not caused by the incorrect long-range asymptotics of v_xc or by self-interaction error. It arises from incorrect density dependencies of LDA and GGA exchange functionals leading to incorrect (too repulsive) functional derivatives (i.e., response parts of the potentials). The v_xc potential is partitioned into the potential of the xc hole v_xc^hole (twice the xc energy density ε_xc), which determines E_xc, and the response potential v_resp, which does not contribute to E_xc explicitly. The substantial upshift of LDA/GGA orbital energies is due to a too repulsive LDA exchange response potential v_x,resp^LDA in the bulk region. Retaining the LDA exchange hole potential plus the B88 gradient correction to it, but replacing the response parts of these potentials by the model orbital-dependent response potential v_x,resp^GLLB of Gritsenko et al. [Phys. Rev. A 51, 1944 (1995)], which has the proper step-wise form, improves the orbital energies by more than an order of magnitude. Examples are given for the prototype molecules: dihydrogen, dinitrogen, carbon monoxide, ethylene, formaldehyde, and formic acid.

12. Interval polynomial positivity

NASA Technical Reports Server (NTRS)

Bose, N. K.; Kim, K. D.

1989-01-01

It is shown that a univariate interval polynomial is globally positive if and only if two extreme polynomials are globally positive. It is shown that the global positivity property of a bivariate interval polynomial is completely determined by four extreme bivariate polynomials. The cardinality of the determining set for k-variate interval polynomials is 2^k. One of many possible generalizations, where vertex implication for global positivity holds, is made by considering the parameter space to be the set dual of a boxed domain.
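The abstract's vertex implication can be illustrated numerically. The sketch below is an illustrative grid check (not the paper's proof) using one natural choice of the two extreme polynomials: for x ≥ 0 the pointwise minimum over the coefficient box takes all lower endpoints, while for x ≤ 0 it takes lower endpoints on even powers and upper endpoints on odd powers, since odd powers of x are negative there.

```python
import numpy as np

def interval_poly_positive(lo, hi, xs):
    """Grid check of vertex implication for a univariate interval
    polynomial p(x) = sum_i a_i x**i with a_i in [lo[i], hi[i]].

    Positivity of the two extreme polynomials on the grid implies
    positivity of every polynomial in the family on that grid."""
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    i = np.arange(lo.size)
    q_pos = lo                               # pointwise minimum for x >= 0
    q_neg = np.where(i % 2 == 0, lo, hi)     # pointwise minimum for x <= 0
    powers = xs[:, None] ** i                # Vandermonde-style evaluation grid
    vals = np.where(xs >= 0, powers @ q_pos, powers @ q_neg)
    return bool(np.all(vals > 0))

# Example family: a0 in [1, 2], a1 in [-0.5, 0.5], a2 in [1, 1.5];
# both extreme quadratics have negative discriminant, so the check passes.
xs = np.linspace(-5.0, 5.0, 1001)
ok = interval_poly_positive([1, -0.5, 1], [2, 0.5, 1.5], xs)
```

A grid check of course only certifies positivity on the sampled points; the paper's contribution is that checking the finitely many extreme (vertex) polynomials suffices for genuine global positivity.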

13. GW approximation study of late transition metal oxides: Spectral function clusters around Fermi energy as the mechanism behind smearing in momentum density

Khidzir, S. M.; Ibrahim, K. N.; Wan Abdullah, W. A. T.

2016-05-01

Momentum density studies are the key tool in Fermiology in which electronic structure calculations have proven to be the integral underlying methodology. Agreements between experimental techniques such as Compton scattering experiments and conventional density functional calculations for late transition metal oxides (TMOs) prove elusive. In this work, we report improved momentum densities of late TMOs using the GW approximation (GWA) which appears to smear the momentum density creating occupancy above the Fermi break. The smearing is found to be largest for NiO and we will show that it is due to more spectra surrounding the NiO Fermi energy compared to the spectra around the Fermi energies of FeO and CoO. This highlights the importance of the positioning of the Fermi energy and the role played by the self-energy term to broaden the spectra and we elaborate on this point by comparing the GWA momentum densities to their LDA counterparts and conclude that the larger difference at the intermediate level shows that the self-energy has its largest effect in this region. We finally analyzed the quasiparticle renormalization factor and conclude that an increase of electrons in the d-orbital from FeO to NiO plays a vital role in changing the magnitude of electron correlation via the self-energy.

14. Volumetric measurements of bone mineral density of the lumbar spine: comparison of three geometrical approximations using dual-energy X-ray absorptiometry (DXA)

PubMed

Schreuder, M F; van Driel, A P; van Lingen, A; Roos, J C; de Ridder, C M; Manoliu, R A; David, E F; Netelenbos, J C

1998-08-01

Measurements of bone mineral density using dual-energy X-ray absorptiometry (DXA) give area values (g cm⁻²) rather than true volumetric values (g cm⁻³). To calculate the vertebral volume using planar postero-anterior and lateral DXA values, several different geometrical approximations were used: cubic, cylindrical with a circular cross-section, and cylindrical with an elliptical cross-section. The aim of this study was to compare these geometrical approximations with each other and with a reference standard, defined as the volume found on a computed tomographic (CT) scan. L2 and L3 were evaluated in a phantom study. Volume approximations by the cube or cylinder with circular cross-section geometry showed more than a 50% overestimation (range 54-74%). However, the elliptical cylinder approach showed very good agreement: 2.1% and 1.2% for L2 and L3, respectively, when compared to the CT volumes. In addition, we performed four patient studies with both CT and DXA to evaluate the elliptical cylinder estimate in a clinical setting. For L2 and L3, the mean relative difference was less than 2%. We conclude that the elliptical cylinder approach results in the most accurate bone volume estimates in both the phantom and patients. PMID:9751926
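The cylindrical approximations compared above reduce to elementary volume formulas. A minimal sketch with hypothetical vertebral dimensions (illustrative numbers, not measurements from the study) shows why a circular cross-section overestimates volume whenever the vertebra is wider than it is deep:

```python
import math

def vertebral_volumes(width, depth, height):
    """Volume estimates (cm^3) for one vertebral body from planar DXA
    dimensions: width from the PA scan, depth from the lateral scan."""
    circular = math.pi * (width / 2) ** 2 * height             # circular cross-section
    elliptical = math.pi * (width / 2) * (depth / 2) * height  # elliptical cross-section
    return circular, elliptical

# Hypothetical lumbar-vertebra dimensions in cm: width 4.4, depth 2.9, height 3.0
circ, ellip = vertebral_volumes(4.4, 2.9, 3.0)
overestimate = circ / ellip - 1   # = width/depth - 1, about 52% here
```

The ratio of the two estimates is simply width/depth, so a vertebra roughly 50% wider than it is deep produces the >50% overestimation reported for the circular model; volumetric density then follows as bone mineral content divided by the chosen volume.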

15. Reaction dynamics of D+H2 --> DH+H: Effects of potential energy surface topography and usefulness of the constant centrifugal potential approximation

Takada, Shoji; Ohsaki, Akihiko; Nakamura, Hiroki

1992-01-01

Two findings are reported for the D+H2→DH+H reaction on the basis of the exact quantum mechanical calculation for J=0, where J is total angular momentum. First, with use of the Liu-Siegbahn-Truhlar-Horowitz (LSTH) surface and the Varandas surface, we demonstrate that a rather small difference in potential energy surface (PES) induces a surprisingly large effect on reaction dynamics. Two origins of the discrepancy are pointed out and analyzed: (1) Noncollinear conformation in the reaction zone contributes to the reaction significantly despite the fact that the minimum energy path and the saddle point are located in the collinear configuration. (2) A difference in the distant part of PES also causes a discrepancy in the reaction dynamics indirectly, although this effect is much smaller than (1). Secondly, we investigate the validity of the constant centrifugal potential approximation (CCPA) based on the accurate results for J=0. The use of CCPA to estimate total cross section and rate constant is again proved to have practical utility as in the cases of the sudden and adiabatic approximations.

16. Proper Interval Vertex Deletion

Villanger, Yngve

Deleting a minimum number of vertices from a graph to obtain a proper interval graph is an NP-complete problem. At WG 2010, van Bevern et al. gave an O((14k + 14)^(k+1) kn^6) time algorithm by combining iterative compression, branching, and a greedy algorithm. We show that there exists a simple greedy O(n + m) time algorithm that solves the Proper Interval Vertex Deletion problem on {claw, net, tent, C4, C5, C6}-free graphs. Combining this with branching on the forbidden structures claw, net, tent, C4, C5, and C6 enables us to obtain an O(6^k kn^6) time algorithm for Proper Interval Vertex Deletion, where k is the number of deleted vertices.
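
Branching on forbidden structures requires locating them in the graph. A brute-force sketch for one of them, the claw (an induced K_{1,3}), assuming a simple adjacency-set representation; the paper's O(n + m) greedy itself is not reproduced here:

```python
from itertools import combinations

def find_induced_claw(adj):
    """Return (center, leaves) of an induced claw (K_{1,3}) in the graph
    given as {vertex: set(neighbours)}, or None if the graph is claw-free.
    Brute force over neighbour triples, for illustration only."""
    for c, nbrs in adj.items():
        for a, b, d in combinations(sorted(nbrs), 3):
            # the three leaves must be pairwise non-adjacent
            # for the claw to be induced
            if b not in adj[a] and d not in adj[a] and d not in adj[b]:
                return c, (a, b, d)
    return None

# A star K_{1,3} is itself a claw; a triangle contains none.
star = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
# find_induced_claw(star) -> (0, (1, 2, 3))
```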

17. Pressure-induced phase transformations in alkali-metal hydrides calculated using an improved linear-muffin-tin-orbital-atomic-sphere-approximation energy scheme

Rodriguez, C. O.; Methfessel, M.

1992-01-01

A scheme for the calculation of total energies from first principles is described which is intermediate between the popular linear muffin-tin-orbital method in the atomic-sphere approximation (LMTO-ASA) and an exact full-potential treatment. The local-density total energy is evaluated accurately for the output charge density from the ASA potential. This method is applied to the study of static structural properties and the pressure-induced phase transformation from B1 (NaCl-structure) to B2 (CsCl-structure) phases for the partially ionic alkali-metal hydrides NaH and KH and the alkali halide NaCl. Good agreement with experimental transition pressures and volumes is obtained. The series NaH, KH, and NaCl shows the observed strong cation and weak anion dependence. Charge densities and band structures are given at zero and high pressure. Calculated energy-volume curves for LiH show no transition up to 1 Mbar, in agreement with experimental data.

18. A model for the bandgap energy of the dilute nitride InGaNAs alloys by modifying simplified coherent potential approximation

Zhao, Chuan-Zhen; Qu, You-Yang; Wei, Tong; Sun, Xiao-Dong; Wang, Sha-Sha; Lu, Ke-Qing

2014-03-01

In this paper, a model describing the bandgap energy of the dilute nitride alloy InxGa1-xNyAs1-y is developed based on the modification of simplified coherent potential approximation (MSCPA) and the band anti-crossing model (BAC). The parameters in the model are obtained by fitting the experimental bandgap energies of the ternary alloys InGaAs, InGaN, GaNAs and InNAs. It is found that the results agree well with the experimental data. We also find that although the bandgap energies of InxGa1-xNyAs1-y and InxGa1-xAs can be calculated by using MSCPA, the physical mechanisms for the bandgap evolution of InxGa1-xNyAs1-y and InxGa1-xAs are very different. In addition, it is found that the model in this work may be used in a larger composition range than the BAC model.

19. Efficient modal-expansion discrete-dipole approximation: Application to the simulation of optical extinction and electron energy-loss spectroscopies

Guillaume, Stéphane-Olivier; de Abajo, F. Javier García; Henrard, Luc

2013-12-01

An efficient procedure is introduced for the calculation of the optical response of individual and coupled metallic nanoparticles in the framework of the discrete-dipole approximation (DDA). We introduce a modal expansion in the basis set of discrete dipoles and show that a few suitably selected modes are sufficient to compute optical spectra with reasonable accuracy, thus reducing the required numerical effort relative to other DDA approaches. Our method offers a natural framework for the study of localized plasmon modes, including plasmon hybridization. As a proof of concept, we investigate optical extinction and electron energy-loss spectra of monomers, dimers, and quadrumers formed by flat silver squares. This method should find application to the simulation of complex particle arrays, which was previously prohibitive.

20. Water 16-mers and hexamers: assessment of the three-body and electrostatically embedded many-body approximations of the correlation energy or the nonlocal energy as ways to include cooperative effects.

PubMed

Qi, Helena W; Leverentz, Hannah R; Truhlar, Donald G

2013-05-30

This work presents a new fragment method, the electrostatically embedded many-body expansion of the nonlocal energy (EE-MB-NE), and shows that it, along with the previously proposed electrostatically embedded many-body expansion of the correlation energy (EE-MB-CE), produces accurate results for large systems at the level of CCSD(T) coupled cluster theory. We primarily study water 16-mers, but we also test the EE-MB-CE method on water hexamers. We analyze the distributions of two-body and three-body terms to show why the many-body expansion of the electrostatically embedded correlation energy converges faster than the many-body expansion of the entire electrostatically embedded interaction potential. The average magnitude of the dimer contributions to the pairwise additive (PA) term of the correlation energy (which neglects cooperative effects) is only one-half of that of the average dimer contribution to the PA term of the expansion of the total energy; this explains why the mean unsigned error (MUE) of the EE-PA-CE approximation is only one-half of that of the EE-PA approximation. Similarly, the average magnitude of the trimer contributions to the three-body (3B) term of the EE-3B-CE approximation is only one-fourth of that of the EE-3B approximation, and the MUE of the EE-3B-CE approximation is one-fourth that of the EE-3B approximation. Finally, we test the efficacy of two- and three-body density functional corrections. One such density functional correction method, the new EE-PA-NE method, with the OLYP or the OHLYP density functional (where the OHLYP functional is the OptX exchange functional combined with the LYP correlation functional multiplied by 0.5), has the best performance-to-price ratio of any method whose computational cost scales as the third power of the number of monomers and is competitive in accuracy in the tests presented here with even the electrostatically embedded three-body approximation. PMID:23627665

1. High resolution time interval counter

DOEpatents

Condreva, Kenneth J.

1994-01-01

A high resolution counter circuit measures the time interval between the occurrence of an initial and a subsequent electrical pulse to two nanoseconds resolution using an eight megahertz clock. The circuit includes a main counter for receiving electrical pulses and generating a binary word--a measure of the number of eight megahertz clock pulses occurring between the signals. A pair of first and second pulse stretchers receive the signal and generate a pair of output signals whose widths are approximately sixty-four times the time between the receipt of the signals by the respective pulse stretchers and the receipt by the respective pulse stretchers of a second subsequent clock pulse. Output signals are thereafter supplied to a pair of start and stop counters operable to generate a pair of binary output words representative of the measure of the width of the pulses to a resolution of two nanoseconds. Errors associated with the pulse stretchers are corrected by providing calibration data to both stretcher circuits, and recording start and stop counter values. Stretched initial and subsequent signals are combined with autocalibration data and supplied to an arithmetic logic unit to determine the time interval in nanoseconds between the pair of electrical pulses being measured.
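
The interpolation arithmetic can be sketched roughly as follows. This is an illustrative reconstruction (the function name and the sign convention are assumptions, not taken from the patent): an 8 MHz clock gives a 125 ns period, and a x64 pulse stretcher subdivides it into 125/64, or approximately 2 ns, steps.

```python
CLOCK_NS = 125.0   # 8 MHz clock period in nanoseconds
STRETCH = 64       # pulse-stretcher gain

def interval_ns(main_count, start_ticks, stop_ticks):
    """Illustrative reconstruction of the counter arithmetic (not the
    patented circuit's exact logic): the main counter supplies whole
    clock periods, while the start/stop stretcher counts supply the
    fractional periods between each pulse and the next clock edge,
    in units of CLOCK_NS / STRETCH."""
    return (main_count * CLOCK_NS
            + (start_ticks - stop_ticks) * CLOCK_NS / STRETCH)

# Ten whole periods plus half a period resolved at the start edge:
# interval_ns(10, 32, 0) -> 1312.5 ns, with ~2 ns granularity
```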

2. High resolution time interval counter

DOEpatents

Condreva, K.J.

1994-07-26

A high resolution counter circuit measures the time interval between the occurrence of an initial and a subsequent electrical pulse to two nanoseconds resolution using an eight megahertz clock. The circuit includes a main counter for receiving electrical pulses and generating a binary word--a measure of the number of eight megahertz clock pulses occurring between the signals. A pair of first and second pulse stretchers receive the signal and generate a pair of output signals whose widths are approximately sixty-four times the time between the receipt of the signals by the respective pulse stretchers and the receipt by the respective pulse stretchers of a second subsequent clock pulse. Output signals are thereafter supplied to a pair of start and stop counters operable to generate a pair of binary output words representative of the measure of the width of the pulses to a resolution of two nanoseconds. Errors associated with the pulse stretchers are corrected by providing calibration data to both stretcher circuits, and recording start and stop counter values. Stretched initial and subsequent signals are combined with autocalibration data and supplied to an arithmetic logic unit to determine the time interval in nanoseconds between the pair of electrical pulses being measured. 3 figs.

3. How accurate is the strongly orthogonal geminal theory in predicting excitation energies? Comparison of the extended random phase approximation and the linear response theory approaches

SciTech Connect

Pernal, Katarzyna; Chatterjee, Koushik; Kowalski, Piotr H.

2014-01-07

Performance of the antisymmetrized product of strongly orthogonal geminal (APSG) ansatz in describing ground states of molecules has been extensively explored in the recent years. Not much is known, however, about possibilities of obtaining excitation energies from methods that would rely on the APSG ansatz. In the paper we investigate the recently proposed extended random phase approximations, ERPA and ERPA2, that employ APSG reduced density matrices. We also propose a time-dependent linear response APSG method (TD-APSG). Its relation to the recently proposed phase including natural orbital theory is elucidated. The methods are applied to Li2, BH, H2O, and CH2O molecules at equilibrium geometries and in the dissociating limits. It is shown that ERPA2 and TD-APSG perform better in describing double excitations than ERPA due to inclusion of the so-called diagonal double elements. Analysis of the potential energy curves of Li2, BH, and H2O reveals that ERPA2 and TD-APSG describe correctly excitation energies of dissociating molecules if orbitals involved in breaking bonds are involved. For single excitations of molecules at equilibrium geometries the accuracy of the APSG-based methods approaches that of the time-dependent Hartree-Fock method with the increase of the system size. A possibility of improving the accuracy of the TD-APSG method for single excitations by splitting the electron-electron interaction operator into the long- and short-range terms and employing density functionals to treat the latter is presented.

4. How accurate is the strongly orthogonal geminal theory in predicting excitation energies? Comparison of the extended random phase approximation and the linear response theory approaches

Pernal, Katarzyna; Chatterjee, Koushik; Kowalski, Piotr H.

2014-01-01

Performance of the antisymmetrized product of strongly orthogonal geminal (APSG) ansatz in describing ground states of molecules has been extensively explored in the recent years. Not much is known, however, about possibilities of obtaining excitation energies from methods that would rely on the APSG ansatz. In the paper we investigate the recently proposed extended random phase approximations, ERPA and ERPA2, that employ APSG reduced density matrices. We also propose a time-dependent linear response APSG method (TD-APSG). Its relation to the recently proposed phase including natural orbital theory is elucidated. The methods are applied to Li2, BH, H2O, and CH2O molecules at equilibrium geometries and in the dissociating limits. It is shown that ERPA2 and TD-APSG perform better in describing double excitations than ERPA due to inclusion of the so-called diagonal double elements. Analysis of the potential energy curves of Li2, BH, and H2O reveals that ERPA2 and TD-APSG describe correctly excitation energies of dissociating molecules if orbitals involved in breaking bonds are involved. For single excitations of molecules at equilibrium geometries the accuracy of the APSG-based methods approaches that of the time-dependent Hartree-Fock method with the increase of the system size. A possibility of improving the accuracy of the TD-APSG method for single excitations by splitting the electron-electron interaction operator into the long- and short-range terms and employing density functionals to treat the latter is presented.

5. Interval arithmetic operations for uncertainty analysis with correlated interval variables

Jiang, Chao; Fu, Chun-Ming; Ni, Bing-Yu; Han, Xu

2016-08-01

A new interval arithmetic method is proposed to solve interval functions with correlated intervals, through which the overestimation problem existing in interval analysis can be significantly alleviated. The correlation between interval parameters is defined by the multidimensional parallelepiped model, which is convenient for describing correlated and independent interval variables in a unified framework. The original interval variables with correlation are transformed into a standard space without correlation, and the relationship between the original variables and the standard interval variables is thereby obtained. The expressions of the four basic interval arithmetic operations, namely addition, subtraction, multiplication, and division, are given in the standard space. Finally, several numerical examples and a two-step bar are used to demonstrate the effectiveness of the proposed method.
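
For independent intervals, the four basic operations take their standard form. The sketch below (plain Python, not the paper's correlated-interval method) also demonstrates the overestimation, or dependency, problem that motivates the work: interval arithmetic treats both occurrences of x in x - x as independent.

```python
def iadd(x, y): return (x[0] + y[0], x[1] + y[1])

def isub(x, y): return (x[0] - y[1], x[1] - y[0])

def imul(x, y):
    # the product interval is spanned by the four endpoint products
    p = [x[0] * y[0], x[0] * y[1], x[1] * y[0], x[1] * y[1]]
    return (min(p), max(p))

def idiv(x, y):
    if y[0] <= 0 <= y[1]:
        raise ZeroDivisionError("divisor interval contains zero")
    return imul(x, (1.0 / y[1], 1.0 / y[0]))

x = (1.0, 2.0)
# Treating both operands as independent gives [-1, 1] instead of the
# exact [0, 0] -- the overestimation that correlation models alleviate.
print(isub(x, x))   # (-1.0, 1.0)
```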

6. Approximate flavor symmetries

SciTech Connect

Rasin, A.

1994-04-01

We discuss the idea of approximate flavor symmetries. The relation between approximate flavor symmetries and natural flavor conservation and democracy models is explored. Implications for neutrino physics are also discussed.

7. Interval-valued random functions and the kriging of intervals

SciTech Connect

Diamond, P.

1988-04-01

Estimation procedures using data that include some values known to lie within certain intervals are usually regarded as problems of constrained optimization. A different approach is used here. Intervals are treated as elements of a positive cone, obeying the arithmetic of interval analysis, and positive interval-valued random functions are discussed. A kriging formalism for interval-valued data is developed. It provides estimates that are themselves intervals. In this context, the condition that kriging weights be positive is seen to arise in a natural way. A numerical example is given, and the extension to universal kriging is sketched.
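
With nonnegative weights, a weighted sum of intervals reduces to endpoint-wise weighted sums, which is one way to see why positive kriging weights arise naturally here. A minimal sketch (the helper name is hypothetical; the weights themselves would come from solving the usual kriging system):

```python
def interval_weighted_sum(weights, intervals):
    """Weighted sum of intervals [a_i, b_i] with nonnegative weights:
    the result is the interval of the weighted endpoint sums.
    Illustrative only; not the kriging formalism of the paper."""
    if any(w < 0 for w in weights):
        raise ValueError("nonnegative weights required for this formula")
    lo = sum(w * a for w, (a, b) in zip(weights, intervals))
    hi = sum(w * b for w, (a, b) in zip(weights, intervals))
    return lo, hi

# Two interval observations with equal weights:
# interval_weighted_sum([0.5, 0.5], [(1, 2), (3, 5)]) -> (2.0, 3.5)
```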

8. Generalized Vibrational Perturbation Theory for Rotovibrational Energies of Linear, Symmetric and Asymmetric Tops: Theory, Approximations, and Automated Approaches to Deal with Medium-to-Large Molecular Systems

PubMed Central

Piccardo, Matteo; Bloino, Julien; Barone, Vincenzo

2015-01-01

Models going beyond the rigid-rotor and the harmonic oscillator levels are mandatory for providing accurate theoretical predictions for several spectroscopic properties. Different strategies have been devised for this purpose. Among them, the treatment by perturbation theory of the molecular Hamiltonian after its expansion in power series of products of vibrational and rotational operators, also referred to as vibrational perturbation theory (VPT), is particularly appealing for its computational efficiency to treat medium-to-large systems. Moreover, generalized (GVPT) strategies combining the use of perturbative and variational formalisms can be adopted to further improve the accuracy of the results, with the first approach used for weakly coupled terms, and the second one to handle tightly coupled ones. In this context, the GVPT formulation for asymmetric, symmetric, and linear tops is revisited and fully generalized to both minima and first-order saddle points of the molecular potential energy surface. The computational strategies and approximations that can be adopted in dealing with GVPT computations are pointed out, with a particular attention devoted to the treatment of symmetry and degeneracies. A number of tests and applications are discussed, to show the possibilities of the developments, as regards both the variety of treatable systems and eligible methods. © 2015 Wiley Periodicals, Inc. PMID:26345131

9. Low-energy dipole excitations in neon isotopes and N=16 isotones within the quasiparticle random-phase approximation and the Gogny force

SciTech Connect

Martini, M.; Peru, S.; Dupuis, M.

2011-03-15

Low-energy dipole excitations in neon isotopes and N=16 isotones are calculated with a fully consistent axially-symmetric-deformed quasiparticle random phase approximation (QRPA) approach based on Hartree-Fock-Bogolyubov (HFB) states. The same Gogny D1S effective force has been used both in HFB and QRPA calculations. The microscopic structure of these low-lying resonances, as well as the behavior of proton and neutron transition densities, are investigated in order to determine the isoscalar or isovector nature of the excitations. It is found that the N=16 isotones 24O, 26Ne, 28Mg, and 30Si are characterized by a similar behavior. The occupation of the 2s1/2 neutron orbit turns out to be crucial, leading to nontrivial transition densities and to small but finite collectivity. Some low-lying dipole excitations of 28Ne and 30Ne, characterized by transitions involving the ν1d3/2 state, present a more collective behavior and isoscalar transition densities. A collective proton low-lying excitation is identified in the 18Ne nucleus.

10. Relativistic regular approximations revisited: An infinite-order relativistic approximation

SciTech Connect

Dyall, K.G.; van Lenthe, E.

1999-07-01

The concept of the regular approximation is presented as the neglect of the energy dependence of the exact Foldy-Wouthuysen transformation of the Dirac Hamiltonian. Expansion of the normalization terms leads immediately to the zeroth-order regular approximation (ZORA) and first-order regular approximation (FORA) Hamiltonians as the zeroth- and first-order terms of the expansion. The expansion may be taken to infinite order by using an un-normalized Foldy-Wouthuysen transformation, which results in the ZORA Hamiltonian and a nonunit metric. This infinite-order regular approximation, IORA, has eigenvalues which differ from the Dirac eigenvalues by order E^3/c^4 for a hydrogen-like system, which is a considerable improvement over the ZORA eigenvalues, and similar to the nonvariational FORA energies. A further perturbation analysis yields a third-order correction to the IORA energies, TIORA. Results are presented for several systems including the neutral U atom. The IORA eigenvalues for all but the 1s spinor of the neutral system are superior even to the scaled ZORA energies, which are exact for the hydrogenic system. The third-order correction reduces the IORA error for the inner orbitals to a very small fraction of the Dirac eigenvalue. © 1999 American Institute of Physics.

11. Approximating random quantum optimization problems

Hsu, B.; Laumann, C. R.; Läuchli, A. M.; Moessner, R.; Sondhi, S. L.

2013-06-01

We report a cluster of results regarding the difficulty of finding approximate ground states to typical instances of the quantum satisfiability problem k-body quantum satisfiability (k-QSAT) on large random graphs. As an approximation strategy, we optimize the solution space over “classical” product states, which in turn introduces a novel autonomous classical optimization problem, PSAT, over a space of continuous degrees of freedom rather than discrete bits. Our central results are (i) the derivation of a set of bounds and approximations in various limits of the problem, several of which we believe may be amenable to a rigorous treatment; (ii) a demonstration that an approximation based on a greedy algorithm borrowed from the study of frustrated magnetism performs well over a wide range in parameter space, and its performance reflects the structure of the solution space of random k-QSAT. Simulated annealing exhibits metastability in similar “hard” regions of parameter space; and (iii) a generalization of belief propagation algorithms introduced for classical problems to the case of continuous spins. This yields both approximate solutions, as well as insights into the free energy “landscape” of the approximation problem, including a so-called dynamical transition near the satisfiability threshold. Taken together, these results allow us to elucidate the phase diagram of random k-QSAT in a two-dimensional energy-density-clause-density space.

12. Experimenting with musical intervals

Lo Presto, Michael C.

2003-07-01

When two tuning forks of different frequency are sounded simultaneously the result is a complex wave with a repetition frequency that is the fundamental of the harmonic series to which both frequencies belong. The ear perceives this 'musical interval' as a single musical pitch with a sound quality produced by the harmonic spectrum responsible for the waveform. This waveform can be captured and displayed with data collection hardware and software. The fundamental frequency can then be calculated and compared with what would be expected from the frequencies of the tuning forks. Also, graphing software can be used to determine equations for the waveforms and predict their shapes. This experiment could be used in an introductory physics or musical acoustics course as a practical lesson in superposition of waves, basic Fourier series and the relationship between some of the ear's subjective perceptions of sound and the physical properties of the waves that cause them.
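
For integer-valued fork frequencies, the repetition frequency of the combined waveform is their greatest common divisor, the fundamental of the harmonic series containing both. A minimal sketch of the calculation the students would compare against:

```python
from math import gcd

def repetition_frequency(f1_hz, f2_hz):
    """Fundamental (Hz) of the harmonic series containing both
    integer frequencies: their greatest common divisor."""
    return gcd(f1_hz, f2_hz)

# A perfect fifth, 440 Hz and 660 Hz, repeats at 220 Hz:
# repetition_frequency(440, 660) -> 220
```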

13. A fast parallel code for calculating energies and oscillator strengths of many-electron atoms at neutron star magnetic field strengths in adiabatic approximation

Engel, D.; Klews, M.; Wunner, G.

2009-02-01

We have developed a new method for the fast computation of wavelengths and oscillator strengths for medium-Z atoms and ions, up to iron, at neutron star magnetic field strengths. The method is a parallelized Hartree-Fock approach in adiabatic approximation based on finite-element and B-spline techniques. It turns out that typically 15-20 finite elements are sufficient to calculate energies to within a relative accuracy of 10-5 in 4 or 5 iteration steps using B-splines of 6th order, with parallelization speed-ups of 20 on a 26-processor machine. Results have been obtained for the energies of the ground states and excited levels and for the transition strengths of astrophysically relevant atoms and ions in the range Z=2…26 in different ionization stages. Catalogue identifier: AECC_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AECC_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 3845 No. of bytes in distributed program, including test data, etc.: 27 989 Distribution format: tar.gz Programming language: MPI/Fortran 95 and Python Computer: Cluster of 1-26 HP Compaq dc5750 Operating system: Fedora 7 Has the code been vectorised or parallelized?: Yes RAM: 1 GByte Classification: 2.1 External routines: MPI/GFortran, LAPACK, PyLab/Matplotlib Nature of problem: Calculations of synthetic spectra [1] of strongly magnetized neutron stars are bedevilled by the lack of data for atoms in intense magnetic fields. While the behaviour of hydrogen and helium has been investigated in detail (see, e.g., [2]), complete and reliable data for heavier elements, in particular iron, are still missing. Since neutron stars are formed by the collapse of the iron cores of massive stars, it may be assumed that their atmospheres contain an iron plasma. Our objective is to fill the gap

15. Volatility return intervals analysis of the Japanese market

Jung, W.-S.; Wang, F. Z.; Havlin, S.; Kaizoji, T.; Moon, H.-T.; Stanley, H. E.

2008-03-01

We investigate scaling and memory effects in return intervals between price volatilities above a certain threshold q for the Japanese stock market using daily and intraday data sets. We find that the distribution of return intervals can be approximated by a scaling function that depends only on the ratio between the return interval τ and its mean <τ>. We also find memory effects such that a large (or small) return interval follows a large (or small) interval by investigating the conditional distribution and mean return interval. The results are similar to previous studies of other markets and indicate that similar statistical features appear in different financial markets. We also compare our results between the period before and after the big crash at the end of 1989. We find that scaling and memory effects of the return intervals show similar features although the statistical properties of the returns are different.
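
Extracting the return intervals above a threshold q and normalising by their mean (the ratio τ/&lt;τ&gt; on which the scaling function depends) can be sketched as follows; the data and threshold are made up for illustration:

```python
def scaled_return_intervals(series, q):
    """Times between successive exceedances of threshold q,
    normalised by their mean (tau / <tau>)."""
    times = [t for t, v in enumerate(series) if v > q]
    intervals = [b - a for a, b in zip(times, times[1:])]
    if not intervals:
        return []
    mean = sum(intervals) / len(intervals)
    return [tau / mean for tau in intervals]

vols = [0.1, 0.9, 0.2, 0.3, 0.8, 0.1, 0.7, 0.9, 0.2]
# exceedances of q = 0.5 occur at t = 1, 4, 6, 7 -> intervals 3, 2, 1
```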

16. Approximate spatial reasoning

NASA Technical Reports Server (NTRS)

Dutta, Soumitra

1988-01-01

A model for approximate spatial reasoning using fuzzy logic to represent the uncertainty in the environment is presented. Algorithms are developed which can be used to reason about spatial information expressed in the form of approximate linguistic descriptions similar to the kind of spatial information processed by humans. Particular attention is given to static spatial reasoning.

17. An interval model updating strategy using interval response surface models

Fang, Sheng-En; Zhang, Qiu-Hu; Ren, Wei-Xin

2015-08-01

Stochastic model updating provides an effective way of handling uncertainties existing in real-world structures. In general, probabilistic theories, fuzzy mathematics or interval analyses are involved in the solution of inverse problems. In practice, however, probability distributions or membership functions of structural parameters are often unavailable due to insufficient information about a structure. In such cases an interval model updating procedure shows its superiority in terms of problem simplification, since only the upper and lower bounds of parameters and responses are sought. To this end, this study develops a new concept of interval response surface models for the purpose of efficiently implementing the interval model updating procedure. The frequent interval overestimation due to the use of interval arithmetic can be maximally avoided, leading to accurate estimation of parameter intervals. Meanwhile, the establishment of an interval inverse problem is highly simplified, accompanied by a saving of computational costs. By this means a relatively simple and cost-efficient interval updating process can be achieved. Lastly, the feasibility and reliability of the developed method have been verified against a numerical mass-spring system and also against a set of experimentally tested steel plates.

18. Dependence of the specific energy of the β/α interface in the VT6 titanium alloy on the heating temperature in the interval 600-975°C

Murzinova, M. A.; Zherebtsov, S. V.; Salishchev, G. A.

2016-04-01

The specific energy of interphase boundaries is an important characteristic of multiphase alloys, because it determines in many respects their microstructural stability and properties during processing and service. We analyze the variation of the specific energy of the β/α interface in the VT6 titanium alloy at temperatures from 600 to 975°C. The analysis is based on the model of a ledge interphase boundary and the method for computation of its energy developed by van der Merwe and Shiflet [33, 34]. Calculations use the available results of measurements of the lattice parameters of the phases in the indicated temperature interval and their chemical composition. In addition, we take into account the experimental data and the results of simulation of the effect of temperature and phase composition on the elastic moduli of the α and β phases in titanium alloys. It is shown that when the temperature decreases from 975 to 600°C, the specific energy of the β/α interface increases from 0.15 to 0.24 J/m². The main contribution to the interfacial energy (about 85%) comes from edge dislocations accommodating the misfit in the direction [0001]α || [110]β. The energy associated with the accommodation of the misfit in the directions [-2110]α || [1-11]β and [0-110]α || [-112]β due to the formation of "ledges" and tilt misfit dislocations is low and increases slightly upon cooling.

19. Approximation for nonresonant beam target fusion reactivities

SciTech Connect

Mikkelsen, D.R.

1988-11-01

The beam target fusion reactivity for a monoenergetic beam in a Maxwellian target is approximately evaluated for nonresonant reactions. The approximation is accurate for the DD and TT fusion reactions to better than 4% for all beam energies up to 300 keV and all ion temperatures up to 2/3 of the beam energy. 12 refs., 1 fig., 1 tab.

20. Intervality and coherence in complex networks.

PubMed

Domínguez-García, Virginia; Johnson, Samuel; Muñoz, Miguel A

2016-06-01

Food webs-networks of predators and prey-have long been known to exhibit "intervality": species can generally be ordered along a single axis in such a way that the prey of any given predator tend to lie on unbroken compact intervals. Although the meaning of this axis-usually identified with a "niche" dimension-has remained a mystery, it is assumed to lie at the basis of the highly non-trivial structure of food webs. With this in mind, most trophic network modelling has for decades been based on assigning species a niche value by hand. However, we argue here that intervality should not be considered the cause but rather a consequence of food-web structure. First, analysing a set of 46 empirical food webs, we find that they also exhibit predator intervality: the predators of any given species are as likely to be contiguous as the prey are, but in a different ordering. Furthermore, this property is not exclusive of trophic networks: several networks of genes, neurons, metabolites, cellular machines, airports, and words are found to be approximately as interval as food webs. We go on to show that a simple model of food-web assembly which does not make use of a niche axis can nevertheless generate significant intervality. Therefore, the niche dimension (in the sense used for food-web modelling) could in fact be the consequence of other, more fundamental structural traits. We conclude that a new approach to food-web modelling is required for a deeper understanding of ecosystem assembly, structure, and function, and propose that certain topological features thought to be specific of food webs are in fact common to many complex networks. PMID:27368797

1. Intervality and coherence in complex networks

Domínguez-García, Virginia; Johnson, Samuel; Muñoz, Miguel A.

2016-06-01

Food webs—networks of predators and prey—have long been known to exhibit "intervality": species can generally be ordered along a single axis in such a way that the prey of any given predator tend to lie on unbroken compact intervals. Although the meaning of this axis—usually identified with a "niche" dimension—has remained a mystery, it is assumed to lie at the basis of the highly non-trivial structure of food webs. With this in mind, most trophic network modelling has for decades been based on assigning species a niche value by hand. However, we argue here that intervality should not be considered the cause but rather a consequence of food-web structure. First, analysing a set of 46 empirical food webs, we find that they also exhibit predator intervality: the predators of any given species are as likely to be contiguous as the prey are, but in a different ordering. Furthermore, this property is not exclusive of trophic networks: several networks of genes, neurons, metabolites, cellular machines, airports, and words are found to be approximately as interval as food webs. We go on to show that a simple model of food-web assembly which does not make use of a niche axis can nevertheless generate significant intervality. Therefore, the niche dimension (in the sense used for food-web modelling) could in fact be the consequence of other, more fundamental structural traits. We conclude that a new approach to food-web modelling is required for a deeper understanding of ecosystem assembly, structure, and function, and propose that certain topological features thought to be specific of food webs are in fact common to many complex networks.
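
The intervality property described above can be checked mechanically: under a given ordering of species, a predator's diet is "interval" if its prey occupy consecutive positions. A minimal sketch, with a toy food web and ordering invented for illustration:

```python
# Fraction of diets that are interval under one candidate ordering.
# (The hard part the paper addresses, finding a good ordering, is not
# attempted here.)

def is_interval(prey, position):
    """True if the prey set is contiguous under the given ordering."""
    if not prey:
        return True
    pos = sorted(position[p] for p in prey)
    return pos[-1] - pos[0] == len(pos) - 1

diets = {                       # predator -> set of prey (made up)
    "fox":   {"rabbit", "mouse"},
    "owl":   {"mouse", "beetle"},
    "heron": {"mouse", "fish"},     # gap at "beetle": not interval
}
ordering = ["rabbit", "mouse", "beetle", "fish"]
position = {sp: i for i, sp in enumerate(ordering)}

fraction = sum(is_interval(p, position) for p in diets.values()) / len(diets)
print(fraction)   # 2 of 3 diets are interval under this ordering
```

Measuring the same quantity on the transposed relation (predators of each species, under a possibly different ordering) gives the predator intervality the authors report.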

2. Calculator Function Approximation.

ERIC Educational Resources Information Center

Schelin, Charles W.

1983-01-01

The general algorithm used in most hand calculators to approximate elementary functions is discussed. Comments on tabular function values and on computer function evaluation are given first; then the CORDIC (Coordinate Rotation Digital Computer) scheme is described. (MNS)
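
A textbook rotation-mode CORDIC, the scheme named in the abstract, can be sketched in a few lines: cosine and sine are built from additions, halvings (bit shifts in hardware) and a small table of arctangent constants.

```python
import math

# Rotation-mode CORDIC: rotate the vector (K, 0) through a sequence of
# fixed micro-angles atan(2^-i), steering toward the target angle.
# Valid for |theta| below the sum of the table angles (~1.74 rad).

N = 32
ATAN = [math.atan(2.0 ** -i) for i in range(N)]
K = 1.0
for i in range(N):
    K /= math.sqrt(1.0 + 2.0 ** (-2 * i))    # pre-computed gain ~0.6073

def cordic(theta):
    x, y, z = K, 0.0, theta
    for i in range(N):
        d = 1.0 if z >= 0 else -1.0          # rotate toward z = 0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * ATAN[i]
    return x, y                              # (cos theta, sin theta)

c, s = cordic(0.5)
print(c, s)   # agrees with math.cos(0.5), math.sin(0.5)
```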

3. Approximate spatial reasoning

NASA Technical Reports Server (NTRS)

Dutta, Soumitra

1988-01-01

Much of human reasoning is approximate in nature. Formal models of reasoning traditionally try to be precise, rejecting the fuzziness of concepts in natural use and replacing them with non-fuzzy scientific explicata by a process of precisiation. As an alternative to this approach, it has been suggested that rather than regarding human reasoning processes as approximations to some more refined and exact logical process that can be carried out with mathematical precision, the essence and power of human reasoning lies in its capability to grasp and use inexact concepts directly. This view is supported by the widespread fuzziness of simple everyday terms (e.g., near, tall) and the complexity of ordinary tasks (e.g., cleaning a room). Spatial reasoning is an area where humans consistently reason approximately with demonstrably good results. Consider the case of crossing a traffic intersection. We have only an approximate idea of the locations and speeds of various obstacles (e.g., persons and vehicles), but we nevertheless manage to cross such traffic intersections without harm. The details of the mental processes which enable us to carry out such intricate tasks in such an apparently simple manner are not well understood. However, it is desirable to incorporate such approximate reasoning techniques in our computer systems. Approximate spatial reasoning is very important for intelligent mobile agents (e.g., robots), especially for those operating in uncertain, unknown, or dynamic domains.

4. Approximate kernel competitive learning.

PubMed

Wu, Jian-Sheng; Zheng, Wei-Shi; Lai, Jian-Huang

2015-03-01

Kernel competitive learning has been successfully used to achieve robust clustering. However, kernel competitive learning (KCL) is not scalable for large-scale data processing, because (1) it has to calculate and store the full kernel matrix, which is too large to be calculated and kept in memory, and (2) it cannot be computed in parallel. In this paper we develop a framework of approximate kernel competitive learning for processing large-scale datasets. The proposed framework consists of two parts. First, it derives an approximate kernel competitive learning (AKCL) method, which performs kernel competitive learning in a subspace via sampling. We provide solid theoretical analysis on why the proposed approximate modelling works for kernel competitive learning, and furthermore, we show that the computational complexity of AKCL is largely reduced. Second, we propose a pseudo-parallelled approximate kernel competitive learning (PAKCL) method based on a set-based kernel competitive learning strategy, which overcomes the obstacle of using parallel programming in kernel competitive learning and significantly accelerates the approximate kernel competitive learning for large-scale clustering. The empirical evaluation on publicly available datasets shows that the proposed AKCL and PAKCL perform comparably to KCL, with a large reduction in computational cost. Also, the proposed methods achieve more effective clustering performance in terms of clustering precision against related approximate clustering approaches. PMID:25528318

5. Sensory superstition on multiple interval schedules.

PubMed

Starr, B C; Staddon, J E

1982-03-01

Pigeons were exposed to multiple schedules in which an irregular repeating sequence of five stimulus components was correlated with the same reinforcement schedule throughout. Stable, idiosyncratic, response-rate differences developed across components. Components were rank-ordered by response rate; an approximately linear relation was found between rank order and the deviation of mean response rate from the overall mean rate. Nonzero slopes of this line were found for multiple fixed-interval and variable-time schedules and for multiple variable-interval schedules both when number of reinforcements was the same in all components and when it varied. The steepest function slopes were found in the variable schedules with relatively long interfood intervals and relatively short component durations. When just one stimulus was correlated with all components of a multiple variable-interval schedule, the slope of the line was close to zero. The results suggest that food-rate differences may be induced initially by different reactions to the stimuli and subsequently maintained by food. PMID:7069361

6. Sensory superstition on multiple interval schedules.

PubMed Central

Starr, B C; Staddon, J E

1982-01-01

Pigeons were exposed to multiple schedules in which an irregular repeating sequence of five stimulus components was correlated with the same reinforcement schedule throughout. Stable, idiosyncratic, response-rate differences developed across components. Components were rank-ordered by response rate; an approximately linear relation was found between rank order and the deviation of mean response rate from the overall mean rate. Nonzero slopes of this line were found for multiple fixed-interval and variable-time schedules and for multiple variable-interval schedules both when number of reinforcements was the same in all components and when it varied. The steepest function slopes were found in the variable schedules with relatively long interfood intervals and relatively short component durations. When just one stimulus was correlated with all components of a multiple variable-interval schedule, the slope of the line was close to zero. The results suggest that food-rate differences may be induced initially by different reactions to the stimuli and subsequently maintained by food. PMID:7069361

7. Effect Sizes, Confidence Intervals, and Confidence Intervals for Effect Sizes

ERIC Educational Resources Information Center

Thompson, Bruce

2007-01-01

The present article provides a primer on (a) effect sizes, (b) confidence intervals, and (c) confidence intervals for effect sizes. Additionally, various admonitions for reformed statistical practice are presented. For example, a very important implication of the realization that there are dozens of effect size statistics is that "authors must…
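
One simple way to report all three quantities the primer covers is sketched below: Cohen's d for two groups, with a percentile-bootstrap 95% confidence interval. The data are simulated; the article itself surveys many other effect-size statistics and interval constructions.

```python
import random
import statistics

# Cohen's d with a percentile-bootstrap CI (a hedged sketch, not the
# article's prescription).  Groups "a" and "b" are simulated data.

random.seed(0)
a = [random.gauss(0.5, 1.0) for _ in range(80)]   # "treatment"
b = [random.gauss(0.0, 1.0) for _ in range(80)]   # "control"

def cohens_d(x, y):
    nx, ny = len(x), len(y)
    pooled_sd = (((nx - 1) * statistics.variance(x)
                  + (ny - 1) * statistics.variance(y)) / (nx + ny - 2)) ** 0.5
    return (statistics.mean(x) - statistics.mean(y)) / pooled_sd

boot = sorted(
    cohens_d([random.choice(a) for _ in a], [random.choice(b) for _ in b])
    for _ in range(2000)
)
d = cohens_d(a, b)
ci = (boot[int(0.025 * len(boot))], boot[int(0.975 * len(boot))])
print(round(d, 2), tuple(round(v, 2) for v in ci))
```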

8. Semiphenomenological approximation of the sums of experimental radiative strength functions for dipole gamma transitions of energy E{sub {gamma}}below the neutron binding energy B{sub n} for mass numbers in the range 40 {<=} A {<=} 200

SciTech Connect

Sukhovoj, A. M. Furman, W. I. Khitrov, V. A.

2008-06-15

The sums of radiative strength functions for primary dipole gamma transitions, k(E1) + k(M1), are approximated to a high precision by a superposition of two functional dependences in the energy range 0.5 < E{sub 1} < B{sub n} - 0.5 MeV for the {sup 40}K, {sup 60}Co, {sup 71,74}Ge, {sup 80}Br, {sup 114}Cd, {sup 118}Sn, {sup 124,125}Te, {sup 128}I, {sup 137,138,139}Ba, {sup 140}La, {sup 150}Sm, {sup 156,158}Gd, {sup 160}Tb, {sup 163,164,165}Dy, {sup 166}Ho, {sup 168}Er, {sup 170}Tm, {sup 174}Yb, {sup 176,177}Lu, {sup 181}Hf, {sup 182}Ta, {sup 183,184,185,187}W, {sup 188,190,191,193}Os, {sup 192}Ir, {sup 196}Pt, {sup 198}Au, and {sup 200}Hg nuclei. It is shown that, in any nuclei, radiative strength functions are a dynamical quantity and that the values of k(E1) + k(M1) for specific energies of gamma transitions and specific nuclei are determined by the structure of decaying and excited levels, at least up to the neutron binding energy B{sub n}.

9. A trigonometric interval method for dynamic response analysis of uncertain nonlinear systems

Liu, ZhuangZhuang; Wang, TianShu; Li, JunFeng

2015-04-01

This paper proposes a new non-intrusive trigonometric polynomial approximation interval method for the dynamic response analysis of nonlinear systems with uncertain-but-bounded parameters and/or initial conditions. This method provides tighter solution ranges compared to the existing approximation interval methods. We consider trigonometric approximation polynomials of three types: both cosine and sine functions, the sine function, and the cosine function. Thus, special interval arithmetic for trigonometric function without overestimation can be used to obtain interval results. The interval method using trigonometric approximation polynomials with a cosine functional form exhibits better performance than the existing Taylor interval method and Chebyshev interval method. Finally, two typical numerical examples with nonlinearity are applied to demonstrate the effectiveness of the proposed method.
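
The "special interval arithmetic for trigonometric function without overestimation" rests on the fact that the exact range of cos over [a, b] is determined by the endpoint values plus the multiples of π inside the interval, where cos attains ±1. A minimal sketch of that ingredient:

```python
import math

# Sharp range of cos over an interval [a, b]: take the endpoint values
# and fold in the interior extrema at k*pi.  No overestimation occurs,
# unlike a generic polynomial (e.g. Taylor) enclosure.

def cos_range(a, b):
    lo = min(math.cos(a), math.cos(b))
    hi = max(math.cos(a), math.cos(b))
    k = math.ceil(a / math.pi)
    while k * math.pi <= b:
        v = math.cos(k * math.pi)    # +/-1 at interior extrema
        lo, hi = min(lo, v), max(hi, v)
        k += 1
    return lo, hi

print(cos_range(0.5, 2.0))   # monotone piece: (cos 2, cos 0.5)
print(cos_range(2.0, 5.0))   # contains pi, so the lower bound is -1
```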

10. [Birth interval differentials in Rwanda].

PubMed

Ilinigumugabo, A

1992-01-01

Data from the 1983 Rwanda Fertility Survey are the basis for this study of variations in birth intervals. An analysis of the quality of the Rwandan birth data showed it to be relatively good. The life table technique utilized in this study is explained in a section on methodology, which also describes the Rwanda Fertility Survey questionnaires. A comparison of birth intervals in which live born children died before their first birthday or survived the first birthday shows that infant mortality shortens birth intervals by an average of 5 months. The first birth interval was almost 28 months when the oldest child survived, but declined to 23 months when the oldest child died before age 1. The effect of mortality on birth intervals increased with parity, from 5 months for the first birth interval to 5.5 months for the second and third and 6.4 months for subsequent intervals. The differences amounted to 9 or 10 months for women separating at parities under 4 and over 14 months for women separating at parities of 4 or over. Birth intervals generally increased with parity, maternal age, and the duration of the union. But women entering into unions at higher ages had shorter birth intervals. In the absence of infant mortality and dissolution of the union, women attending school beyond the primary level had first birth intervals 6 months shorter on average than other women. Controlling for infant mortality and marital dissolution, women working for wages had average birth intervals of under 2 years for the first 5 births. Father's occupation had a less marked influence on birth intervals. Urban residence was associated with a shortening of the average birth interval by 6 months between the first and second birth and 5 months between the second and third births. In the first 5 births, Tutsi women had birth intervals 1.5 months longer on average than Hutu women. Women in polygamous unions did not have significantly different birth intervals, except perhaps among older women.

11. Covariant approximation averaging

Shintani, Eigo; Arthur, Rudy; Blum, Thomas; Izubuchi, Taku; Jung, Chulwoo; Lehner, Christoph

2015-06-01

We present a new class of statistical error reduction techniques for Monte Carlo simulations. Using covariant symmetries, we show that correlation functions can be constructed from inexpensive approximations without introducing any systematic bias in the final result. We introduce a new class of covariant approximation averaging techniques, known as all-mode averaging (AMA), in which the approximation takes account of contributions of all eigenmodes through the inverse of the Dirac operator computed from the conjugate gradient method with a relaxed stopping condition. In this paper we compare the performance and computational cost of our new method with traditional methods using correlation functions and masses of the pion, nucleon, and vector meson in Nf=2 +1 lattice QCD using domain-wall fermions. This comparison indicates that AMA significantly reduces statistical errors in Monte Carlo calculations over conventional methods for the same cost.
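
The bias cancellation at the heart of such averaging can be shown with a toy analogue (not lattice QCD): combine one expensive "exact" evaluation with many cheap approximate ones, and the approximation's bias drops out exactly. The observable and its approximation below are made up.

```python
import random

# Toy analogue of the averaging idea: the estimator
#     O_est = O_exact(x0) - O_approx(x0) + mean_x O_approx(x)
# is unbiased for E[O_exact] because the same approximation is applied
# identically to every sample, so its bias cancels.

random.seed(1)

def exact(x):                 # stand-in for an expensive observable
    return x * x

def approx(x):                # cheap approximation with a constant bias
    return x * x + 0.3

xs = [random.gauss(0.0, 1.0) for _ in range(20000)]
x0 = xs[0]

mean_approx = sum(approx(x) for x in xs) / len(xs)
ama = exact(x0) - approx(x0) + mean_approx   # unbiased, near E[x^2] = 1
naive = mean_approx                          # biased by +0.3
print(ama, naive)
```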

12. Fast approximate motif statistics.

PubMed

Nicodème, P

2001-01-01

We present in this article a fast approximate method for computing the statistics of a number of non-self-overlapping matches of motifs in a random text in the nonuniform Bernoulli model. This method is well suited for protein motifs where the probability of self-overlap of motifs is small. For 96% of the PROSITE motifs, the expectations of occurrences of the motifs in a 7-million-amino-acids random database are computed by the approximate method with less than 1% error when compared with the exact method. Processing of the whole PROSITE takes about 30 seconds with the approximate method. We apply this new method to a comparison of the C. elegans and S. cerevisiae proteomes. PMID:11535175
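
For a motif with negligible self-overlap, the leading quantity such methods compute is the expected match count: (number of positions) times the per-position match probability under the letter frequencies. A sketch, with invented frequencies and a 3-position PROSITE-like pattern over a truncated 5-letter amino-acid alphabet:

```python
from math import prod

# Expected number of matches of a motif in a random Bernoulli text.
# The frequencies and motif are illustrative, not real PROSITE data.

freq = {"A": 0.09, "C": 0.01, "D": 0.05, "G": 0.07, "K": 0.06}

def expected_matches(motif, n, freq):
    """motif: list of allowed-letter sets, one per position."""
    p = prod(sum(freq[c] for c in pos) for pos in motif)
    return (n - len(motif) + 1) * p

motif = [{"A", "G"}, {"C"}, {"D", "K"}]   # like the pattern [AG]-C-[DK]
print(expected_matches(motif, 7_000_000, freq))
```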

13. The Guiding Center Approximation

Pedersen, Thomas Sunn

The guiding center approximation for charged particles in strong magnetic fields is introduced here. This approximation is very useful in situations where the charged particles are very well magnetized, such that the gyration (Larmor) radius is small compared to relevant length scales of the confinement device, and the gyration is fast relative to relevant timescales in an experiment. The basics of motion in a straight, uniform, static magnetic field are reviewed, and are used as a starting point for analyzing more complicated situations where more forces are present, as well as inhomogeneities in the magnetic field -- magnetic curvature as well as gradients in the magnetic field strength. The first and second adiabatic invariants are introduced, and slowly time-varying fields are also covered. As an example of the use of the guiding center approximation, the confinement concept of the cylindrical magnetic mirror is analyzed.
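
The "well magnetized" condition above is easy to check numerically: the Larmor radius r_L = m v_perp / (|q| B) must be small compared with the device scale, and the gyrofrequency large compared with other rates. The field, speed, and device size below are made-up example values.

```python
# Back-of-envelope check of guiding-center validity for an electron.

Q_E = 1.602176634e-19        # elementary charge, C
M_E = 9.1093837015e-31       # electron mass, kg

def larmor_radius(m, q, v_perp, B):
    return m * v_perp / (abs(q) * B)

def gyrofrequency(m, q, B):  # angular gyrofrequency, rad/s
    return abs(q) * B / m

B = 1.0          # tesla
v_perp = 1.0e6   # m/s
L = 0.1          # m, confinement-device length scale

r_L = larmor_radius(M_E, -Q_E, v_perp, B)
omega_c = gyrofrequency(M_E, -Q_E, B)
print(r_L, r_L / L)   # micron-scale radius: well magnetized
print(omega_c)        # gyration is very fast (~1e11 rad/s)
```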

14. Monotone Boolean approximation

SciTech Connect

Hulme, B.L.

1982-12-01

This report presents a theory of approximation of arbitrary Boolean functions by simpler, monotone functions. Monotone increasing functions can be expressed without the use of complements. Nonconstant monotone increasing functions are important in their own right since they model a special class of systems known as coherent systems. It is shown here that when Boolean expressions for noncoherent systems become too large to treat exactly, then monotone approximations are easily defined. The algorithms proposed here not only provide simpler formulas but also produce best possible upper and lower monotone bounds for any Boolean function. This theory has practical application for the analysis of noncoherent fault trees and event tree sequences.
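
The "best possible upper and lower monotone bounds" have a direct characterization: the tightest monotone increasing upper bound of f is g(x) = OR of f(y) over all y ≤ x, and the tightest monotone lower bound is h(x) = AND of f(y) over all y ≥ x (componentwise order), so h ≤ f ≤ g everywhere. A brute-force sketch over a small truth table (illustrative, not the report's algorithms):

```python
from itertools import product

# Best monotone bounds of an arbitrary Boolean function, by exhaustive
# enumeration of the 2^n input points.

def monotone_bounds(f, n):
    pts = list(product((0, 1), repeat=n))
    upper = {x: any(f(y) for y in pts if all(a <= b for a, b in zip(y, x)))
             for x in pts}
    lower = {x: all(f(y) for y in pts if all(a >= b for a, b in zip(y, x)))
             for x in pts}
    return lower, upper

# Non-monotone (noncoherent) example: f = x1 AND (NOT x2).
f = lambda x: bool(x[0] and not x[1])
lower, upper = monotone_bounds(f, 2)
print(upper)  # equals the monotone function x1
print(lower)  # identically False: no nonzero monotone lower bound
```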

15. Teaching Confidence Intervals Using Simulation

ERIC Educational Resources Information Center

Hagtvedt, Reidar; Jones, Gregory Todd; Jones, Kari

2008-01-01

Confidence intervals are difficult to teach, in part because most students appear to believe they understand how to interpret them intuitively. They rarely do. To help them abandon their misconception and achieve understanding, we have developed a simulation tool that encourages experimentation with multiple confidence intervals derived from the…
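
A coverage simulation of the kind such a tool supports can be sketched in a few lines: draw many samples from a known population, build a 95% interval for the mean from each, and count how often the interval actually covers the truth.

```python
import random
import statistics

# Repeated-sampling demonstration: about 95% of the intervals should
# cover the true mean.  Using z = 1.96 instead of Student's t makes
# coverage run slightly low at n = 30, itself a teaching point.

random.seed(2)
TRUE_MU, SIGMA, N, REPS = 10.0, 2.0, 30, 2000
Z = 1.96                     # normal critical value for 95%

covered = 0
for _ in range(REPS):
    sample = [random.gauss(TRUE_MU, SIGMA) for _ in range(N)]
    m = statistics.mean(sample)
    half = Z * statistics.stdev(sample) / N ** 0.5
    covered += (m - half) <= TRUE_MU <= (m + half)

print(covered / REPS)        # near 0.95
```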

16. Automatic Error Analysis Using Intervals

ERIC Educational Resources Information Center

Rothwell, E. J.; Cloud, M. J.

2012-01-01

A technique for automatic error analysis using interval mathematics is introduced. A comparison to standard error propagation methods shows that in cases involving complicated formulas, the interval approach gives comparable error estimates with much less effort. Several examples are considered, and numerical errors are computed using the INTLAB…
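
The interval approach can be sketched without INTLAB: carry [lo, hi] bounds through a formula instead of linearizing. Example: R = V/I with V = 10.0 ± 0.1 V and I = 2.00 ± 0.05 A (made-up measurement values).

```python
# Interval propagation through R = V/I, compared with first-order
# worst-case error propagation.

def i_div(a, b):
    """Quotient of intervals a and b, assuming 0 is not in b."""
    qs = [a[0]/b[0], a[0]/b[1], a[1]/b[0], a[1]/b[1]]
    return (min(qs), max(qs))

V = (9.9, 10.1)
I = (1.95, 2.05)
R = i_div(V, I)
half_width = (R[1] - R[0]) / 2
print(R, half_width)

# First-order propagation: dR = dV/I + V*dI/I^2 (worst-case sum).
dR = 0.1 / 2.0 + 10.0 * 0.05 / 2.0 ** 2
print(dR)   # comparable to the interval half-width for this formula
```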

17. Explorations in Statistics: Confidence Intervals

ERIC Educational Resources Information Center

Curran-Everett, Douglas

2009-01-01

Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This third installment of "Explorations in Statistics" investigates confidence intervals. A confidence interval is a range that we expect, with some level of confidence, to include the true value of a population parameter…

18. A Review of Confidence Intervals.

ERIC Educational Resources Information Center

Mauk, Anne-Marie Kimbell

This paper summarizes information leading to the recommendation that statistical significance testing be replaced, or at least accompanied by, the reporting of effect sizes and confidence intervals. It discusses the use of confidence intervals, noting that the recent report of the American Psychological Association Task Force on Statistical…

19. Children's Discrimination of Melodic Intervals.

ERIC Educational Resources Information Center

Schellenberg, E. Glenn; Trehub, Sandra E.

1996-01-01

Adults and children listened to tone sequences and were required to detect changes either from intervals with simple frequency ratios to intervals with complex ratios or vice versa. Adults performed better on changes from simple to complex ratios than on the reverse changes. Similar performance was observed for 6-year olds who had never taken…

20. Approximate method for estimating plasma ionization characteristics based on numerical simulation of the dynamics of a plasma bunch with a high specific energy in the upper ionosphere

Motorin, A. A.; Stupitsky, E. L.; Kholodov, A. S.

2016-07-01

The spatiotemporal pattern of the development of a plasma cloud formed in the ionosphere and the main gas-dynamic characteristics of the cloud have been obtained from our earlier 3D calculations of explosion-type plasmodynamic flows. Based on these results, an approximate method for estimating the plasma temperature and ionization degree via the introduction of an effective adiabatic index is proposed.

1. VARIABLE TIME-INTERVAL GENERATOR

DOEpatents

Gross, J.E.

1959-10-31

This patent relates to a pulse generator and more particularly to a time interval generator wherein the time interval between pulses is precisely determined. The variable time generator comprises two oscillators with one having a variable frequency output and the other a fixed frequency output. A frequency divider is connected to the variable oscillator for dividing its frequency by a selected factor and a counter is used for counting the periods of the fixed oscillator occurring during a cycle of the divided frequency of the variable oscillator. This defines the period of the variable oscillator in terms of that of the fixed oscillator. A circuit is provided for selecting as a time interval a predetermined number of periods of the variable oscillator. The output of the generator consists of a first pulse produced by a trigger circuit at the start of the time interval and a second pulse marking the end of the time interval produced by the same trigger circuit.

2. Charge-transfer correction for improved time-dependent local density approximation excited-state potential energy curves: Analysis within the two-level model with illustration for H2 and LiH

Casida, Mark E.; Gutierrez, Fabien; Guan, Jingang; Gadea, Florent-Xavier; Salahub, Dennis; Daudey, Jean-Pierre

2000-11-01

Time-dependent density-functional theory (TDDFT) is an increasingly popular approach for calculating molecular excitation energies. However, the TDDFT lowest triplet excitation energy, ωT, of a closed-shell molecule often falls rapidly to zero and then becomes imaginary at large internuclear distances. We show that this unphysical behavior occurs because ωT² must become negative wherever symmetry breaking lowers the energy of the ground state solution below that of the symmetry unbroken solution. We use the fact that the ΔSCF method gives a qualitatively correct first triplet excited state to derive a "charge-transfer correction" (CTC) for the time-dependent local density approximation (TDLDA) within the two-level model and the Tamm-Dancoff approximation (TDA). Although this correction would not be needed for the exact exchange-correlation functional, it is evidently important for a correct description of molecular excited state potential energy surfaces in the TDLDA. As a byproduct of our analysis, we show why TDLDA and LDA ΔSCF excitation energies are often very similar near the equilibrium geometries. The reasoning given here is fairly general and it is expected that similar corrections will be needed in the case of generalized gradient approximations and hybrid functionals.

3. Approximating Integrals Using Probability

ERIC Educational Resources Information Center

Maruszewski, Richard F., Jr.; Caudle, Kyle A.

2005-01-01

As part of a discussion on Monte Carlo methods, this article outlines how to use probability expectations to approximate the value of a definite integral. The purpose of this paper is to elaborate on this technique and then to show several examples using Visual Basic as a programming tool. It is an interesting method because it combines two branches of…
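
The technique the article describes can be sketched in Python rather than its Visual Basic: if X is uniform on [a, b], then E[f(X)] = (1/(b-a)) ∫ f, so the definite integral equals (b-a) times the sample mean of f at uniform random points.

```python
import math
import random

# Monte Carlo integration via the expectation identity above.

random.seed(3)

def mc_integral(f, a, b, n=100_000):
    return (b - a) * sum(f(random.uniform(a, b)) for _ in range(n)) / n

est = mc_integral(math.sin, 0.0, math.pi)
print(est)   # close to the exact value, 2
```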

4. Multicriteria approximation through decomposition

SciTech Connect

Burch, C. |; Krumke, S.; Marathe, M.; Phillips, C.; Sundberg, E. |

1997-12-01

The authors propose a general technique called solution decomposition to devise approximation algorithms with provable performance guarantees. The technique is applicable to a large class of combinatorial optimization problems that can be formulated as integer linear programs. Two key ingredients of the technique involve finding a decomposition of a fractional solution into a convex combination of feasible integral solutions and devising generic approximation algorithms based on calls to such decompositions as oracles. The technique is closely related to randomized rounding. The method yields as corollaries unified solutions to a number of well studied problems and it provides the first approximation algorithms with provable guarantees for a number of new problems. The particular results obtained in this paper include the following: (1) The authors demonstrate how the technique can be used to provide more understanding of previous results and new algorithms for classical problems such as Multicriteria Spanning Trees, and Suitcase Packing. (2) They show how the ideas can be extended to apply to multicriteria optimization problems, in which they wish to minimize a certain objective function subject to one or more budget constraints. As corollaries they obtain first non-trivial multicriteria approximation algorithms for problems including the k-Hurdle and the Network Inhibition problems.

5. Multicriteria approximation through decomposition

SciTech Connect

Burch, C.; Krumke, S.; Marathe, M.; Phillips, C.; Sundberg, E.

1998-06-01

The authors propose a general technique called solution decomposition to devise approximation algorithms with provable performance guarantees. The technique is applicable to a large class of combinatorial optimization problems that can be formulated as integer linear programs. Two key ingredients of their technique involve finding a decomposition of a fractional solution into a convex combination of feasible integral solutions and devising generic approximation algorithms based on calls to such decompositions as oracles. The technique is closely related to randomized rounding. Their method yields as corollaries unified solutions to a number of well studied problems and it provides the first approximation algorithms with provable guarantees for a number of new problems. The particular results obtained in this paper include the following: (1) the authors demonstrate how the technique can be used to provide more understanding of previous results and new algorithms for classical problems such as Multicriteria Spanning Trees, and Suitcase Packing; (2) they also show how the ideas can be extended to apply to multicriteria optimization problems, in which they wish to minimize a certain objective function subject to one or more budget constraints. As corollaries they obtain first non-trivial multicriteria approximation algorithms for problems including the k-Hurdle and the Network Inhibition problems.

6. The Proximity Force Approximation for the Casimir Energy of Plate-Sphere and Sphere-Sphere Systems in the Presence of One Extra Compactified Universal Dimension

Cheng, Hongbo

2015-08-01

The Casimir energies for the plate-sphere and sphere-sphere systems under the PFA in the presence of one extra compactified universal dimension are analyzed. We find that the Casimir energy between a plate and a sphere is divergent in the case of the sphere-based PFA. The Casimir energy of the plate-sphere system in the case of the plate-based PFA is finite and remains negative. The extra-dimension corrections to the Casimir energy become more manifest if the sphere is larger or farther away from the plate. It is shown that the negative Casimir energy for two spheres is also associated with the sizes of the spheres and of the extra space. Larger spheres and a longer distance between them make the influence of the additional dimension stronger.

7. Optimizing the Zeldovich approximation

NASA Technical Reports Server (NTRS)

Melott, Adrian L.; Pellman, Todd F.; Shandarin, Sergei F.

1994-01-01

We have recently learned that the Zeldovich approximation can be successfully used for a far wider range of gravitational instability scenarios than formerly proposed; we study here how to extend this range. In previous work (Coles, Melott and Shandarin 1993, hereafter CMS) we studied the accuracy of several analytic approximations to gravitational clustering in the mildly nonlinear regime. We found that what we called the 'truncated Zeldovich approximation' (TZA) was better than any other (except in one case the ordinary Zeldovich approximation) over a wide range from linear to mildly nonlinear (sigma approximately 3) regimes. TZA was specified by setting Fourier amplitudes equal to zero for all wavenumbers greater than k(sub nl), where k(sub nl) marks the transition to the nonlinear regime. Here, we study the cross correlation of generalized TZA with a group of n-body simulations for three shapes of window function: sharp k-truncation (as in CMS), a tophat in coordinate space, or a Gaussian. We also study the variation in the crosscorrelation as a function of initial truncation scale within each type. We find that k-truncation, which was so much better than other things tried in CMS, is the worst of these three window shapes. We find that a Gaussian window exp(-k(exp 2)/2k(sub G)(exp 2)) applied to the initial Fourier amplitudes is the best choice. It produces a greatly improved crosscorrelation in those cases which most needed improvement, e.g. those with more small-scale power in the initial conditions. The optimum choice of k(sub G) for the Gaussian window is (a somewhat spectrum-dependent) 1 to 1.5 times k(sub nl). Although all three windows produce similar power spectra and density distribution functions after application of the Zeldovich approximation, the agreement of the phases of the Fourier components with the n-body simulation is better for the Gaussian window. We therefore ascribe the success of the best-choice Gaussian window to its superior treatment

8. Approximate Brueckner orbitals in electron propagator calculations

SciTech Connect

Ortiz, J.V.

1999-12-01

Orbitals and ground-state correlation amplitudes from the so-called Brueckner doubles approximation of coupled-cluster theory provide a useful reference state for electron propagator calculations. An operator manifold with hole, particle, two-hole-one-particle and two-particle-one-hole components is chosen. The resulting approximation is compared with the two-particle-one-hole Tamm-Dancoff approximation [2ph-TDA], the third-order algebraic diagrammatic construction [ADC(3)], and 3+ methods. The enhanced versatility of this approximation is demonstrated through calculations on valence ionization energies, core ionization energies, electron detachment energies of anions, and on a molecule with partial biradical character, ozone.

9. Image magnification using interval information.

PubMed

Jurio, Aranzazu; Pagola, Miguel; Mesiar, Radko; Beliakov, Gleb; Bustince, Humberto

2011-11-01

In this paper, a simple and effective image-magnification algorithm based on intervals is proposed. A low-resolution image is magnified to form a high-resolution image using a block-expanding method. Our proposed method associates each pixel with an interval obtained by a weighted aggregation of the pixels in its neighborhood. From the interval and with a linear K(α) operator, we obtain the magnified image. Experimental results show that our algorithm provides a magnified image with better quality (peak signal-to-noise ratio) than several existing methods. PMID:21632304
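
The interval machinery in this abstract can be illustrated compactly. The K(α) operator on an interval [a, b] is the convex combination a + α(b − a); the sketch below builds each pixel's interval from the min and max of its 3x3 neighborhood, which is a simplification of the paper's weighted aggregation — the neighborhood rule, α value, and test image are assumptions for illustration:

```python
import numpy as np

def k_alpha(lo, hi, alpha):
    """K_alpha operator: convex combination of the interval endpoints."""
    return lo + alpha * (hi - lo)

def magnify2x(img, alpha=0.5):
    """Block-expanding 2x magnification: each pixel becomes a 2x2 block whose
    value is chosen from the neighborhood interval via K_alpha."""
    h, w = img.shape
    out = np.zeros((2 * h, 2 * w))
    padded = np.pad(img, 1, mode="edge")
    for i in range(h):
        for j in range(w):
            nb = padded[i:i + 3, j:j + 3]     # 3x3 neighborhood of pixel (i, j)
            lo, hi = nb.min(), nb.max()       # interval associated with the pixel
            out[2 * i:2 * i + 2, 2 * j:2 * j + 2] = k_alpha(lo, hi, alpha)
    return out

img = np.array([[0.0, 100.0], [50.0, 200.0]])   # toy low-resolution image
big = magnify2x(img)
```

Every magnified value stays inside the interval of its source neighborhood, which is the property that makes the interval representation attractive here.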

10. TIME-INTERVAL MEASURING DEVICE

DOEpatents

Gross, J.E.

1958-04-15

An electronic device for measuring the time interval between two control pulses is presented. The device incorporates part of a previous approach to time measurement, in that pulses from a constant-frequency oscillator are counted during the interval between the control pulses. To reduce the possible counting error caused by operation of the counter gating circuit at various points in the pulse cycle, the described device provides means for successively delaying the pulses by a fraction of the pulse period, so that a final delay of one period is obtained, and means for counting the pulses before and after each stage of delay during the time interval. A plurality of totals is thereby obtained which may be averaged and multiplied by the pulse period to obtain an accurate time-interval measurement.
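
The averaging scheme in this patent abstract can be checked numerically: counting a pulse train n times, each copy delayed by a further period/n, and averaging the totals recovers the interval to within period/n rather than a full period. The interval, clock period, and number of delay stages below are invented for illustration:

```python
import math

def averaged_count_measure(interval, period, n_delays):
    """Count clock pulses in the interval for n successively delayed copies of
    the pulse train (each shifted by period/n) and average the totals; the
    average times the period approximates the interval to within period/n."""
    counts = [math.floor((interval + k * period / n_delays) / period)
              for k in range(n_delays)]
    return (sum(counts) / n_delays) * period

t = averaged_count_measure(interval=10.37, period=1.0, n_delays=10)
single = math.floor(10.37 / 1.0) * 1.0   # one undelayed count, for comparison
```

A single gated count can be wrong by almost a whole period; the averaged estimate above is within one tenth of a period of the true interval.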

11. Simple Interval Timers for Microcomputers.

ERIC Educational Resources Information Center

McInerney, M.; Burgess, G.

1985-01-01

Discusses simple interval timers for microcomputers, including (1) the Jiffy clock; (2) CPU count timers; (3) screen count timers; (4) light pen timers; and (5) chip timers. Also examines some of the general characteristics of all types of timers. (JN)

12. The {sup 2}H(d,p){sup 3}H Reaction At Astrophysical Energies Studied Via The Trojan Horse Method And Pole Approximation Validity Test

SciTech Connect

Sparta, R.; Pizzone, R. G.; Spitaleri, C.; Cherubini, S.; Crucilla, V.; Gulino, M.; La Cognata, M.; Lamia, L.; Puglia, S. M. R.; Rapisarda, G. G.; Romano, S.; Sergi, M. L.; Aliotta, M.; Burjan, V.; Hons, Z.; Kroha, V.; Mrazek, J.; Kiss, G.; McCleskey, M.; Trache, L.

2010-03-01

In order to understand primordial and stellar nucleosynthesis, we have studied the {sup 2}H(d,p){sup 3}H reaction from 0.4 MeV down to astrophysical energies. Knowledge of this S-factor is also of interest for planning energy-producing fusion reactions in reactors. {sup 2}H(d,p){sup 3}H has been studied through the Trojan Horse Method applied to the three-body reaction {sup 2}H({sup 3}He,pt)H at a beam energy of 17 MeV. Once protons and tritons are detected in coincidence and the quasi-free events are selected, the resulting S-factor is compared with direct-measurement results. The data are in agreement with the direct ones, and a pole-invariance test has been carried out by comparing the present result with another {sup 2}H(d,p){sup 3}H THM measurement performed with a different spectator particle (see fig. 1).

13. Fermion tunneling beyond semiclassical approximation

Majhi, Bibhas Ranjan

2009-02-01

Applying the Hamilton-Jacobi method beyond the semiclassical approximation prescribed in R. Banerjee and B. R. Majhi, J. High Energy Phys. 06 (2008) 095 for the scalar particle, Hawking radiation as tunneling of the Dirac particle through an event horizon is analyzed. We show that, as before, all quantum corrections in the single-particle action are proportional to the usual semiclassical contribution. We also compute the modifications to the Hawking temperature and Bekenstein-Hawking entropy for the Schwarzschild black hole. Finally, the coefficient of the logarithmic correction to entropy is shown to be related to the trace anomaly.

14. Generalized gradient approximation made simple

SciTech Connect

Perdew, J.P.; Burke, K.; Ernzerhof, M.

1996-10-01

Generalized gradient approximations (GGAs) for the exchange-correlation energy improve upon the local spin density (LSD) description of atoms, molecules, and solids. We present a simple derivation of a simple GGA, in which all parameters (other than those in LSD) are fundamental constants. Only general features of the detailed construction underlying the Perdew-Wang 1991 (PW91) GGA are invoked. Improvements over PW91 include an accurate description of the linear response of the uniform electron gas, correct behavior under uniform scaling, and a smoother potential. © 1996 The American Physical Society.

15. Approximate option pricing

SciTech Connect

Chalasani, P.; Saias, I.; Jha, S.

1996-04-08

As increasingly large volumes of sophisticated options (called derivative securities) are traded in world financial markets, determining a fair price for these options has become an important and difficult computational problem. Many valuation codes use the binomial pricing model, in which the stock price is driven by a random walk. In this model, the value of an n-period option on a stock is the expected time-discounted value of the future cash flow on an n-period stock price path. Path-dependent options are particularly difficult to value since the future cash flow depends on the entire stock price path rather than on just the final stock price. Currently such options are approximately priced by Monte Carlo methods with error bounds that hold only with high probability and which are reduced by increasing the number of simulation runs. In this paper the authors show that pricing an arbitrary path-dependent option is #P-hard. They show that certain types of path-dependent options can be valued exactly in polynomial time. Asian options are path-dependent options that are particularly hard to price, and for these they design deterministic polynomial-time approximate algorithms. They show that the value of a perpetual American put option (which can be computed in constant time) is in many cases a good approximation to the value of an otherwise identical n-period American put option. In contrast to Monte Carlo methods, the algorithms have guaranteed error bounds that are polynomially small (and in some cases exponentially small) in the maturity n. For the error analysis they derive large-deviation results for random walks that may be of independent interest.
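
The binomial pricing model referred to above is easy to sketch. This is the textbook backward-induction recursion for an n-period American put — not the paper's approximation algorithms — and the parameter values are invented for illustration:

```python
def american_put_binomial(S0, K, r, u, d, n):
    """Value an n-period American put in the binomial model by backward
    induction; each period the stock moves S -> S*u or S -> S*d."""
    q = (1 + r - d) / (u - d)           # risk-neutral up-move probability
    disc = 1.0 / (1 + r)                # one-period discount factor
    # payoffs at maturity, indexed by the number of up moves j
    values = [max(K - S0 * u**j * d**(n - j), 0.0) for j in range(n + 1)]
    for step in range(n - 1, -1, -1):
        values = [max(max(K - S0 * u**j * d**(step - j), 0.0),        # exercise now
                      disc * (q * values[j + 1] + (1 - q) * values[j]))  # continue
                  for j in range(step + 1)]
    return values[0]

price = american_put_binomial(S0=100, K=100, r=0.01, u=1.1, d=0.9, n=50)
```

The early-exercise max at each node is what distinguishes the American put from the European one; for path-dependent payoffs (the paper's subject) this node-by-node recursion no longer suffices, which is where the hardness results come in.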

16. Beyond the Kirchhoff approximation

NASA Technical Reports Server (NTRS)

Rodriguez, Ernesto

1989-01-01

The three most successful models for describing scattering from random rough surfaces are the Kirchhoff approximation (KA), the small-perturbation method (SPM), and the two-scale-roughness (or composite roughness) surface-scattering (TSR) models. In this paper it is shown how these three models can be derived rigorously from one perturbation expansion based on the extinction theorem for scalar waves scattering from a perfectly rigid surface. It is also shown how corrections to the KA proportional to the surface curvature and higher-order derivatives may be obtained. Using these results, the scattering cross section is derived for various surface models.

17. Analysis of experimental data on doublet neutron-deuteron scattering at energies below the deuteron-breakup threshold on the basis of the pole approximation of the effective-range function

SciTech Connect

Babenko, V. A.; Petrov, N. M.

2008-01-15

On the basis of the Bargmann representation of the S matrix, the pole approximation is obtained for the effective-range function k cot {delta}. This approximation is optimal for describing the neutron-deuteron system in the doublet spin state. The values of r{sub 0} = 412.469 fm and v{sub 2} = -35 495.62 fm{sup 3} for the doublet low-energy parameters of neutron-deuteron scattering and the value of D = 172.678 fm{sup 2} for the respective pole parameter are deduced by using experimental results for the triton binding energy E{sub T}, the doublet neutron-deuteron scattering length a{sub 2}, and van Oers-Seagrave phase shifts at energies below the deuteron-breakup threshold. With these parameters, the pole approximation of the effective-range function provides a highly precise description (the relative error does not exceed 1%) of the doublet phase shift for neutron-deuteron scattering at energies below the deuteron-breakup threshold. Physical properties of the triton in the ground (T) and virtual (v) states are calculated. The results are B{sub v} = 0.608 MeV for the virtual-level position and C{sub T}{sup 2} = 2.866 and C{sub v}{sup 2} = 0.0586 for the dimensionless asymptotic normalization constants. It is shown that, in the Whiting-Fuda approximation, the values of physical quantities characterizing the triton virtual state are determined to a high precision by one parameter, the doublet neutron-deuteron scattering length a{sub 2}. The effective triton radii in the ground ({rho}{sub T} = 1.711 fm) and virtual ({rho}{sub v} = 74.184 fm) states are calculated for the first time.

18. An Assessment of Density Functional Methods for Potential Energy Curves of Nonbonded Interactions: The XYG3 and B97-D Approximations

SciTech Connect

Vazquez-Mayagoitia, Alvaro; Sherrill, David; Apra, Edoardo; Sumpter, Bobby G

2010-01-01

A recently proposed double-hybrid functional called XYG3 and a semilocal GGA functional (B97-D) with a semiempirical correction for van der Waals interactions have been applied to study the potential energy curves along the dissociation coordinates of weakly bound pairs of molecules governed by London dispersion and induced-dipole forces. Molecules treated in this work were the parallel-sandwich, T-shaped, and parallel-displaced benzene dimer, (C6H6)2; hydrogen sulfide and benzene, H2S-C6H6; methane and benzene, CH4-C6H6; the methane dimer, (CH4)2; and the pyridine dimer, (C5H5N)2. We compared the potential energy curves from these functionals with previously published benchmarks at the coupled-cluster singles, doubles, and perturbative triples [CCSD(T)] complete-basis-set limit. Both functionals, XYG3 and B97-D, exhibited very good performance, reproducing accurate energies for equilibrium distances and smooth behavior along the dissociation coordinate. Overall, we found agreement within a few tenths of one kcal mol-1 with the CCSD(T) results across the potential energy curves.

19. Astronomical telescope for photons-gamma rays of low energy (approximately 4 MeV) using the difference method like a Venetian blind

de Aguiar, O. D.; Martin, I. M.

1980-07-01

A description of a gamma ray telescope, which is sensitive to photons in the energy range of 3 - 10 MeV is presented. Collimation was provided by a passive shield which functioned somewhat like a 'venetian blind' to block the signal from one of the detectors. Signal subtraction techniques were used to obtain the desired information.

20. QT interval in anorexia nervosa.

PubMed Central

Cooke, R A; Chambers, J B; Singh, R; Todd, G J; Smeeton, N C; Treasure, J; Treasure, T

1994-01-01

OBJECTIVES--To determine the incidence of a long QT interval as a marker for sudden death in patients with anorexia nervosa and to assess the effect of refeeding. To define a long QT interval by linear regression analysis and estimation of the upper limit of the confidence interval (95% CI) and to compare this with the commonly used Bazett rate correction formula. DESIGN--Prospective case control study. SETTING--Tertiary referral unit for eating disorders. SUBJECTS--41 consecutive patients with anorexia nervosa admitted over an 18 month period. 28 age and sex matched normal controls. MAIN OUTCOME MEASURES--Maximum QT interval measured on 12 lead electrocardiograms. RESULTS--43.6% of the variability in the QT interval was explained by heart rate alone (p < 0.00001) and group analysis contributed a further 5.9% (p = 0.004). In 6 (15%) patients the QT interval was above the upper limit of the 95% CI for the prediction based on the control equation (NS). Two patients died suddenly; both had a QT interval at or above the upper limit of the 95% CI. In patients who reached their target weights the QT interval was significantly shorter (median 9.8 ms; p = 0.04) relative to the upper limit of the 60% CI of the control regression line, which best discriminated between patients and controls. The median Bazett rate corrected QT interval (QTc) in patients and controls was 435 v 405 ms.s-1/2 (p = 0.0004), and before and after refeeding it was 435 v 432 ms.s-1/2 (NS). In 14 (34%) patients and three (11%) controls the QTc was > 440 ms.s-1/2 (p = 0.053). CONCLUSIONS--The QT interval was longer in patients with anorexia nervosa than in age and sex matched controls, and there was a significant tendency to reversion to normal after refeeding. The Bazett rate correction formula overestimated the number of patients with QT prolongation and also did not show an improvement with refeeding. PMID:8068473
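
The Bazett correction that this study critiques is a one-line formula: QTc = QT / sqrt(RR), with the RR interval in seconds, which is why its units are ms.s-1/2. A minimal sketch (the example values are invented, not the study's data):

```python
import math

def bazett_qtc(qt_ms, heart_rate_bpm):
    """Bazett rate-corrected QT interval: QTc = QT / sqrt(RR), where the
    RR interval in seconds is 60 / heart rate (beats per minute)."""
    rr_s = 60.0 / heart_rate_bpm
    return qt_ms / math.sqrt(rr_s)

qtc_rest = bazett_qtc(qt_ms=400.0, heart_rate_bpm=60.0)   # RR = 1 s, so QTc = QT
qtc_fast = bazett_qtc(qt_ms=400.0, heart_rate_bpm=90.0)   # shorter RR inflates QTc
```

The second call illustrates the study's point: at higher heart rates Bazett inflates the corrected value, which is one way the formula can overestimate QT prolongation.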

1. Tuning for temporal interval in human apparent motion detection.

PubMed

Bours, Roger J E; Stuur, Sanne; Lankheet, Martin J M

2007-01-01

Detection of apparent motion in random dot patterns requires correlation across time and space. It has been difficult to study the temporal requirements for the correlation step because motion detection also depends on temporal filtering preceding correlation and on integration at the next levels. To specifically study tuning for temporal interval in the correlation step, we performed an experiment in which prefiltering and postintegration were held constant and in which we used a motion stimulus containing coherent motion for a single interval value only. The stimulus consisted of a sparse random dot pattern in which each dot was presented in two frames only, separated by a specified interval. On each frame, half of the dots were refreshed and the other half was a displaced reincarnation of the pattern generated one or several frames earlier. Motion energy statistics in such a stimulus do not vary from frame to frame, and the directional bias in spatiotemporal correlations is similar for different interval settings. We measured coherence thresholds for left-right direction discrimination by varying motion coherence levels in a Quest staircase procedure, as a function of both step size and interval. Results show that highest sensitivity was found for an interval of 17-42 ms, irrespective of viewing distance. The falloff at longer intervals was much sharper than previously described. Tuning for temporal interval was largely, but not completely, independent of step size. The optimal temporal interval slightly decreased with increasing step size. Similarly, the optimal step size decreased with increasing temporal interval. PMID:17461670

2. Exponential Approximations Using Fourier Series Partial Sums

NASA Technical Reports Server (NTRS)

Banerjee, Nana S.; Geer, James F.

1997-01-01

The problem of accurately reconstructing a piece-wise smooth, 2(pi)-periodic function f and its first few derivatives, given only a truncated Fourier series representation of f, is studied and solved. The reconstruction process is divided into two steps. In the first step, the first 2N + 1 Fourier coefficients of f are used to approximate the locations and magnitudes of the discontinuities in f and its first M derivatives. This is accomplished by first finding initial estimates of these quantities based on certain properties of Gibbs phenomenon, and then refining these estimates by fitting the asymptotic form of the Fourier coefficients to the given coefficients using a least-squares approach. It is conjectured that the locations of the singularities are approximated to within O(N(sup -M-2)), and the associated jump of the k(sup th) derivative of f is approximated to within O(N(sup -M-1+k)), as N approaches infinity, and the method is robust. These estimates are then used with a class of singular basis functions, which have certain 'built-in' singularities, to construct a new sequence of approximations to f. Each of these new approximations is the sum of a piecewise smooth function and a new Fourier series partial sum. When N is proportional to M, it is shown that these new approximations, and their derivatives, converge exponentially in the maximum norm to f, and its corresponding derivatives, except in the union of a finite number of small open intervals containing the points of singularity of f. The total measure of these intervals decreases exponentially to zero as M approaches infinity. The technique is illustrated with several examples.
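
The Gibbs phenomenon exploited in the first step of this method is easy to reproduce. The square wave below is a standard textbook illustration, not one of the paper's examples:

```python
import math

def square_partial_sum(x, n_terms):
    """Partial Fourier sum of the 2*pi-periodic square wave sign(sin x):
    S_N(x) = (4/pi) * sum over k of sin((2k+1) x) / (2k+1)."""
    return (4.0 / math.pi) * sum(
        math.sin((2 * k + 1) * x) / (2 * k + 1) for k in range(n_terms))

# Near the jump at x = 0 the partial sums overshoot by about 9% regardless of
# how many terms are kept (Gibbs phenomenon); away from the jump they converge.
peak = max(square_partial_sum(i * 0.001, 100) for i in range(1, 200))
mid = square_partial_sum(math.pi / 2, 100)
```

The persistent overshoot near the discontinuity carries the location and jump information that the paper's least-squares step extracts from the Fourier coefficients.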

3. Approximate recalculation of the {alpha}(Z{alpha}){sup 5} contribution to the self-energy effect on hydrogenic states with a multipole expansion

SciTech Connect

Zamastil, J.

2013-01-15

A contribution of virtual electron states with large wave numbers to the self-energy of an electron bound in a weak Coulomb field is analyzed in the context of the evaluation method suggested in the previous paper. The contribution is of the order {alpha}(Z{alpha}){sup 5}. The same value for this contribution is found here as was found in previous calculations using different evaluation methods. When we add the remaining terms of the order {alpha}(Z{alpha}){sup 5} to the calculation of the self-energy effect in hydrogen-like ions presented in the previous paper, we find very good agreement with numerical evaluations. The relative difference between the present and numerical evaluations ranges from 2 parts in 10{sup 6} for Z=1 up to 6 parts in 10{sup 4} for Z=10. Highlights: the complete terms of the order {alpha}(Z{alpha}){sup 5} are identified; the accuracy of the result for the ground state of hydrogen is 2 ppm; the separation into low- and high-energy regions and their matching is avoided.

4. Exclusive experiment on nuclei with backward emitted particles by electron-nucleus collision in {approximately} 10 GeV energy range

SciTech Connect

Saito, T.; Takagi, F.

1994-04-01

Since evidence of a strong cross section in proton-nucleus backward scattering was presented in the early 1970s, these phenomena have attracted interest because of their possible relation to short-range correlations between nucleons or to high-momentum components of the nuclear wave function. In the analysis of the first experiment, on protons from a carbon target bombarded by 1.5-5.7 GeV protons, indications were found of an effect analogous to scaling in high-energy interactions of elementary particles with protons. Moreover, it was found that the function f(p{sup 2})/{sigma}{sub tot}, which describes the spectra of the protons and deuterons emitted backward from nuclei in the laboratory system, does not depend on the energy and the type of the incident particle or on the atomic number of the target nucleus. In subsequent experiments the spectra of protons emitted from the nuclei C, Al, Ti, Cu, Cd and Pb were measured in inclusive reactions with incident negative pions (1.55-6.2 GeV/c) and protons (6.2-9.0 GeV/c). The cross section f is described by f = E/p{sup 2} d{sup 2}{sigma}/dpd{Omega} = C exp(-Bp{sup 2}), where p is the momentum of the hadron. The function f depends linearly on the atomic weight A of the target nuclei. The slope parameter B is independent of the target nucleus and of the sort and energy of the bombarding particles. The invariant cross section {rho} = f/{sigma}{sub tot} is also described by the exponential A{sub 0} exp(-A{sub 1}p{sup 2}), where {rho} becomes independent of energy at initial particle energies {ge} 1.5 GeV for the C nucleus and {ge} 5 GeV for the heaviest of the investigated nuclei, Pb.

5. Partitioned-Interval Quantum Optical Communications Receiver

NASA Technical Reports Server (NTRS)

Vilnrotter, Victor A.

2013-01-01

The proposed quantum receiver in this innovation partitions each binary signal interval into two unequal segments: a short "pre-measurement" segment in the beginning of the symbol interval used to make an initial guess with better probability than 50/50 guessing, and a much longer segment used to make the high-sensitivity signal detection via field-cancellation and photon-counting detection. It was found that by assigning as little as 10% of the total signal energy to the pre-measurement segment, the initial 50/50 guess can be improved to about 70/30, using the best available measurements such as classical coherent or "optimized Kennedy" detection.

6. Generalized Confidence Intervals and Fiducial Intervals for Some Epidemiological Measures.

PubMed

Bebu, Ionut; Luta, George; Mathew, Thomas; Agan, Brian K

2016-01-01

For binary outcome data from epidemiological studies, this article investigates the interval estimation of several measures of interest in the absence or presence of categorical covariates. When covariates are present, the logistic regression model as well as the log-binomial model are investigated. The measures considered include the common odds ratio (OR) from several studies, the number needed to treat (NNT), and the prevalence ratio. For each parameter, confidence intervals are constructed using the concepts of generalized pivotal quantities and fiducial quantities. Numerical results show that the confidence intervals so obtained exhibit satisfactory performance in terms of maintaining the coverage probabilities even when the sample sizes are not large. An appealing feature of the proposed solutions is that they are not based on maximization of the likelihood, and hence are free from convergence issues associated with the numerical calculation of the maximum likelihood estimators, especially in the context of the log-binomial model. The results are illustrated with a number of examples. The overall conclusion is that the proposed methodologies based on generalized pivotal quantities and fiducial quantities provide an accurate and unified approach for the interval estimation of the various epidemiological measures in the context of binary outcome data with or without covariates. PMID:27322305
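
For context, the conventional large-sample (Wald) confidence interval for the odds ratio from a single 2x2 table is sketched below. The paper's generalized-pivotal and fiducial constructions are alternatives to this kind of asymptotic interval, not equivalent to it, and the cell counts here are invented:

```python
import math

def odds_ratio_wald_ci(a, b, c, d, z=1.96):
    """Large-sample 95% CI for the odds ratio of the 2x2 table [[a, b], [c, d]]:
    exp( log(ad / bc) +/- z * sqrt(1/a + 1/b + 1/c + 1/d) )."""
    log_or = math.log((a * d) / (b * c))
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log odds ratio
    return math.exp(log_or - z * se), math.exp(log_or + z * se)

lo, hi = odds_ratio_wald_ci(10, 20, 15, 25)
```

This interval relies on asymptotic normality of the log odds ratio; the pivotal/fiducial approach studied in the paper is designed to keep good coverage when samples are small, where Wald-type intervals can fail.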

8. High resolution time interval meter

DOEpatents

Martin, A.D.

1986-05-09

Method and apparatus are provided for measuring the time interval between two events to a higher resolution than is reliably available from conventional circuits and components. An internal clock pulse is provided at a frequency compatible with conventional component operating frequencies for reliable operation. Lumped-constant delay circuits are provided for generating outputs at delay intervals corresponding to the desired high resolution. An initiation START pulse is input to generate first high-resolution data. A termination STOP pulse is input to generate second high-resolution data. Internal counters count at the low-frequency internal clock pulse rate between the START and STOP pulses. The first and second high-resolution data are logically combined to directly provide high-resolution data to one counter and correct the count in the low-resolution counter to obtain a high-resolution time-interval measurement.
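
The arithmetic behind combining a coarse clock count with fine (sub-period) data for the START and STOP pulses can be sketched as the standard coarse-plus-fine identity; the numbers below are invented, and real hardware would obtain the fractions from the delay-line outputs rather than from known pulse times:

```python
def high_res_interval(t_start, t_stop, period):
    """Reconstruct t_stop - t_start from a coarse count of whole clock edges
    plus the sub-period phases of the START and STOP pulses:
    T = N * period + frac_stop - frac_start."""
    coarse = int(t_stop // period) - int(t_start // period)  # whole clock edges
    frac_start = t_start % period                            # fine data at START
    frac_stop = t_stop % period                              # fine data at STOP
    return coarse * period + frac_stop - frac_start

T = high_res_interval(t_start=3.62, t_stop=17.25, period=1.0)
```

The coarse counter alone is only good to one clock period; the two fine terms restore the sub-period resolution that the delay circuits provide.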

9. The Role of Higher Harmonics In Musical Interval Perception

Krantz, Richard; Douthett, Jack

2011-10-01

Using an alternative parameterization of the roughness curve, we make direct use of critical-band results to investigate the role of higher harmonics in the perception of tonal consonance. We scale the spectral amplitudes in the complex home tone and complex interval tone to simulate acoustic signals of constant energy. Our analysis reveals that even with a relatively small addition of higher harmonics the perfect fifth emerges as a consonant interval, with more of the musically important just intervals emerging as consonant as more and more energy is shifted into higher frequencies.

10. Updating representations of temporal intervals.

PubMed

Danckert, James; Anderson, Britt

2015-12-01

Effectively engaging with the world depends on accurate representations of the regularities that make up that world-what we call mental models. The success of any mental model depends on the ability to adapt to changes-to 'update' the model. In prior work, we have shown that damage to the right hemisphere of the brain impairs the ability to update mental models across a range of tasks. Given the disparate nature of the tasks we have employed in this prior work (i.e. statistical learning, language acquisition, position priming, perceptual ambiguity, strategic game play), we propose that a cognitive module important for updating mental representations should be generic, in the sense that it is invoked across multiple cognitive and perceptual domains. To date, the majority of our tasks have been visual in nature. Given the ubiquity and import of temporal information in sensory experience, we examined the ability to build and update mental models of time. We had healthy individuals complete a temporal prediction task in which intervals were initially drawn from one temporal range before an unannounced switch to a different range of intervals. Separate groups had the second range of intervals switch to one that contained either longer or shorter intervals than the first range. Both groups showed significant positive correlations between perceptual and prediction accuracy. While each group updated mental models of temporal intervals, those exposed to shorter intervals did so more efficiently. Our results support the notion of generic capacity to update regularities in the environment-in this instance based on temporal information. The task developed here is well suited to investigations in neurological patients and in neuroimaging settings. PMID:26303026

11. Countably QC-Approximating Posets

PubMed Central

Mao, Xuxin; Xu, Luoshan

2014-01-01

As a generalization of countably C-approximating posets, the concept of countably QC-approximating posets is introduced. With the countably QC-approximating property, some characterizations of generalized completely distributive lattices and generalized countably approximating posets are given. The main results are as follows: (1) a complete lattice is generalized completely distributive if and only if it is countably QC-approximating and weakly generalized countably approximating; (2) a poset L having countably directed joins is generalized countably approximating if and only if the lattice σc(L)op of all σ-Scott-closed subsets of L is weakly generalized countably approximating. PMID:25165730

12. Approximate Bayesian multibody tracking.

PubMed

Lanz, Oswald

2006-09-01

Visual tracking of multiple targets is a challenging problem, especially when efficiency is an issue. Occlusions, if not properly handled, are a major source of failure. Solutions supporting principled occlusion reasoning have been proposed but are yet unpractical for online applications. This paper presents a new solution which effectively manages the trade-off between reliable modeling and computational efficiency. The Hybrid Joint-Separable (HJS) filter is derived from a joint Bayesian formulation of the problem, and shown to be efficient while optimal in terms of compact belief representation. Computational efficiency is achieved by employing a Markov random field approximation to joint dynamics and an incremental algorithm for posterior update with an appearance likelihood that implements a physically-based model of the occlusion process. A particle filter implementation is proposed which achieves accurate tracking during partial occlusions, while in cases of complete occlusion, tracking hypotheses are bound to estimated occlusion volumes. Experiments show that the proposed algorithm is efficient, robust, and able to resolve long-term occlusions between targets with identical appearance. PMID:16929730

13. Uniform Continuity on Unbounded Intervals

ERIC Educational Resources Information Center

Pouso, Rodrigo Lopez

2008-01-01

We present a teaching approach to uniform continuity on unbounded intervals which, hopefully, may help to meet the following pedagogical objectives: (i) To provide students with efficient and simple criteria to decide whether a continuous function is also uniformly continuous; and (ii) To provide students with skill to recognize graphically…

14. Approximation by hinge functions

SciTech Connect

Faber, V.

1997-05-01

Breiman has defined "hinge functions" for use as basis functions in least squares approximations to data. A hinge function is the max (or min) of two linear functions. In this paper, the author assumes the existence of a smooth function f(x) and a set of samples of the form (x, f(x)) drawn from a probability distribution {rho}(x). The author hopes to find the best-fitting hinge function h(x) in the least squares sense. There are two problems with this plan. First, Breiman has suggested an algorithm to perform this fit. The author shows that this algorithm is not robust and also shows how to create examples on which the algorithm diverges. Second, if the author tries to use the data to minimize the fit in the usual discrete least squares sense, the functional that must be minimized is continuous in the variables but has a derivative which jumps at the data. This paper takes a different approach, an example of a method the author has developed called "Monte Carlo Regression." (A paper on the general theory is in preparation.) The author shows that since the function f is continuous, the analytic form of the least squares equation is continuously differentiable. A local minimum is solved for by using Newton's method, where the entries of the Hessian are estimated directly from the data by Monte Carlo. The algorithm has the desirable properties that it is quadratically convergent from any starting guess sufficiently close to a solution and that each iteration requires only a linear system solve.
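
A hinge function as defined above is just the max of two affine functions. The tiny sketch below shows the function and the discrete least-squares objective whose non-smoothness at the data is discussed in the abstract; the sample data are invented:

```python
def hinge(x, a0, a1, b0, b1):
    """Breiman hinge function: the max of two linear functions."""
    return max(a0 + a1 * x, b0 + b1 * x)

def sse(params, data):
    """Discrete least-squares objective for fitting a hinge to (x, y) samples."""
    return sum((hinge(x, *params) - y) ** 2 for x, y in data)

# samples drawn exactly from the hinge max(1 - x, x - 1): a "V" with vertex at x = 1
data = [(x / 4.0, hinge(x / 4.0, 1.0, -1.0, -1.0, 1.0)) for x in range(-8, 9)]
perfect = sse((1.0, -1.0, -1.0, 1.0), data)   # true parameters give zero error
wrong = sse((0.0, 0.0, 0.0, 0.0), data)       # a flat line does not
```

The kink where the two planes meet is exactly where the derivative of the discrete objective jumps, which is the difficulty the paper's Monte Carlo Regression approach is designed to avoid.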

15. Rapidly converging series approximation to Kepler's equation

Peters, R. D.

1984-08-01

A power series solution in eccentricity e and normalized mean anomaly f has been developed for elliptic orbits. Expansion through fourth order yields approximate errors about an order of magnitude smaller than the corresponding Lagrange series. For large e, a particular algorithm is shown to be superior to published initializers for Newton-iteration solutions. The normalized variable f varies between zero and one on each of two separately defined intervals: 0 to x = (pi/2 - e) and x to pi. The expansion coefficients are polynomials based on a one-time evaluation of sine and cosine terms in f.
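
Series solutions like the one above serve as initializers for Newton iteration on Kepler's equation E - e*sin(E) = M. A minimal Newton solver is sketched below; the simple starting guess and tolerance are conventional choices, not the paper's initializer:

```python
import math

def solve_kepler(M, e, tol=1e-12):
    """Solve Kepler's equation E - e*sin(E) = M for the eccentric anomaly E
    by Newton iteration (E in radians, 0 <= e < 1)."""
    E = M if e < 0.8 else math.pi   # crude initializer; poor for large e,
    for _ in range(50):             # which is the regime the paper targets
        delta = (E - e * math.sin(E) - M) / (1.0 - e * math.cos(E))
        E -= delta
        if abs(delta) < tol:
            break
    return E

E = solve_kepler(M=1.0, e=0.3)
```

Newton converges quadratically once the guess is close, so the payoff of a better series initializer is fewer iterations and reliable convergence at high eccentricity.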

16. Valence excitation energies of alkenes, carbonyl compounds, and azabenzenes by time-dependent density functional theory: Linear response of the ground state compared to collinear and noncollinear spin-flip TDDFT with the Tamm-Dancoff approximation

Isegawa, Miho; Truhlar, Donald G.

2013-04-01

Time-dependent density functional theory (TDDFT) holds great promise for studying photochemistry because of its affordable cost for large systems and for repeated calculations as required for direct dynamics. The chief obstacle is uncertain accuracy. There have been many validation studies, but there are also many formulations, and there have been few studies where several formulations were applied systematically to the same problems. Another issue, when TDDFT is applied with only a single exchange-correlation functional, is that errors in the functional may mask successes or failures of the formulation. Here, to try to sort out some of the issues, we apply eight formulations of adiabatic TDDFT to the first valence excitations of ten molecules with 18 density functionals of diverse types. The formulations examined are linear response from the ground state (LR-TDDFT), linear response from the ground state with the Tamm-Dancoff approximation (TDDFT-TDA), the original collinear spin-flip approximation with the Tamm-Dancoff (TD) approximation (SF1-TDDFT-TDA), the original noncollinear spin-flip approximation with the TDA (SF1-NC-TDDFT-TDA), combined self-consistent-field (SCF) and collinear spin-flip calculations in the original spin-projected form (SF2-TDDFT-TDA) or non-spin-projected (NSF2-TDDFT-TDA), and combined SCF and noncollinear spin-flip calculations (SF2-NC-TDDFT-TDA and NSF2-NC-TDDFT-TDA). Comparing LR-TDDFT to TDDFT-TDA, we observed that the excitation energy is raised by the TDA; this brings the excitation energies underestimated by full linear response closer to experiment, but sometimes it makes the results worse. For ethylene and butadiene, the excitation energies are underestimated by LR-TDDFT, and the error becomes smaller upon making the TDA. Neither SF1-TDDFT-TDA nor SF2-TDDFT-TDA provides a lower mean unsigned error than LR-TDDFT or TDDFT-TDA. The comparison between collinear and noncollinear kernels shows that the noncollinear kernel

17. An approximation based global optimization strategy for structural synthesis

NASA Technical Reports Server (NTRS)

Sepulveda, A. E.; Schmit, L. A.

1991-01-01

A global optimization strategy for structural synthesis based on approximation concepts is presented. The methodology involves the solution of a sequence of highly accurate approximate problems using a global optimization algorithm. The global optimization algorithm implemented consists of a branch and bound strategy based on the interval evaluation of the objective function and constraint functions, combined with a local feasible directions algorithm. The approximate design optimization problems are constructed using first order approximations of selected intermediate response quantities in terms of intermediate design variables. Some numerical results for example problems are presented to illustrate the efficacy of the design procedure set forth.

18. Approximating maximum clique with a Hopfield network.

PubMed

Jagota, A

1995-01-01

In a graph, a clique is a set of vertices such that every pair is connected by an edge. MAX-CLIQUE is the optimization problem of finding the largest clique in a given graph and is NP-hard, even to approximate well. Several real-world and theory problems can be modeled as MAX-CLIQUE. In this paper, we efficiently approximate MAX-CLIQUE in a special case of the Hopfield network whose stable states are maximal cliques. We present several energy-descent optimizing dynamics, both discrete (deterministic and stochastic) and continuous. One of these emulates, as special cases, two well-known greedy algorithms for approximating MAX-CLIQUE. We report on detailed empirical comparisons on random graphs and on harder ones. Mean-field annealing, an efficient approximation to simulated annealing, and a stochastic dynamics are the narrow but clear winners. All dynamics approximate much better than one which emulates a "naive" greedy heuristic. PMID:18263357
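A "naive" greedy heuristic of the kind the dynamics are compared against can be sketched as follows; the degree-based tie-breaking is an illustrative assumption, not the paper's exact rule.

```python
def greedy_clique(adj):
    """Naive greedy MAX-CLIQUE heuristic: repeatedly add the
    highest-degree vertex that is adjacent to every current member.
    adj: dict mapping vertex -> set of neighbours."""
    clique = set()
    candidates = set(adj)
    while candidates:
        v = max(candidates, key=lambda u: len(adj[u]))
        clique.add(v)
        # Keep only candidates compatible with the grown clique.
        candidates = {u for u in candidates if u != v and u in adj[v]}
    return clique

# Triangle {0, 1, 2} plus a pendant vertex 3 attached to 0.
adj = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1}, 3: {0}}
clique = greedy_clique(adj)   # finds the triangle {0, 1, 2}
```

The result is always a maximal clique (no vertex can be added), which mirrors the property of the Hopfield network's stable states described in the abstract.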

19. Fourier Analysis of Musical Intervals

LoPresto, Michael C.

2008-11-01

Use of a microphone attached to a computer to capture musical sounds and software to display their waveforms and harmonic spectra has become somewhat commonplace. A recent article in The Physics Teacher aptly demonstrated the use of MacScope in just such a manner as a way to teach Fourier analysis. A logical continuation of this project is to use MacScope not just to analyze the Fourier composition of musical tones but also musical intervals.
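The kind of harmonic analysis described can be sketched with a hand-rolled DFT probe; the perfect-fifth example (440 Hz and 660 Hz, frequency ratio 3:2) and the sample rate are illustrative assumptions, not taken from the article.

```python
import math

def dft_magnitude(signal, freq, sample_rate):
    """Magnitude of one DFT bin of a sampled signal at a given frequency."""
    n = len(signal)
    re = sum(s * math.cos(2 * math.pi * freq * i / sample_rate)
             for i, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * freq * i / sample_rate)
             for i, s in enumerate(signal))
    return math.hypot(re, im) / n

rate = 8000
# A perfect fifth: 440 Hz and 660 Hz sounded together.
t = [i / rate for i in range(rate)]   # one second of samples
signal = [math.sin(2 * math.pi * 440 * x) + math.sin(2 * math.pi * 660 * x)
          for x in t]
# Both component frequencies show strong peaks; a frequency in
# between (550 Hz) does not.
peaks = {f: dft_magnitude(signal, f, rate) for f in (440, 550, 660)}
```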

20. Approximation Schemes for Scheduling with Availability Constraints

Fu, Bin; Huo, Yumei; Zhao, Hairong

We investigate the problems of scheduling n weighted jobs on m identical machines with availability constraints. We consider two different models of availability constraints: the preventive model, where the unavailability is due to preventive machine maintenance, and the fixed job model, where the unavailability is due to a priori assignment of some of the n jobs to certain machines at certain times. Both models have applications such as turnaround scheduling or overlay computing. In both models, the objective is to minimize the total weighted completion time. We assume that m is a constant and the jobs are non-resumable. For the preventive model, it has been shown that there is no approximation algorithm if all machines have unavailable intervals, even when w_i = p_i for all jobs. In this paper, we assume there is one machine permanently available and the processing time of each job is equal to its weight for all jobs. We develop the first PTAS for the case of a constant number of unavailable intervals. One main feature of our algorithm is that the classification of large and small jobs is with respect to each individual interval, and thus not fixed. This classification allows us (1) to enumerate the assignments of large jobs efficiently, and (2) to move small jobs around without increasing the objective value too much, and thus derive our PTAS. We then show that there is no FPTAS in this case unless P = NP.
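The objective itself can be made concrete with a sketch that evaluates total weighted completion time for non-resumable jobs run in a given order on one machine with fixed unavailable intervals; the job order and interval data are illustrative, and this evaluator is not the PTAS.

```python
def total_weighted_completion(jobs, unavailable):
    """Total weighted completion time of non-resumable jobs run in the
    given order on one machine with fixed unavailable intervals.
    jobs: list of (processing_time, weight).
    unavailable: list of (start, end) half-open intervals during which
    no job may run; a job that would overlap one must start after it."""
    t = 0.0
    total = 0.0
    for p, w in jobs:
        placed = False
        while not placed:
            placed = True
            for s, e in unavailable:
                if t < e and t + p > s:   # job would overlap interval
                    t = e                 # push the job past it
                    placed = False
                    break
        t += p
        total += w * t
    return total

# Two unit jobs (w_i = p_i = 1) around a maintenance interval [1, 3).
cost = total_weighted_completion([(1, 1), (1, 1)], [(1, 3)])
```

Here the first job finishes at time 1 and the second, blocked by the maintenance interval, at time 4, giving a total weighted completion time of 5.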

1. Simultaneous confidence intervals for a steady-state leaky aquifer groundwater flow model

USGS Publications Warehouse

Christensen, S.; Cooley, R.L.

1996-01-01

Using the optimization method of Vecchia & Cooley (1987), nonlinear Scheffé-type confidence intervals were calculated for the parameters and the simulated heads of a steady-state groundwater flow model covering 450 km2 of a leaky aquifer. The nonlinear confidence intervals are compared to corresponding linear intervals. As suggested by the significant nonlinearity of the regression model, linear confidence intervals are often not accurate. The commonly made assumption that widths of linear confidence intervals always underestimate the actual (nonlinear) widths was not correct for the head intervals. Results show that nonlinear effects can cause the nonlinear intervals to be offset from, and either larger or smaller than, the linear approximations. Prior information on some transmissivities helps reduce and stabilize the confidence intervals, with the most notable effects occurring for the parameters on which there is prior information and for head values in parameter zones for which there is prior information on the parameters.

2. Successive intervals analysis of preference measures in a health status index.

PubMed Central

Blischke, W R; Bush, J W; Kaplan, R M

1975-01-01

The method of successive intervals, a procedure for obtaining equal intervals from category data, is applied to social preference data for a health status index. Several innovations are employed, including an approximate analysis of variance test for determining whether the intervals are of equal width, a regression model for estimating the width of the end intervals in finite scales, and a transformation to equalize interval widths and estimate item locations on the new scale. A computer program has been developed to process large data sets with a larger number of categories than previous programs. PMID:1219005

3. An Event Restriction Interval Theory of Tense

ERIC Educational Resources Information Center

Beamer, Brandon Robert

2012-01-01

This dissertation presents a novel theory of tense and tense-like constructions. It is named after a key theoretical component of the theory, the event restriction interval. In Event Restriction Interval (ERI) Theory, sentences are semantically evaluated relative to an index which contains two key intervals, the evaluation interval and the event…

4. Confidence Intervals for Error Rates Observed in Coded Communications Systems

Hamkins, J.

2015-05-01

We present methods to compute confidence intervals for the codeword error rate (CWER) and bit error rate (BER) of a coded communications link. We review several methods to compute exact and approximate confidence intervals for the CWER, and specifically consider the situation in which the true CWER is so low that only a handful, if any, codeword errors are able to be simulated. In doing so, we answer the question of how long an error-free simulation must be run in order to certify that a given CWER requirement is met with a given level of confidence, and discuss the bias introduced by aborting a simulation after observing the first codeword error. Next, we turn to the lesser studied problem of determining confidence intervals for the BER of coded systems. Since bit errors in systems that use coding or higher-order modulation do not occur independently, blind application of a method that assumes independence leads to inappropriately narrow confidence intervals. We present a new method to compute the confidence interval properly, using the first and second sample moments of the number of bit errors per codeword. This is the first method we know of to compute a confidence interval for the BER of a coded or higher-order modulation system.
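The error-free-simulation question has a closed form for the exact one-sided bound. The sketch below assumes independent codeword errors, so it applies to the CWER case only, not the correlated-bit-error BER case the paper addresses.

```python
import math

def cwer_upper_bound(n_codewords, alpha=0.05):
    """Exact one-sided upper confidence bound on the codeword error
    rate after observing zero errors in n independent codeword trials:
    solve (1 - p)^n = alpha for p."""
    return 1.0 - alpha ** (1.0 / n_codewords)

def trials_needed(cwer_requirement, alpha=0.05):
    """Smallest error-free simulation length certifying
    CWER <= requirement with confidence 1 - alpha:
    n >= ln(alpha) / ln(1 - requirement)."""
    return math.ceil(math.log(alpha) / math.log(1.0 - cwer_requirement))

# Certifying CWER <= 1e-5 at 95% confidence takes roughly 3e5
# error-free codeword trials.
n = trials_needed(1e-5)
```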

5. Symmetric approximations of the Navier-Stokes equations

SciTech Connect

Kobel'kov, G M

2002-08-31

A new method for the symmetric approximation of the non-stationary Navier-Stokes equations by a Cauchy-Kovalevskaya-type system is proposed. Properties of the modified problem are studied. In particular, the convergence as ε→0 of the solutions of the modified problem to the solutions of the original problem on an infinite interval is established.

6. Symmetric approximations of the Navier-Stokes equations

Kobel'kov, G. M.

2002-08-01

A new method for the symmetric approximation of the non-stationary Navier-Stokes equations by a Cauchy-Kovalevskaya-type system is proposed. Properties of the modified problem are studied. In particular, the convergence as ε→0 of the solutions of the modified problem to the solutions of the original problem on an infinite interval is established.

7. Practical Scheffe-type credibility intervals for variables of a groundwater model

USGS Publications Warehouse

Cooley, R.L.

1999-01-01

Simultaneous Scheffe-type credibility intervals (the Bayesian version of confidence intervals) for variables of a groundwater flow model calibrated using a Bayesian maximum a posteriori procedure were derived by Cooley [1993b]. It was assumed that variances reflecting the expected differences between observed and model-computed quantities used to calibrate the model are known, whereas they would often be unknown for an actual model. In this study the variances are regarded as unknown, and variance variability from observation to observation is approximated by grouping the data so that each group is characterized by a uniform variance. The credibility intervals are calculated from the posterior distribution, which was developed by considering each group variance to be a random variable about which nothing is known a priori, then eliminating it by integration. Numerical experiments using two test problems illustrate some characteristics of the credibility intervals. Nonlinearity of the statistical model greatly affected some of the credibility intervals, indicating that credibility intervals computed using the standard linear model approximation may often be inadequate to characterize uncertainty for actual field problems. The parameter characterizing the probability level for the credibility intervals was, however, accurately computed using a linear model approximation, as compared with values calculated using second-order and fully nonlinear formulations. This allows the credibility intervals to be computed very efficiently.Simultaneous Scheffe-type credibility intervals for variables of a groundwater flow model calibrated using a Bayesian maximum a posteriori procedure were developed. The variances reflecting the expected differences between the observed and model-computed quantities were unknown, and variance variability from observation to observation was approximated by grouping the data so that each group was characterized by a uniform variance. Nonlinearity

8. CONFIDENCE INTERVALS FOR A CROP YIELD LOSS FUNCTION IN NONLINEAR REGRESSION

EPA Science Inventory

Quantifying the relationship between chronic pollutant exposure and the ensuing biological response requires consideration of nonlinear functions that are flexible enough to generate a wide range of response curves. The linear approximation (i.e., Wald's) interval estimates for oz...

9. Interval Estimation for True Raw and Scale Scores under the Binomial Error Model

ERIC Educational Resources Information Center

Lee, Won-Chan; Brennan, Robert L.; Kolen, Michael J.

2006-01-01

Assuming errors of measurement are distributed binomially, this article reviews various procedures for constructing an interval for an individual's true number-correct score; presents two general interval estimation procedures for an individual's true scale score (i.e., normal approximation and endpoints conversion methods); compares various…

10. Confidence Intervals for True Scores Using the Skew-Normal Distribution

ERIC Educational Resources Information Center

Garcia-Perez, Miguel A.

2010-01-01

A recent comparative analysis of alternative interval estimation approaches and procedures has shown that confidence intervals (CIs) for true raw scores determined with the Score method--which uses the normal approximation to the binomial distribution--have actual coverage probabilities that are closest to their nominal level. It has also recently…

11. Orders on Intervals Over Partially Ordered Sets: Extending Allen's Algebra and Interval Graph Results

SciTech Connect

Zapata, Francisco; Kreinovich, Vladik; Joslyn, Cliff A.; Hogan, Emilie A.

2013-08-01

To make a decision, we need to compare the values of quantities. In many practical situations, we know the values with interval uncertainty. In such situations, we need to compare intervals. Allen's algebra describes all possible relations between intervals on the real line, and ordering relations between such intervals are well studied. In this paper, we extend this description to intervals in an arbitrary partially ordered set (poset). In particular, we explicitly describe ordering relations between intervals that generalize relations between points. As auxiliary results, we provide a logical interpretation of the relations between intervals, and extend the results about interval graphs to intervals over posets.
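Allen's thirteen basic relations on real intervals can be enumerated directly. The sketch below covers the classical real-line case only, not the poset generalization developed in the paper; the relation names follow Allen's standard terminology.

```python
def allen_relation(a, b):
    """Classify the Allen relation between closed real intervals
    a = (a1, a2) and b = (b1, b2), assuming a1 < a2 and b1 < b2.
    Returns one of the 13 basic relations as a string."""
    a1, a2 = a
    b1, b2 = b
    if a2 < b1:  return "before"
    if b2 < a1:  return "after"
    if a2 == b1: return "meets"
    if b2 == a1: return "met-by"
    if a1 == b1 and a2 == b2: return "equals"
    if a1 == b1: return "starts" if a2 < b2 else "started-by"
    if a2 == b2: return "finishes" if a1 > b1 else "finished-by"
    if b1 < a1 and a2 < b2: return "during"
    if a1 < b1 and b2 < a2: return "contains"
    return "overlaps" if a1 < b1 else "overlapped-by"

r = allen_relation((0, 2), (1, 3))   # → "overlaps"
```

On a general poset the case analysis no longer exhausts the possibilities, since interval endpoints may be incomparable; that is precisely the gap the paper addresses.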

12. CONFIDENCE INTERVALS AND CURVATURE MEASURES IN NONLINEAR REGRESSION USING THE IML AND NLIN PROCEDURES IN SAS SOFTWARE

EPA Science Inventory

Interval estimates for nonlinear parameters using the linear approximation are sensitive to parameter curvature effects. he adequacy of the linear approximation (Wald) interval is determined using the nonlinearity measures of Bates and Watts (1980), and Clarke (1987b), and the pr...

13. Pigeons' Choices between Fixed-Interval and Random-Interval Schedules: Utility of Variability?

ERIC Educational Resources Information Center

Andrzejewski, Matthew E.; Cardinal, Claudia D.; Field, Douglas P.; Flannery, Barbara A.; Johnson, Michael; Bailey, Kathleen; Hineline, Philip N.

2005-01-01

Pigeons' choosing between fixed-interval and random-interval schedules of reinforcement was investigated in three experiments using a discrete-trial procedure. In all three experiments, the random-interval schedule was generated by sampling a probability distribution at an interval (and in multiples of the interval) equal to that of the…

14. Analytic approximations to the modon dispersion relation. [in oceanography

NASA Technical Reports Server (NTRS)

Boyd, J. P.

1981-01-01

Three explicit analytic approximations are given to the modon dispersion relation developed by Flierl et al. (1980) to describe Gulf Stream rings and related phenomena in the oceans and atmosphere. The solutions are in the form of k(q), and are developed in the form of a power series in q for small q, an inverse power series in 1/q for large q, and a two-point Pade approximant. The low order Pade approximant is shown to yield a solution for the dispersion relation with a maximum relative error for the lowest branch of the function equal to one in 700 in the q interval zero to infinity.

15. Non-intrusive hybrid interval method for uncertain nonlinear systems using derivative information

Liu, Zhuang-Zhuang; Wang, Tian-Shu; Li, Jun-Feng

2016-02-01

This paper proposes a new non-intrusive hybrid interval method using derivative information for the dynamic response analysis of nonlinear systems with uncertain-but-bounded parameters and/or initial conditions. This method provides tighter solution ranges compared to the existing polynomial approximation interval methods. Interval arithmetic using the Chebyshev basis and interval arithmetic using the general form modified affine basis for polynomials are developed to obtain tighter bounds for interval computation. To further reduce the overestimation caused by the "wrapping effect" of interval arithmetic, the derivative information of dynamic responses is used to achieve exact solutions when the dynamic responses are monotonic with respect to all the uncertain variables. Finally, two typical numerical examples with nonlinearity are applied to demonstrate the effectiveness of the proposed hybrid interval method, in particular, its ability to effectively control the overestimation for specific timepoints.
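The overestimation that the method is designed to control can be reproduced with a minimal interval type; this is plain interval arithmetic, not the Chebyshev or modified affine forms the paper develops.

```python
class Interval:
    """Minimal interval arithmetic, enough to show the dependency
    problem behind the wrapping effect: each occurrence of a variable
    is treated as an independent interval."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)
    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)
    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

x = Interval(1, 2)
d = x - x   # naive interval arithmetic gives [-1, 1], not the exact [0, 0]
```

Repeating such operations along a time-stepping scheme compounds the overestimation, which is why the paper tracks dependencies (Chebyshev/affine bases) and exploits monotonicity where possible.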

16. An improved proximity force approximation for electrostatics

SciTech Connect

Fosco, Cesar D.; Lombardo, Fernando C.; Mazzitelli, Francisco D.

2012-08-15

A quite straightforward approximation for the electrostatic interaction between two perfectly conducting surfaces suggests itself when the distance between them is much smaller than the characteristic lengths associated with their shapes. Indeed, in the so-called 'proximity force approximation' the electrostatic force is evaluated by first dividing each surface into a set of small flat patches, and then adding up the forces due to opposite pairs, the contributions of which are approximated as due to pairs of parallel planes. This approximation has been widely and successfully applied in different contexts, ranging from nuclear physics to Casimir effect calculations. We present here an improvement on this approximation, based on a derivative expansion for the electrostatic energy contained between the surfaces. The results obtained could be useful for discussing the geometric dependence of the electrostatic force, and also as a convenient benchmark for numerical analyses of the tip-sample electrostatic interaction in atomic force microscopes. Highlights:
- The proximity force approximation (PFA) has been widely used in different areas.
- The PFA can be improved using a derivative expansion in the shape of the surfaces.
- We use the improved PFA to compute electrostatic forces between conductors.
- The results can be used as an analytic benchmark for numerical calculations in AFM.
- Insight is provided for people who use the PFA to compute nuclear and Casimir forces.

17. Second derivatives for approximate spin projection methods

SciTech Connect

Thompson, Lee M.; Hratchian, Hrant P.

2015-02-07

The use of broken-symmetry electronic structure methods is required in order to obtain correct behavior of electronically strained open-shell systems, such as transition states, biradicals, and transition metals. This approach often has issues with spin contamination, which can lead to significant errors in predicted energies, geometries, and properties. Approximate projection schemes are able to correct for spin contamination and can often yield improved results. To fully make use of these methods and to carry out exploration of the potential energy surface, it is desirable to develop an efficient second energy derivative theory. In this paper, we formulate the analytical second derivatives for the Yamaguchi approximate projection scheme, building on recent work that has yielded an efficient implementation of the analytical first derivatives.

18. Hubbard-U corrected Hamiltonians for non-self-consistent random-phase approximation total-energy calculations: A study of ZnS, TiO2, and NiO

Patrick, Christopher E.; Thygesen, Kristian S.

2016-01-01

In non-self-consistent calculations of the total energy within the random-phase approximation (RPA) for electronic correlation, it is necessary to choose a single-particle Hamiltonian whose solutions are used to construct the electronic density and noninteracting response function. Here we investigate the effect of including a Hubbard-U term in this single-particle Hamiltonian, to better describe the on-site correlation of 3d electrons in the transition metal compounds ZnS, TiO2, and NiO. We find that the RPA lattice constants are essentially independent of U, despite large changes in the underlying electronic structure. We further demonstrate that the non-self-consistent RPA total energies of these materials have minima at nonzero U. Our RPA calculations find the rutile phase of TiO2 to be more stable than anatase independent of U, a result which is consistent with experiments and qualitatively different from that found from calculations employing U-corrected (semi)local functionals. However, we also find that the +U term cannot be used to correct the RPA's poor description of the heat of formation of NiO.

19. New approach to description of (d,xn) spectra at energies below 50 MeV in Monte Carlo simulation by intra-nuclear cascade code with Distorted Wave Born Approximation

Hashimoto, S.; Iwamoto, Y.; Sato, T.; Niita, K.; Boudard, A.; Cugnon, J.; David, J.-C.; Leray, S.; Mancusi, D.

2014-08-01

A new approach to describing neutron spectra of deuteron-induced reactions in the Monte Carlo simulation for particle transport has been developed by combining the Intra-Nuclear Cascade of Liège (INCL) and the Distorted Wave Born Approximation (DWBA) calculation. We incorporated this combined method into the Particle and Heavy Ion Transport code System (PHITS) and applied it to estimate (d,xn) spectra on natLi, 9Be, and natC targets at incident energies ranging from 10 to 40 MeV. Double differential cross sections obtained by INCL and DWBA successfully reproduced broad peaks and discrete peaks, respectively, at the same energies as those observed in experimental data. Furthermore, an excellent agreement was observed between experimental data and PHITS-derived results using the combined method in thick target neutron yields over a wide range of neutron emission angles in the reactions. We also applied the new method to estimate (d,xp) spectra in the reactions, and discussed the validity for the proton emission spectra.

20. An approximate classical unimolecular reaction rate theory

Zhao, Meishan; Rice, Stuart A.

1992-05-01

We describe a classical theory of unimolecular reaction rate which is derived from the analysis of Davis and Gray by use of simplifying approximations. These approximations concern the calculation of the locations of, and the fluxes of phase points across, the bottlenecks to fragmentation and to intramolecular energy transfer. The bottleneck to fragment separation is represented as a vibration-rotation state dependent separatrix, which approximation is similar to but extends and improves the approximations for the separatrix introduced by Gray, Rice, and Davis and by Zhao and Rice. The novel feature in our analysis is the representation of the bottlenecks to intramolecular energy transfer as dividing surfaces in phase space; the locations of these dividing surfaces are determined by the same conditions as locate the remnants of robust tori with frequency ratios related to the golden mean (in a two degree of freedom system these are the cantori). The flux of phase points across each dividing surface is calculated with an analytic representation instead of a stroboscopic mapping. The rate of unimolecular reaction is identified with the net rate at which phase points escape from the region of quasiperiodic bounded motion to the region of free fragment motion by consecutively crossing the dividing surfaces for intramolecular energy exchange and the separatrix. This new theory generates predictions of the rates of predissociation of the van der Waals molecules HeI2, NeI2 and ArI2 which are in very good agreement with available experimental data.

1. Min and Max Extreme Interval Values

ERIC Educational Resources Information Center

Jance, Marsha L.; Thomopoulos, Nick T.

2011-01-01

The paper shows how to find the min and max extreme interval values for the exponential and triangular distributions from the min and max uniform extreme interval values. Tables are provided to show the min and max extreme interval values for the uniform, exponential, and triangular distributions for different probabilities and observation sizes.

2. Familiarity-Frequency Ratings of Melodic Intervals

ERIC Educational Resources Information Center

Jeffries, Thomas B.

1972-01-01

Objective of this study was to determine subjects' reliability in rating randomly played ascending and descending melodic intervals within the octave on the basis of their familiarity with each type of interval and the frequency of their having experienced each type of interval in music. (Author/CB)

3. Cavity approximation for graphical models.

PubMed

Rizzo, T; Wemmenhove, B; Kappen, H J

2007-07-01

We reformulate the cavity approximation (CA), a class of algorithms recently introduced for improving the Bethe approximation estimates of marginals in graphical models. In our formulation, which allows for the treatment of multivalued variables, a further generalization to factor graphs with arbitrary order of interaction factors is explicitly carried out, and a message passing algorithm that implements the first order correction to the Bethe approximation is described. Furthermore, we investigate an implementation of the CA for pairwise interactions. In all cases considered we could confirm that CA[k] with increasing k provides a sequence of approximations of markedly increasing precision. Furthermore, in some cases we could also confirm the general expectation that the approximation of order k, whose computational complexity is O(N^(k+1)), has an error that scales as 1/N^(k+1) with the size of the system. We discuss the relation between this approach and some recent developments in the field. PMID:17677405

4. Approximate circuits for increased reliability

SciTech Connect

Hamlet, Jason R.; Mayo, Jackson R.

2015-08-18

Embodiments of the invention describe a Boolean circuit having a voter circuit and a plurality of approximate circuits each based, at least in part, on a reference circuit. The approximate circuits are each to generate one or more output signals based on values of received input signals. The voter circuit is to receive the one or more output signals generated by each of the approximate circuits, and is to output one or more signals corresponding to a majority value of the received signals. At least some of the approximate circuits are to generate an output value different than the reference circuit for one or more input signal values; however, for each possible input signal value, the majority values of the one or more output signals generated by the approximate circuits and received by the voter circuit correspond to output signal result values of the reference circuit.

5. Approximate circuits for increased reliability

SciTech Connect

Hamlet, Jason R.; Mayo, Jackson R.

2015-12-22

Embodiments of the invention describe a Boolean circuit having a voter circuit and a plurality of approximate circuits each based, at least in part, on a reference circuit. The approximate circuits are each to generate one or more output signals based on values of received input signals. The voter circuit is to receive the one or more output signals generated by each of the approximate circuits, and is to output one or more signals corresponding to a majority value of the received signals. At least some of the approximate circuits are to generate an output value different than the reference circuit for one or more input signal values; however, for each possible input signal value, the majority values of the one or more output signals generated by the approximate circuits and received by the voter circuit correspond to output signal result values of the reference circuit.
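The voting scheme in the two patent records above can be sketched in a few lines; the AND-gate reference circuit and the particular approximate variants are illustrative assumptions, not from the patents.

```python
def voter(outputs):
    """Bitwise majority vote over an odd number of 1-bit circuit outputs."""
    return 1 if sum(outputs) > len(outputs) // 2 else 0

def reference(a, b):
    """Reference circuit: a 2-input AND gate (illustrative choice)."""
    return a & b

# Hypothetical approximate variants; one deliberately differs from the
# reference on some inputs, as the claims permit.
def approx1(a, b): return a & b
def approx2(a, b): return a | b   # wrong whenever a != b
def approx3(a, b): return a & b

# For every possible input, the majority of the approximate outputs
# still matches the reference circuit's output.
agree = all(voter([approx1(a, b), approx2(a, b), approx3(a, b)])
            == reference(a, b)
            for a in (0, 1) for b in (0, 1))
```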

6. Structural optimization with approximate sensitivities

NASA Technical Reports Server (NTRS)

Patnaik, S. N.; Hopkins, D. A.; Coroneos, R.

1994-01-01

Computational efficiency in structural optimization can be enhanced if the intensive computations associated with the calculation of the sensitivities, that is, gradients of the behavior constraints, are reduced. An approximation to the gradients of the behavior constraints that can be generated with a small amount of numerical calculation is proposed. Structural optimization with these approximate sensitivities produced the correct optimum solution. The approximate gradients performed well for different nonlinear programming methods, such as the sequence of unconstrained minimization technique, the method of feasible directions, sequential quadratic programming, and sequential linear programming. Structural optimization with approximate gradients can reduce by one third the CPU time that would otherwise be required to solve the problem with explicit closed-form gradients. The proposed gradient approximation shows potential to reduce the intensive computation that has been associated with traditional structural optimization.
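As a generic stand-in for the idea of trading gradient accuracy for cost, the sketch below uses forward finite differences; the paper's own approximation is a cheaper analytical construction, not finite differencing, so this is purely illustrative.

```python
def approx_gradient(f, x, h=1e-6):
    """Forward-difference approximation to the gradient of f at x.
    Costs len(x) + 1 function evaluations instead of an exact
    closed-form sensitivity analysis."""
    fx = f(x)
    grad = []
    for i in range(len(x)):
        xp = list(x)
        xp[i] += h
        grad.append((f(xp) - fx) / h)
    return grad

# Quadratic test function f(x) = x0^2 + 3*x1^2; exact gradient at
# (1, 1) is (2, 6).
g = approx_gradient(lambda x: x[0]**2 + 3 * x[1]**2, [1.0, 1.0])
```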

7. Electrocardiographic Abnormalities and QTc Interval in Patients Undergoing Hemodialysis

PubMed Central

Nie, Yuxin; Zou, Jianzhou; Liang, Yixiu; Shen, Bo; Liu, Zhonghua; Cao, Xuesen; Chen, Xiaohong; Ding, Xiaoqiang

2016-01-01

Background Sudden cardiac death is one of the primary causes of mortality in chronic hemodialysis (HD) patients. A prolonged QTc interval is associated with an increased rate of sudden cardiac death. The aim of this article is to assess the abnormalities found in electrocardiograms (ECGs), and to explore factors that can influence the QTc interval. Methods A total of 141 conventional HD patients were enrolled in this study. ECG tests were conducted on each patient before a single dialysis session and 15 minutes before the end of the dialysis session (at peak stress). Echocardiography tests were conducted before the dialysis session began. Blood samples were drawn by phlebotomy immediately before and after the dialysis session. Results Before dialysis, 93.62% of the patients were in sinus rhythm, and approximately 65% of the patients showed a prolonged QTc interval (i.e., a QTc interval above 440 ms in males and above 460 ms in females). A comparison of ECG parameters before dialysis and at peak stress showed increases in heart rate (77.45±11.92 vs. 80.38±14.65 bpm, p = 0.001) and QTc interval (460.05±24.53 ms vs. 470.93±24.92 ms, p<0.001). After dividing patients into two groups according to the QTc interval, lower pre-dialysis serum concentrations of potassium (K+), calcium (Ca2+), and phosphorus, a lower calcium-phosphorus product (Ca*P), and higher concentrations of plasma brain natriuretic peptide (BNP) were found in the group with prolonged QTc intervals. Patients in this group also had a larger left atrial diameter (LAD) and a thicker interventricular septum, and they tended to be older than patients in the other group. The patients were then divided into two groups according to ΔQTc (ΔQTc = QTc at peak stress - QTc pre-HD). When analyzing the patients whose QTc intervals were longer at peak stress than before HD, we found that they had higher concentrations of Ca2+ and P5+ and lower concentrations of K+, ferritin, UA, and BNP. They were also more likely to be female. In addition, more cardiac
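QTc values like those reported are heart-rate-corrected QT measurements. The sketch below uses Bazett's formula, which is an assumption on our part since the study does not state which correction method it used.

```python
import math

def qtc_bazett(qt_ms, heart_rate_bpm):
    """Heart-rate-corrected QT interval via Bazett's formula
    (assumed correction; the study does not specify its method):
    QTc = QT / sqrt(RR), with the RR interval in seconds."""
    rr_s = 60.0 / heart_rate_bpm
    return qt_ms / math.sqrt(rr_s)

# At 77 bpm, a measured QT of 405 ms corrects to roughly 460 ms,
# near the pre-dialysis mean QTc reported in the study.
qtc = qtc_bazett(405, 77)
```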

8. Dissimilar Physiological and Perceptual Responses Between Sprint Interval Training and High-Intensity Interval Training.

PubMed

Wood, Kimberly M; Olive, Brittany; LaValle, Kaylyn; Thompson, Heather; Greer, Kevin; Astorino, Todd A

2016-01-01

High-intensity interval training (HIIT) and sprint interval training (SIT) elicit similar cardiovascular and metabolic adaptations vs. endurance training. No study, however, has investigated acute physiological changes during HIIT vs. SIT. This study compared acute changes in heart rate (HR), blood lactate concentration (BLa), oxygen uptake (VO2), affect, and rating of perceived exertion (RPE) during HIIT and SIT. Active adults (4 women and 8 men, age = 24.2 ± 6.2 years) initially performed a VO2max test to determine workload for both sessions on the cycle ergometer, whose order was randomized. Sprint interval training consisted of 8 bouts of 30 seconds of all-out cycling at 130% of maximum Watts (Wmax). High-intensity interval training consisted of eight 60-second bouts at 85% Wmax. Heart rate, VO2, BLa, affect, and RPE were continuously assessed throughout exercise. Repeated-measures analysis of variance revealed a significant difference between HIIT and SIT for VO2 (p < 0.001), HR (p < 0.001), RPE (p = 0.03), and BLa (p = 0.049). Conversely, there was no significant difference between regimens for affect (p = 0.12). Energy expenditure was significantly higher (p = 0.02) in HIIT (209.3 ± 40.3 kcal) vs. SIT (193.5 ± 39.6 kcal). During HIIT, subjects burned significantly more calories and reported lower perceived exertion than SIT. The higher VO2 and lower BLa in HIIT vs. SIT reflected dissimilar metabolic perturbation between regimens, which may elicit unique long-term adaptations. If an individual is seeking to burn slightly more calories, maintain a higher oxygen uptake, and perceive less exertion during exercise, HIIT is the recommended routine. PMID:26691413
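As a rough arithmetic check on the two regimens described above, one can compare the external mechanical work each prescribes. The Wmax value below is hypothetical, since the abstract does not report subjects' mean peak power; the comparison direction (more work in HIIT) is consistent with the reported energy expenditure.

```python
def protocol_work_kj(bouts, seconds, fraction_wmax, wmax_watts):
    """External work in kJ: power (W) x time (s), summed over bouts."""
    return bouts * seconds * fraction_wmax * wmax_watts / 1000.0

WMAX = 250.0  # hypothetical peak aerobic power (W); not reported in the abstract

sit = protocol_work_kj(8, 30, 1.30, WMAX)   # SIT: 8 x 30 s at 130% Wmax
hiit = protocol_work_kj(8, 60, 0.85, WMAX)  # HIIT: 8 x 60 s at 85% Wmax
print(sit, hiit)  # HIIT prescribes more total external work than SIT
```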

9. Approximate Genealogies Under Genetic Hitchhiking

PubMed Central

Pfaffelhuber, P.; Haubold, B.; Wakolbinger, A.

2006-01-01

The rapid fixation of an advantageous allele leads to a reduction in linked neutral variation around the target of selection. The genealogy at a neutral locus in such a selective sweep can be simulated by first generating a random path of the advantageous allele's frequency and then a structured coalescent in this background. Usually the frequency path is approximated by a logistic growth curve. We discuss an alternative method that approximates the genealogy by a random binary splitting tree, a so-called Yule tree that does not require first constructing a frequency path. Compared to the coalescent in a logistic background, this method gives a slightly better approximation for identity by descent during the selective phase and a much better approximation for the number of lineages that stem from the founder of the selective sweep. In applications such as the approximation of the distribution of Tajima's D, the two approximation methods perform equally well. For relevant parameter ranges, the Yule approximation is faster. PMID:17182733

10. Confidence Intervals in QTL Mapping by Bootstrapping

PubMed Central

Visscher, P. M.; Thompson, R.; Haley, C. S.

1996-01-01

The determination of empirical confidence intervals for the location of quantitative trait loci (QTLs) was investigated using simulation. Empirical confidence intervals were calculated using a bootstrap resampling method for a backcross population derived from inbred lines. Sample sizes were either 200 or 500 individuals, and the QTL explained 1, 5, or 10% of the phenotypic variance. The method worked well in that the proportion of empirical confidence intervals that contained the simulated QTL was close to expectation. In general, the confidence intervals were slightly conservatively biased. Correlations between the test statistic and the width of the confidence interval were strongly negative, so that the stronger the evidence for a QTL segregating, the smaller the empirical confidence interval for its location. The size of the average confidence interval depended heavily on the population size and the effect of the QTL. Marker spacing had only a small effect on the average empirical confidence interval. The LOD drop-off method to calculate empirical support intervals gave confidence intervals that generally were too small, in particular if confidence intervals were calculated only for samples above a certain significance threshold. The bootstrap method is easy to implement and is useful in the analysis of experimental data. PMID:8725246
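The resampling scheme described above can be sketched as a percentile bootstrap; here it is applied to a sample mean rather than a QTL location estimator, and all names and data are illustrative.

```python
import random

def bootstrap_ci(data, stat, n_boot=2000, alpha=0.05, seed=1):
    """Percentile-bootstrap confidence interval: resample with replacement,
    recompute the statistic, take the central (1 - alpha) empirical quantiles."""
    rng = random.Random(seed)
    reps = sorted(
        stat([rng.choice(data) for _ in range(len(data))])
        for _ in range(n_boot)
    )
    lo = reps[int(n_boot * alpha / 2)]
    hi = reps[int(n_boot * (1 - alpha / 2)) - 1]
    return lo, hi

sample = [4.1, 5.0, 4.7, 5.3, 4.9, 5.1, 4.6, 5.2, 4.8, 5.0]
mean = lambda xs: sum(xs) / len(xs)
print(bootstrap_ci(sample, mean))  # interval containing the sample mean 4.87
```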

11. Approximation methods in gravitational-radiation theory

NASA Technical Reports Server (NTRS)

Will, C. M.

1986-01-01

The observation of gravitational-radiation damping in the binary pulsar PSR 1913 + 16 and the ongoing experimental search for gravitational waves of extraterrestrial origin have made the theory of gravitational radiation an active branch of classical general relativity. In calculations of gravitational radiation, approximation methods play a crucial role. Recent developments are summarized in two areas in which approximations are important: (a) the quadrupole approximation, which determines the energy flux and the radiation reaction forces in weak-field, slow-motion, source-within-the-near-zone systems such as the binary pulsar; and (b) the normal modes of oscillation of black holes, where the Wentzel-Kramers-Brillouin approximation gives accurate estimates of the complex frequencies of the modes.

12. Military Applicability of Interval Training for Health and Performance.

PubMed

Gibala, Martin J; Gagnon, Patrick J; Nindl, Bradley C

2015-11-01

Militaries from around the globe have predominantly used endurance training as their primary mode of aerobic physical conditioning, with historical emphasis placed on the long distance run. In contrast to this traditional exercise approach to training, interval training is characterized by brief, intermittent bouts of intense exercise, separated by periods of lower intensity exercise or rest for recovery. Although hardly a novel concept, research over the past decade has shed new light on the potency of interval training to elicit physiological adaptations in a time-efficient manner. This work has largely focused on the benefits of low-volume interval training, which involves a relatively small total amount of exercise, as compared with the traditional high-volume approach to training historically favored by militaries. Studies that have directly compared interval and moderate-intensity continuous training have shown similar improvements in cardiorespiratory fitness and the capacity for aerobic energy metabolism, despite large differences in total exercise and training time commitment. Interval training can also be applied in a calisthenics manner to improve cardiorespiratory fitness and strength, and this approach could easily be incorporated into a military conditioning environment. Although interval training can elicit physiological changes in men and women, the potential for sex-specific differences in the adaptive response to interval training warrants further investigation. Additional work is needed to clarify adaptations occurring over the longer term; however, interval training deserves consideration from a military applicability standpoint as a time-efficient training strategy to enhance soldier health and performance. There is value for military leaders in identifying strategies that reduce the time required for exercise, but nonetheless provide an effective training stimulus. PMID:26506197

13. Mathematical algorithms for approximate reasoning

NASA Technical Reports Server (NTRS)

Murphy, John H.; Chay, Seung C.; Downs, Mary M.

1988-01-01

Most state-of-the-art expert system environments contain a single, often ad hoc strategy for approximate reasoning. Some environments provide facilities to program the approximate reasoning algorithms. However, the next generation of expert systems should have an environment which contains a choice of several mathematical algorithms for approximate reasoning. To meet the need for validatable and verifiable coding, the expert system environment must no longer depend upon ad hoc reasoning techniques but instead must include mathematically rigorous techniques for approximate reasoning. Popular approximate reasoning techniques are reviewed, including certainty factors, belief measures, Bayesian probabilities, fuzzy logic, and Shafer-Dempster techniques for reasoning. The focus is on a group of mathematically rigorous algorithms for approximate reasoning that could form the basis of a next-generation expert system environment. These algorithms are based upon the axioms of set theory and probability theory. To distinguish these algorithms for approximate reasoning, various conditions of mutual exclusivity and independence are imposed upon the assertions. Approximate reasoning algorithms presented include: reasoning with statistically independent assertions, reasoning with mutually exclusive assertions, reasoning with assertions that exhibit minimum overlap within the state space, reasoning with assertions that exhibit maximum overlap within the state space (i.e., fuzzy logic), pessimistic reasoning (i.e., worst-case analysis), optimistic reasoning (i.e., best-case analysis), and reasoning with assertions with absolutely no knowledge of the possible dependency among the assertions. A robust environment for expert system construction should include the two modes of inference: modus ponens and modus tollens. Modus ponens inference is based upon reasoning towards the conclusion in a statement of logical implication, whereas modus tollens inference is based upon reasoning away

14. Exponential approximations in optimal design

NASA Technical Reports Server (NTRS)

Belegundu, A. D.; Rajan, S. D.; Rajgopal, J.

1990-01-01

One-point and two-point exponential functions have been developed and shown to be very effective approximations of structural response. The exponential fit has been compared with the linear, reciprocal, and quadratic fit methods on four test problems selected from structural analysis. Such approximations are attractive in structural optimization because they reduce the number of exact analyses, which involve computationally expensive finite element analysis.

15. Approximate factorization with source terms

NASA Technical Reports Server (NTRS)

Shih, T. I.-P.; Chyu, W. J.

1991-01-01

A comparative evaluation is made of three methodologies to determine which offers the smallest approximate factorization error. While two of these methods lead to more efficient algorithms in cases where factors that do not contain source terms can be diagonalized, the third method generates the lowest approximate factorization error. This third method may be preferred when the norms of the source terms are large and transient solutions are of interest.

16. Spectrally Invariant Approximation within Atmospheric Radiative Transfer

NASA Technical Reports Server (NTRS)

Marshak, A.; Knyazikhin, Y.; Chiu, J. C.; Wiscombe, W. J.

2011-01-01

Certain algebraic combinations of single scattering albedo and solar radiation reflected from, or transmitted through, vegetation canopies do not vary with wavelength. These spectrally invariant relationships are the consequence of wavelength independence of the extinction coefficient and scattering phase function in vegetation. In general, this wavelength independence does not hold in the atmosphere, but in cloud-dominated atmospheres the total extinction and total scattering phase function vary only weakly with wavelength. This paper identifies the atmospheric conditions under which the spectrally invariant approximation can accurately describe the extinction and scattering properties of cloudy atmospheres. The validity of the assumptions and the accuracy of the approximation are tested with 1D radiative transfer calculations using publicly available radiative transfer models: Discrete Ordinate Radiative Transfer (DISORT) and Santa Barbara DISORT Atmospheric Radiative Transfer (SBDART). It is shown for cloudy atmospheres with cloud optical depth above 3, and for spectral intervals that exclude strong water vapor absorption, that the spectrally invariant relationships found in vegetation canopy radiative transfer are valid to better than 5%. The physics behind this phenomenon, its mathematical basis, and possible applications to remote sensing and climate are discussed.

17. Approximation of Failure Probability Using Conditional Sampling

NASA Technical Reports Server (NTRS)

Giesy, Daniel P.; Crespo, Luis G.; Kenney, Sean P.

2008-01-01

In analyzing systems which depend on uncertain parameters, one technique is to partition the uncertain parameter domain into a failure set and its complement, and judge the quality of the system by estimating the probability of failure. If this is done by a sampling technique such as Monte Carlo and the probability of failure is small, accurate approximation can require so many sample points that the computational expense is prohibitive. Previous work of the authors has shown how to bound the failure event by sets of such simple geometry that their probabilities can be calculated analytically. In this paper, it is shown how to make use of these failure bounding sets and conditional sampling within them to substantially reduce the computational burden of approximating failure probability. It is also shown how the use of these sampling techniques improves the confidence intervals for the failure probability estimate for a given number of sample points and how they reduce the number of sample point analyses needed to achieve a given level of confidence.
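A minimal sketch of the baseline the paper improves upon: plain Monte Carlo estimation of a small failure probability with a normal-approximation confidence interval. The failure set used here (|x| > 2.5 for a standard normal) is an illustrative stand-in, not the paper's system, and it shows why small probabilities demand many samples.

```python
import math
import random

def mc_failure_probability(n, threshold=2.5, seed=7):
    """Estimate P(|X| > threshold) for X ~ N(0, 1) by plain Monte Carlo,
    with a 95% normal-approximation (Wald) confidence interval."""
    rng = random.Random(seed)
    failures = sum(1 for _ in range(n) if abs(rng.gauss(0, 1)) > threshold)
    p = failures / n
    half = 1.96 * math.sqrt(p * (1 - p) / n)  # half-width shrinks like 1/sqrt(n)
    return p, (p - half, p + half)

p, (lo, hi) = mc_failure_probability(100_000)
print(p, lo, hi)  # the true probability is about 0.0124
```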

18. Intervals in evolutionary algorithms for global optimization

SciTech Connect

Patil, R.B.

1995-05-01

Optimization is of central concern to a number of disciplines. Interval arithmetic methods for global optimization provide (guaranteed) verified results. These methods are mainly restricted to classes of objective functions that are twice differentiable, and they use a simple strategy of eliminating and splitting larger regions of the search space during the global optimization process. An efficient approach is proposed that combines the elimination strategy of interval global optimization methods with the robustness of evolutionary algorithms. In the proposed approach, search begins with randomly created interval vectors with interval widths equal to the whole domain. Before the evolutionary process begins, the fitness of these interval parameter vectors is defined by evaluating the objective function at the center of the initial interval vectors. In the subsequent evolutionary process, a local optimization step returns an estimate of the bounds of the objective function over the interval vectors. Though these bounds may not be correct at the beginning, owing to large interval widths and complicated function properties, the process of reducing interval widths over time, together with a selection approach similar to simulated annealing, helps in estimating reasonably correct bounds as the population evolves. The interval parameter vectors at these estimated bounds (local optima) are then subjected to crossover and mutation operators. This evolutionary process continues for a predetermined number of generations in the search for the global optimum.
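The eliminate-and-split strategy borrowed from interval global optimization can be sketched as a one-dimensional branch-and-bound; the evolutionary layer (crossover and mutation over interval vectors) is omitted, and the objective and its hand-derived bound below are illustrative stand-ins for interval-arithmetic bounds.

```python
# Illustrative objective with a known minimum of 0.5 at x = 1.3.
def f(x):
    return (x - 1.3) ** 2 + 0.5

def f_lower_bound(lo, hi):
    """A valid lower bound for f on [lo, hi] (derived by hand here;
    interval arithmetic would compute such bounds automatically)."""
    if lo <= 1.3 <= hi:
        return 0.5
    d = min(abs(lo - 1.3), abs(hi - 1.3))
    return d * d + 0.5

def interval_minimize(lo, hi, tol=1e-6):
    best = f((lo + hi) / 2)               # incumbent from a midpoint evaluation
    boxes = [(lo, hi)]
    while boxes:
        a, b = boxes.pop()
        if f_lower_bound(a, b) > best:    # eliminate: box cannot beat incumbent
            continue
        mid = (a + b) / 2
        best = min(best, f(mid))          # update incumbent
        if b - a > tol:                   # split surviving boxes
            boxes += [(a, mid), (mid, b)]
    return best

print(interval_minimize(-10.0, 10.0))  # converges to the true minimum 0.5
```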

19. Estimation of confidence intervals of global horizontal irradiance obtained from a weather prediction model

Ohtake, Hideaki; Gari da Silva Fonseca, Joao, Jr.; Takashima, Takumi; Oozeki, Takashi; Yamada, Yoshinori

2014-05-01

Many photovoltaic (PV) systems have been installed in Japan after the introduction of the feed-in tariff. For the energy management of electric power systems that include many PV systems, forecasting PV power production is a useful technology. Recently, numerical weather predictions have been applied to forecast PV power production, but the forecasted values invariably contain errors specific to each modeling system, so the forecast data must be used with its error taken into account. In this study, we attempted to estimate confidence intervals for hourly forecasts of global horizontal irradiance (GHI) obtained from a mesoscale model (MSM) developed by the Japan Meteorological Agency. In a recent study, we found that the forecasted GHI values of the MSM have two systematic errors: first, the forecast values of the GHI depend on the clearness indices, which are defined as the GHI values divided by the extraterrestrial solar irradiance; second, the forecast errors have seasonal variations, with overestimation of the GHI forecasts in winter and underestimation in summer. Information on the errors of the hourly GHI forecasts, that is, the confidence intervals of the forecasts, is of great significance for an electric company planning the energy management of a system including many PV systems. Confidence intervals of the GHI forecasts are required either for a pinpoint area or for a relatively large area controlling the power system. For the relatively large area, a spatial-smoothing method was applied to both the observed and the forecasted GHI values. The spatial-smoothing method narrowed the confidence intervals of the hourly GHI forecasts in an extreme event of the GHI forecast (a case of large forecast error) over the relatively large area served by the Tokyo electric company (by approximately 68% compared with a pinpoint forecast). For more credible estimation of the confidence

20. Microscopic justification of the equal filling approximation

SciTech Connect

Perez-Martin, Sara; Robledo, L. M.

2008-07-15

The equal filling approximation, a procedure widely used in mean-field calculations to treat the dynamics of odd nuclei in a time-reversal invariant way, is justified as the consequence of a variational principle over an average energy functional. The ideas of statistical quantum mechanics are employed in the justification. As an illustration of the method, the ground and lowest-lying states of some octupole deformed radium isotopes are computed.

1. Photoelectron spectroscopy and the dipole approximation

SciTech Connect

Hemmers, O.; Hansen, D.L.; Wang, H.

1997-04-01

Photoelectron spectroscopy is a powerful technique because it directly probes, via the measurement of photoelectron kinetic energies, orbital and band structure in valence and core levels in a wide variety of samples. The technique becomes even more powerful when it is performed in an angle-resolved mode, where photoelectrons are distinguished not only by their kinetic energy, but by their direction of emission as well. Determining the probability of electron ejection as a function of angle probes the different quantum-mechanical channels available to a photoemission process, because it is sensitive to phase differences among the channels. As a result, angle-resolved photoemission has been used successfully for many years to provide stringent tests of the understanding of basic physical processes underlying gas-phase and solid-state interactions with radiation. One mainstay in the application of angle-resolved photoelectron spectroscopy is the well-known electric-dipole approximation for photon interactions. In this simplification, all higher-order terms, such as those due to electric-quadrupole and magnetic-dipole interactions, are neglected. As the photon energy increases, however, effects beyond the dipole approximation become important. To best determine the range of validity of the dipole approximation, photoemission measurements on a simple atomic system, neon, where extra-atomic effects cannot play a role, were performed at BL 8.0. The measurements show that deviations from "dipole" expectations in angle-resolved valence photoemission are observable for photon energies down to at least 0.25 keV, and are quite significant at energies around 1 keV. From these results, it is clear that non-dipole angular-distribution effects may need to be considered in any application of angle-resolved photoelectron spectroscopy that uses x-ray photons of energies as low as a few hundred eV.

2. Virial expansion coefficients in the harmonic approximation.

PubMed

Armstrong, J R; Zinner, N T; Fedorov, D V; Jensen, A S

2012-08-01

The virial expansion method is applied within a harmonic approximation to an interacting N-body system of identical fermions. We compute the canonical partition functions for two and three particles to get the two lowest orders in the expansion. The energy spectrum is carefully interpolated to reproduce ground-state properties at low temperature and the noninteracting high-temperature limit of constant virial coefficients. This resembles the smearing of shell effects in finite systems with increasing temperature. Numerical results are discussed for the second and third virial coefficients as functions of dimension, temperature, interaction, and transition temperature between low- and high-energy limits. PMID:23005730

3. Wavelet Sparse Approximate Inverse Preconditioners

NASA Technical Reports Server (NTRS)

Chan, Tony F.; Tang, W.-P.; Wan, W. L.

1996-01-01

There is an increasing interest in using sparse approximate inverses as preconditioners for Krylov subspace iterative methods. Recent studies of Grote and Huckle and of Chow and Saad also show that a sparse approximate inverse preconditioner can be effective for a variety of matrices, e.g., the Harwell-Boeing collections. Nonetheless, a drawback is that it requires rapid decay of the inverse entries so that a sparse approximate inverse is possible. However, for the class of matrices that come from elliptic PDE problems, this assumption may not necessarily hold. Our main idea is to look for a basis, other than the standard one, such that a sparse representation of the inverse is feasible. A crucial observation is that the kind of matrices we are interested in typically have a piecewise smooth inverse. We exploit this fact by applying wavelet techniques to construct a better sparse approximate inverse in the wavelet basis. We justify theoretically and numerically that our approach is effective for matrices with smooth inverse. We emphasize that in this paper we have only presented the idea of wavelet approximate inverses and demonstrated its potential but have not yet developed a highly refined and efficient algorithm.

4. Approximate entropy of network parameters.

PubMed

West, James; Lacasa, Lucas; Severini, Simone; Teschendorff, Andrew

2012-04-01

We study the notion of approximate entropy within the framework of network theory. Approximate entropy is an uncertainty measure originally proposed in the context of dynamical systems and time series. We first define a purely structural entropy obtained by computing the approximate entropy of the so-called slide sequence. This is a surrogate of the degree sequence and it is suggested by the frequency partition of a graph. We examine this quantity for standard scale-free and Erdős-Rényi networks. By using classical results of Pincus, we show that our entropy measure often converges with network size to a certain binary Shannon entropy. As a second step, with specific attention to networks generated by dynamical processes, we investigate the approximate entropy of horizontal visibility graphs. Visibility graphs allow us to naturally associate with a network the notion of temporal correlations, therefore providing the measure with a dynamical garment. We show that approximate entropy distinguishes visibility graphs generated by processes with different complexity. This result further probes these networks as tools for the study of dynamical systems. Applications to certain biological data arising in cancer genomics are finally considered in the light of both approaches. PMID:22680542
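For reference, approximate entropy in Pincus's original formulation can be sketched as follows (Chebyshev distance, self-matches included); the parameter values m = 2 and r = 0.2 are conventional defaults, not necessarily the paper's settings.

```python
import math

def apen(series, m=2, r=0.2):
    """Approximate entropy ApEn(m, r) = Phi_m(r) - Phi_{m+1}(r), where
    Phi_k(r) averages the log-frequency of length-k template matches
    under the Chebyshev distance, self-matches included (Pincus)."""
    def phi(k):
        n = len(series) - k + 1
        templates = [series[i:i + k] for i in range(n)]
        total = 0.0
        for a in templates:
            c = sum(1 for b in templates
                    if max(abs(x - y) for x, y in zip(a, b)) <= r)
            total += math.log(c / n)  # c >= 1 because of the self-match
        return total / n
    return phi(m) - phi(m + 1)

print(apen([0, 1] * 20))  # periodic series: ApEn near zero
print(apen([1.0] * 30))   # constant series: ApEn exactly zero
```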

5. Approximate entropy of network parameters

West, James; Lacasa, Lucas; Severini, Simone; Teschendorff, Andrew

2012-04-01

We study the notion of approximate entropy within the framework of network theory. Approximate entropy is an uncertainty measure originally proposed in the context of dynamical systems and time series. We first define a purely structural entropy obtained by computing the approximate entropy of the so-called slide sequence. This is a surrogate of the degree sequence and it is suggested by the frequency partition of a graph. We examine this quantity for standard scale-free and Erdős-Rényi networks. By using classical results of Pincus, we show that our entropy measure often converges with network size to a certain binary Shannon entropy. As a second step, with specific attention to networks generated by dynamical processes, we investigate the approximate entropy of horizontal visibility graphs. Visibility graphs allow us to naturally associate with a network the notion of temporal correlations, therefore providing the measure with a dynamical garment. We show that approximate entropy distinguishes visibility graphs generated by processes with different complexity. This result further probes these networks as tools for the study of dynamical systems. Applications to certain biological data arising in cancer genomics are finally considered in the light of both approaches.

6. Interval velocity analysis using wave field continuation

SciTech Connect

Zhusheng, Z.

1992-01-01

In this paper, the author proposes a new interval velocity inversion method which, based on wave field continuation theory and fuzzy decision theory, uses CMP seismic gathers to automatically estimate interval velocity and two-way travel time in a layered medium. The interval velocity calculated directly from wave field continuation is not fully consistent with that derived from VSP data; the former is usually higher than the latter. Three major factors that influence the accuracy of interval velocities from wave field continuation are corrected, so that the two kinds of interval velocity agree well. The method yields better interval velocities, handles weak reflection waves, and resists noise well. It is a feasible method.
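For comparison, the classical Dix conversion is a common baseline for interval velocity estimation from CMP-derived stacking velocities, which the wave-field-continuation method above departs from; the velocities and times below are illustrative.

```python
import math

def dix_interval_velocity(v_rms1, t1, v_rms2, t2):
    """Dix equation: interval velocity between two-way times t1 < t2,
    given RMS (stacking) velocities at each time."""
    return math.sqrt((v_rms2 ** 2 * t2 - v_rms1 ** 2 * t1) / (t2 - t1))

# Illustrative values: RMS velocities 2000 and 2200 m/s at 1.0 and 1.4 s.
v_int = dix_interval_velocity(2000.0, 1.0, 2200.0, 1.4)
print(v_int)  # roughly 2634 m/s
```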

7. Capacitated max-Batching with Interval Graph Compatibilities

Nonner, Tim

We consider the problem of partitioning interval graphs into cliques of bounded size. Each interval has a weight, and the weight of a clique is the maximum weight of any interval in the clique. This natural graph problem can be interpreted as a batch scheduling problem. Solving a long-standing open problem, we show NP-hardness, even if the bound on the clique sizes is constant. Moreover, we give a PTAS based on a novel dynamic programming technique for this case.

8. A model of interval timing by neural integration

PubMed Central

Simen, Patrick; Balci, Fuat; deSouza, Laura; Cohen, Jonathan D.; Holmes, Philip

2011-01-01

We show that simple assumptions about neural processing lead to a model of interval timing as a temporal integration process, in which a noisy firing-rate representation of time rises linearly on average toward a response threshold over the course of an interval. Our assumptions include: that neural spike trains are approximately independent Poisson processes; that correlations among them can be largely cancelled by balancing excitation and inhibition; that neural populations can act as integrators; and that the objective of timed behavior is maximal accuracy and minimal variance. The model accounts for a variety of physiological and behavioral findings in rodents, monkeys and humans, including ramping firing rates between the onset of reward-predicting cues and the receipt of delayed rewards, and universally scale-invariant response time distributions in interval timing tasks. It furthermore makes specific, well-supported predictions about the skewness of these distributions, a feature of timing data that is usually ignored. The model also incorporates a rapid (potentially one-shot) duration-learning procedure. Human behavioral data support the learning rule’s predictions regarding learning speed in sequences of timed responses. These results suggest that simple, integration-based models should play as prominent a role in interval timing theory as they do in theories of perceptual decision making, and that a common neural mechanism may underlie both types of behavior. PMID:21697374
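The core mechanism described above, a noisy firing-rate representation rising linearly to a response threshold, can be sketched as a first-passage-time simulation; the parameters are illustrative, not fitted values from the paper.

```python
import random

def timed_response(drift, noise_sd, threshold=1.0, dt=0.001, rng=None):
    """First time a noisy linear integrator crosses the response threshold."""
    rng = rng or random.Random()
    x, t = 0.0, 0.0
    while x < threshold:
        # Euler step of dx = drift*dt + noise_sd*sqrt(dt)*N(0, 1)
        x += drift * dt + noise_sd * (dt ** 0.5) * rng.gauss(0, 1)
        t += dt
    return t

rng = random.Random(0)
times = [timed_response(drift=0.5, noise_sd=0.1, rng=rng) for _ in range(200)]
mean_t = sum(times) / len(times)
print(mean_t)  # mean response time near threshold / drift = 2.0 s
```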

9. Gadgets, approximation, and linear programming

SciTech Connect

Trevisan, L.; Sudan, M.; Sorkin, G.B.; Williamson, D.P.

1996-12-31

We present a linear-programming based method for finding "gadgets", i.e., combinatorial structures reducing constraints of one optimization problem to constraints of another. A key step in this method is a simple observation which limits the search space to a finite one. Using this new method we present a number of new, computer-constructed gadgets for several different reductions. The method also answers a previously posed question on how to prove the optimality of gadgets: we show how LP duality gives such proofs. The new gadgets improve hardness results for MAX CUT and MAX DICUT, showing that approximating these problems to within factors of 60/61 and 44/45, respectively, is NP-hard. We also use the gadgets to obtain an improved approximation algorithm for MAX 3SAT which guarantees an approximation ratio of .801. This improves upon the previous best bound of .7704.

10. Interval Management Display Design Study

NASA Technical Reports Server (NTRS)

Baxley, Brian T.; Beyer, Timothy M.; Cooke, Stuart D.; Grant, Karlus A.

2014-01-01

In 2012, the Federal Aviation Administration (FAA) estimated that U.S. commercial air carriers moved 736.7 million passengers over 822.3 billion revenue-passenger miles. The FAA also forecasts, in that same report, an average annual increase in passenger traffic of 2.2 percent per year for the next 20 years, which approximates to one-and-a-half times the number of today's aircraft operations and passengers by the year 2033. If airspace capacity and throughput remain unchanged, then flight delays will increase, particularly at those airports already operating near or at capacity. Therefore, it is critical to create new and improved technologies, communications, and procedures to be used by air traffic controllers and pilots. The National Aeronautics and Space Administration (NASA), the FAA, and the aviation industry are working together to improve the efficiency of the National Airspace System and to reduce the cost of operating in it, in several ways, one of which is through the creation of the Next Generation Air Transportation System (NextGen). NextGen is intended to provide airspace users with more precise information about traffic, routing, and weather, as well as improve the control mechanisms within the air traffic system. NASA's Air Traffic Management Technology Demonstration-1 (ATD-1) Project is designed to contribute to the goals of NextGen, and accomplishes this by integrating three NASA technologies to enable fuel-efficient arrival operations into high-density airports. The three NASA technologies and procedures combined in the ATD-1 concept are advanced arrival scheduling, controller decision support tools, and aircraft avionics that enable multiple time-deconflicted and fuel-efficient arrival streams in high-density terminal airspace.

11. A note on the path interval distance.

PubMed

Coons, Jane Ivy; Rusinko, Joseph

2016-06-01

The path interval distance accounts for global congruence between locally incongruent trees. We show that the path interval distance provides a lower bound for the nearest neighbor interchange distance. In contrast to the Robinson-Foulds distance, random pairs of trees are unlikely to be maximally distant from one another under the path interval distance. These features indicate that the path interval distance should play a role in phylogenomics where the comparison of trees on a fixed set of taxa is becoming increasingly important. PMID:27040521

12. Heat pipe transient response approximation

Reid, Robert S.

2002-01-01

A simple and concise routine that approximates the response of an alkali metal heat pipe to changes in evaporator heat transfer rate is described. This analytically based routine is compared with data from a cylindrical heat pipe with a crescent-annular wick that undergoes gradual (quasi-steady) transitions through the viscous and condenser boundary heat transfer limits. The sonic heat transfer limit can also be incorporated into this routine for heat pipes with more closely coupled condensers. The advantages and obvious limitations of this approach are discussed. For reference, a source code listing for the approximation appears at the end of this paper.

13. JIMWLK evolution in the Gaussian approximation

Iancu, E.; Triantafyllopoulos, D. N.

2012-04-01

We demonstrate that the Balitsky-JIMWLK equations describing the high-energy evolution of the n-point functions of the Wilson lines (the QCD scattering amplitudes in the eikonal approximation) admit a controlled mean field approximation of the Gaussian type, for any value of the number of colors N_c. This approximation is strictly correct in the weak scattering regime at relatively large transverse momenta, where it reproduces the BFKL dynamics, and in the strong scattering regime deep at saturation, where it properly describes the evolution of the scattering amplitudes towards the respective black disk limits. The approximation scheme is fully specified by giving the 2-point function (the S-matrix for a color dipole), which in turn can be related to the solution to the Balitsky-Kovchegov equation, including at finite N_c. Any higher n-point function with n ≥ 4 can be computed in terms of the dipole S-matrix by solving a closed system of evolution equations (a simplified version of the respective Balitsky-JIMWLK equations) which are local in the transverse coordinates. For simple configurations of the projectile in the transverse plane, our new results for the 4-point and the 6-point functions coincide with the high-energy extrapolations of the respective results in the McLerran-Venugopalan model. One cornerstone of our construction is a symmetry property of the JIMWLK evolution, that we notice here for the first time: the fact that, with increasing energy, a hadron is expanding its longitudinal support symmetrically around the light-cone. This corresponds to invariance under time reversal for the scattering amplitudes.

14. Pythagorean Approximations and Continued Fractions

ERIC Educational Resources Information Center

Peralta, Javier

2008-01-01

In this article, we will show that the Pythagorean approximations of [the square root of] 2 coincide with those achieved in the 16th century by means of continued fractions. Assuming this fact and the known relation that connects the Fibonacci sequence with the golden section, we shall establish a procedure to obtain sequences of rational numbers…
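As a hedged illustration (not taken from the article itself), the classical side-and-diagonal recurrence (p, q) → (p + 2q, p + q), starting from (1, 1), generates exactly the continued-fraction convergents of √2; the function name below is illustrative:

```python
from fractions import Fraction

def sqrt2_convergents(n):
    """Side-and-diagonal recurrence: (p, q) -> (p + 2q, p + q).

    The successive ratios p/q are the continued-fraction
    convergents of sqrt(2): 1/1, 3/2, 7/5, 17/12, 41/29, ...
    """
    p, q = 1, 1
    out = []
    for _ in range(n):
        out.append(Fraction(p, q))
        p, q = p + 2 * q, p + q
    return out

for c in sqrt2_convergents(5):
    # The error shrinks by roughly a factor (1 + sqrt(2))^2 ~ 5.8 per step.
    print(c, float(c))
```

The fifth convergent, 41/29, already agrees with √2 to within about 4 × 10⁻⁴.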

15. Application of Interval Predictor Models to Space Radiation Shielding

NASA Technical Reports Server (NTRS)

Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.; Norman, Ryan B.; Blattnig, Steve R.

2016-01-01

This paper develops techniques for predicting the uncertainty range of an output variable given input-output data. These models are called Interval Predictor Models (IPM) because they yield an interval-valued function of the input. This paper develops IPMs having a radial basis structure. This structure enables the formal description of (i) the uncertainty in the model's parameters, (ii) the predicted output interval, and (iii) the probability that a future observation would fall in such an interval. In contrast to other metamodeling techniques, this probabilistic certificate of correctness does not require making any assumptions on the structure of the mechanism from which data are drawn. Optimization-based strategies for calculating IPMs having minimal spread while containing all the data are developed. Constraints for bounding the minimum interval spread over the continuum of inputs, regulating the IPM's variation/oscillation, and centering its spread about a target point, are used to prevent data overfitting. Furthermore, we develop an approach for using expert opinion during extrapolation. This metamodeling technique is illustrated using a radiation shielding application for space exploration. In this application, we use IPMs to describe the error incurred in predicting the flux of particles resulting from the interaction between a high-energy incident beam and a target.
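As a minimal sketch of the interval-predictor idea (not the radial-basis IPM of the paper), one can fit a least-squares center line and take the smallest constant half-width whose band contains every data point; all names and data below are illustrative:

```python
def fit_interval_model(xs, ys):
    """Least-squares line for the center, then the smallest constant
    half-width w such that [f(x) - w, f(x) + w] contains every data point."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    w = max(abs(y - (a + b * x)) for x, y in zip(xs, ys))
    return lambda x: (a + b * x - w, a + b * x + w)

# Illustrative data: the returned band covers all four observations.
model = fit_interval_model([0, 1, 2, 3], [0.1, 0.9, 2.2, 2.8])
lo, hi = model(1.5)
```

The paper's optimization-based IPMs instead minimize the (possibly input-dependent) spread directly; this constant-width band is only the simplest member of that family.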

16. Chemical Laws, Idealization and Approximation

Tobin, Emma

2013-07-01

This paper examines the notion of laws in chemistry. Vihalemm ( Found Chem 5(1):7-22, 2003) argues that the laws of chemistry are fundamentally the same as the laws of physics: they are all ceteris paribus laws which are true "in ideal conditions". In contrast, Scerri (2000) contends that the laws of chemistry are fundamentally different to the laws of physics, because they involve approximations. Christie ( Stud Hist Philos Sci 25:613-629, 1994) and Christie and Christie ( Of minds and molecules. Oxford University Press, New York, pp. 34-50, 2000) agree that the laws of chemistry are operationally different to the laws of physics, but claim that the distinction between exact and approximate laws is too simplistic to taxonomise them. Approximations in chemistry involve diverse kinds of activity, and often what counts as a scientific law in chemistry is dictated by the context of its use in scientific practice. This paper addresses the question of what makes chemical laws distinctive independently of the separate question as to how they are related to the laws of physics. From an analysis of some candidate ceteris paribus laws in chemistry, this paper argues that there are two distinct kinds of ceteris paribus laws in chemistry: idealized and approximate chemical laws. Thus, while Christie ( Stud Hist Philos Sci 25:613-629, 1994) and Christie and Christie ( Of minds and molecules. Oxford University Press, New York, pp. 34-50, 2000) are correct to point out that the candidate generalisations in chemistry are diverse and heterogeneous, a distinction between idealizations and approximations can nevertheless be used to successfully taxonomise them.

17. Interval and Contour Processing in Autism

ERIC Educational Resources Information Center

Heaton, Pamela

2005-01-01

High functioning children with autism and age and intelligence matched controls participated in experiments testing perception of pitch intervals and musical contours. The finding from the interval study showed superior detection of pitch direction over small pitch distances in the autism group. On the test of contour discrimination no group…

18. SINGLE-INTERVAL GAS PERMEABILITY ESTIMATION

EPA Science Inventory

Single-interval, steady-state gas permeability testing requires estimation of pressure at a screened interval, which in turn requires measurement of friction factors as a function of mass flow rate. Friction factors can be obtained by injecting air through a length of pipe...

19. Product-State Approximations to Quantum States

Brandão, Fernando G. S. L.; Harrow, Aram W.

2016-02-01

We show that for any many-body quantum state there exists an unentangled quantum state such that most of the two-body reduced density matrices are close to those of the original state. This is a statement about the monogamy of entanglement, which cannot be shared without limit in the same way as classical correlation. Our main application is to Hamiltonians that are sums of two-body terms. For such Hamiltonians we show that there exist product states with energy that is close to the ground-state energy whenever the interaction graph of the Hamiltonian has high degree. This proves the validity of mean-field theory and gives an explicitly bounded approximation error. If we allow states that are entangled within small clusters of systems but product across clusters then good approximations exist when the Hamiltonian satisfies one or more of the following properties: (1) high degree, (2) small expansion, or (3) a ground state where the blocks in the partition have sublinear entanglement. Previously this was known only in the case of small expansion or in the regime where the entanglement was close to zero. Our approximations allow an extensive error in energy, which is the scale considered by the quantum PCP (probabilistically checkable proof) and NLTS (no low-energy trivial-state) conjectures. Thus our results put restrictions on the possible Hamiltonians that could be used for a possible proof of the qPCP or NLTS conjectures. By contrast the classical PCP constructions are often based on constraint graphs with high degree. Likewise, we show that the parallel repetition that is possible with classical constraint satisfaction problems is not possible for quantum Hamiltonians, unless qPCP is false. The main technical tool behind our results is a collection of new classical and quantum de Finetti theorems which do not make any symmetry assumptions on the underlying states.

20. Interval colorectal carcinoma: An unsolved debate.

PubMed

Benedict, Mark; Galvao Neto, Antonio; Zhang, Xuchen

2015-12-01

Colorectal carcinoma (CRC), as the third most common new cancer diagnosis, poses a significant health risk to the population. Interval CRCs are those that appear after a negative screening test or examination. The development of interval CRCs has been shown to be multifactorial: location of exam-academic institution versus community hospital, experience of the endoscopist, quality of the procedure, age of the patient, flat versus polypoid neoplasia, genetics, hereditary gastrointestinal neoplasia, and most significantly missed or incompletely excised lesions. The rate of interval CRCs has decreased in the last decade, which has been ascribed to an increased understanding of interval disease and technological advances in the screening of high risk individuals. In this article, we aim to review the literature with regard to the multifactorial nature of interval CRCs and provide the most recent developments regarding this important gastrointestinal entity. PMID:26668498

1. Constructing Confidence Intervals for Qtl Location

PubMed Central

Mangin, B.; Goffinet, B.; Rebai, A.

1994-01-01

We describe a method for constructing the confidence interval of the QTL location parameter. This method is developed in the local asymptotic framework, leading to a linear model at each position of the putative QTL. The idea is to construct a likelihood ratio test, using statistics whose asymptotic distribution does not depend on the nuisance parameters and in particular on the effect of the QTL. We show theoretical properties of the confidence interval built with this test, and compare it with the classical confidence interval using simulations. We show in particular, that our confidence interval has the correct probability of containing the true map location of the QTL, for almost all QTLs, whereas the classical confidence interval can be very biased for QTLs having small effect. PMID:7896108

2. Interval colorectal carcinoma: An unsolved debate

PubMed Central

Benedict, Mark; Neto, Antonio Galvao; Zhang, Xuchen

2015-01-01

Colorectal carcinoma (CRC), as the third most common new cancer diagnosis, poses a significant health risk to the population. Interval CRCs are those that appear after a negative screening test or examination. The development of interval CRCs has been shown to be multifactorial: location of exam-academic institution versus community hospital, experience of the endoscopist, quality of the procedure, age of the patient, flat versus polypoid neoplasia, genetics, hereditary gastrointestinal neoplasia, and most significantly missed or incompletely excised lesions. The rate of interval CRCs has decreased in the last decade, which has been ascribed to an increased understanding of interval disease and technological advances in the screening of high risk individuals. In this article, we aim to review the literature with regard to the multifactorial nature of interval CRCs and provide the most recent developments regarding this important gastrointestinal entity. PMID:26668498

3. Exercise-induced hypoalgesia - interval versus continuous mode.

PubMed

Kodesh, Einat; Weissman-Fogel, Irit

2014-07-01

Aerobic exercise at approximately 70% of maximal aerobic capacity moderately reduces pain sensitivity and attenuates pain, even after a single session. If the analgesic effects depend on exercise intensity, then high-intensity interval exercise at 85% of maximal aerobic capacity should further reduce pain. The aim of this study was to explore the exercise-induced analgesic effects of high-intensity interval aerobic exercise and to compare them with the analgesic effects of moderate continuous aerobic exercise. Twenty-nine young untrained healthy males were randomly assigned to aerobic-continuous (70% heart rate reserve (HRR)) and interval (4 × 4 min at 85% HRR and 2 min at 60% HRR between cycles) exercise modes, each lasting 30 min. Psychophysical pain tests, pressure and heat pain thresholds (HPT), and tonic heat pain (THP) were conducted before and after exercise sessions. Repeated measures ANOVA was used for data analysis. HPT increased (p = 0.056) and THP decreased (p = 0.013) following exercise unrelated to exercise type. However, the main time effect (pre-/postexercise) was a trend of increased HPT (45.6 ± 1.9 °C to 46.2 ± 1.8 °C; p = 0.082) and a significant reduction in THP (from 50.7 ± 25 to 45.9 ± 25.4 numeric pain scale; p = 0.043) following interval exercise. No significant change was found for the pressure pain threshold following either exercise type. In conclusion, interval exercise (85% HRR) has analgesic effects on experimental pain perception. This, in addition to its cardiovascular, muscular, and metabolic advantages may promote its inclusion in pain management programs. PMID:24773287

4. Analysing organic transistors based on interface approximation

SciTech Connect

Akiyama, Yuto; Mori, Takehiko

2014-01-15

Temperature-dependent characteristics of organic transistors are analysed thoroughly using interface approximation. In contrast to amorphous silicon transistors, it is characteristic of organic transistors that the accumulation layer is concentrated on the first monolayer, and it is appropriate to consider interface charge rather than band bending. On the basis of this model, observed characteristics of hexamethylenetetrathiafulvalene (HMTTF) and dibenzotetrathiafulvalene (DBTTF) transistors with various surface treatments are analysed, and the trap distribution is extracted. In turn, starting from a simple exponential distribution, we can reproduce the temperature-dependent transistor characteristics as well as the gate voltage dependence of the activation energy, so we can investigate various aspects of organic transistors self-consistently under the interface approximation. Small deviation from such an ideal transistor operation is discussed assuming the presence of an energetically discrete trap level, which leads to a hump in the transfer characteristics. The contact resistance is estimated by measuring the transfer characteristics up to the linear region.

5. Uncertainty relations for approximation and estimation

Lee, Jaeha; Tsutsui, Izumi

2016-05-01

We present a versatile inequality of uncertainty relations which are useful when one approximates an observable and/or estimates a physical parameter based on the measurement of another observable. It is shown that the optimal choice for proxy functions used for the approximation is given by Aharonov's weak value, which also determines the classical Fisher information in parameter estimation, turning our inequality into the genuine Cramér-Rao inequality. Since the standard form of the uncertainty relation arises as a special case of our inequality, and since the parameter estimation is available as well, our inequality can treat both the position-momentum and the time-energy relations in one framework albeit handled differently.

6. One sign ion mobile approximation

Barbero, G.

2011-12-01

The electrical response of an electrolytic cell to an external excitation is discussed in the simple case where only one group of positive and negative ions is present. The particular case where the diffusion coefficient of the negative ions, Dm, is very small with respect to that of the positive ions, Dp, is considered. In this framework, it is discussed under what conditions the one mobile ion approximation, in which the negative ions are assumed fixed, works well. The analysis is performed by assuming that the external excitation is sinusoidal with circular frequency ω, as that used in the impedance spectroscopy technique. In this framework, we show that there exists a circular frequency, ω*, such that for ω > ω*, the one mobile ion approximation works well. We also show that for Dm ≪ Dp, ω* is independent of Dm.

7. Testing the frozen flow approximation

NASA Technical Reports Server (NTRS)

Lucchin, Francesco; Matarrese, Sabino; Melott, Adrian L.; Moscardini, Lauro

1993-01-01

We investigate the accuracy of the frozen-flow approximation (FFA), recently proposed by Matarrese, et al. (1992), for following the nonlinear evolution of cosmological density fluctuations under gravitational instability. We compare a number of statistics between results of the FFA and N-body simulations, including those used by Melott, Pellman & Shandarin (1993) to test the Zel'dovich approximation. The FFA performs reasonably well in a statistical sense, e.g. in reproducing the counts-in-cells distribution at small scales, but it does poorly in the cross-correlation with N-body results, which means it is generally not moving mass to the right place, especially in models with high small-scale power.

8. Numbers whose prime divisors lie in special intervals

Changa, M. E.

2003-08-01

We study the distribution of numbers whose prime divisors lie in special intervals. Various multiplicative functions are summed over these numbers. For these summatory functions we obtain asymptotic formulae whose principal term is a sum of an increasing number of summands. We show that this sum can be approximated, up to the first rejected term, by a finite number of its summands. We also discuss relations on the parameters of the problem under which the principal term of such asymptotic formulae becomes a finite sum.
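The objects being counted can be made concrete with a brute-force sketch (illustrative details, not the paper's asymptotic machinery): enumerate the integers up to N whose prime divisors all fall in prescribed intervals.

```python
def prime_factors(n):
    """Distinct prime divisors of n, by trial division."""
    fs = set()
    d = 2
    while d * d <= n:
        while n % d == 0:
            fs.add(d)
            n //= d
        d += 1
    if n > 1:
        fs.add(n)
    return fs

def count_with_divisors_in(N, intervals):
    """Count the integers 2..N whose every prime divisor lies in one of the intervals."""
    ok = lambda p: any(lo <= p <= hi for lo, hi in intervals)
    return sum(1 for n in range(2, N + 1) if all(ok(p) for p in prime_factors(n)))

# With the single interval [2, 3] this counts the 3-smooth numbers:
# 2, 3, 4, 6, 8, 9, 12, 16, 18 up to 20, i.e. nine of them.
print(count_with_divisors_in(20, [(2, 3)]))  # -> 9
```

The paper's summatory functions generalize this count by weighting each such number with a multiplicative function.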

9. Evaluation of confidence intervals for a steady-state leaky aquifer model

USGS Publications Warehouse

Christensen, S.; Cooley, R.L.

1999-01-01

The fact that dependent variables of groundwater models are generally nonlinear functions of model parameters is shown to be a potentially significant factor in calculating accurate confidence intervals for both model parameters and functions of the parameters, such as the values of dependent variables calculated by the model. The Lagrangian method of Vecchia and Cooley [Vecchia, A.V. and Cooley, R.L., Water Resources Research, 1987, 23(7), 1237-1250] was used to calculate nonlinear Scheffe-type confidence intervals for the parameters and the simulated heads of a steady-state groundwater flow model covering 450 km2 of a leaky aquifer. The nonlinear confidence intervals are compared to corresponding linear intervals. As suggested by the significant nonlinearity of the regression model, linear confidence intervals are often not accurate. The commonly made assumption that widths of linear confidence intervals always underestimate the actual (nonlinear) widths was not correct. Results show that nonlinear effects can cause the nonlinear intervals to be asymmetric and either larger or smaller than the linear approximations. Prior information on transmissivities helps reduce the size of the confidence intervals, with the most notable effects occurring for the parameters on which there is prior information and for head values in parameter zones for which there is prior information on the parameters.

10. CONFIDENCE INTERVALS AND STANDARD ERROR INTERVALS: WHAT DO THEY MEAN IN TERMS OF STATISTICAL SIGNIFICANCE?

Technology Transfer Automated Retrieval System (TEKTRAN)

We investigate the use of confidence intervals and standard error intervals to draw conclusions regarding tests of hypotheses about normal population means. Mathematical expressions and algebraic manipulations are given, and computer simulations are performed to assess the usefulness of confidence ...
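The distinction the record points at can be made concrete with a hedged numerical sketch (all numbers assumed): two estimates whose 95% confidence intervals overlap can still differ significantly, because the standard error of a difference adds in quadrature rather than linearly.

```python
import math

# Hypothetical estimates: means and standard errors (assumed numbers).
m1, se1 = 0.0, 0.4
m2, se2 = 1.3, 0.4
z = 1.96  # 95% normal critical value

ci1 = (m1 - z * se1, m1 + z * se1)   # (-0.784, 0.784)
ci2 = (m2 - z * se2, m2 + z * se2)   # ( 0.516, 2.084)
overlap = ci1[1] > ci2[0]            # the two 95% CIs overlap

se_diff = math.hypot(se1, se2)       # SEs add in quadrature: sqrt(se1^2 + se2^2)
z_stat = (m2 - m1) / se_diff         # about 2.30
significant = z_stat > z             # yet the difference is significant at 5%
```

So "overlapping confidence intervals" is a conservative, and sometimes misleading, stand-in for a formal test of the difference.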

11. Approximate Counting of Graphical Realizations

PubMed Central

2015-01-01

In 1999 Kannan, Tetali and Vempala proposed a MCMC method to uniformly sample all possible realizations of a given graphical degree sequence and conjectured its rapidly mixing nature. Recently their conjecture was proved affirmative for regular graphs (by Cooper, Dyer and Greenhill, 2007), for regular directed graphs (by Greenhill, 2011) and for half-regular bipartite graphs (by Miklós, Erdős and Soukup, 2013). Several heuristics on counting the number of possible realizations exist (via sampling processes), and while they work well in practice, so far no approximation guarantees exist for such an approach. This paper is the first to develop a method for counting realizations with provable approximation guarantee. In fact, we solve a slightly more general problem; besides the graphical degree sequence a small set of forbidden edges is also given. We show that for the general problem (which contains the Greenhill problem and the Miklós, Erdős and Soukup problem as special cases) the derived MCMC process is rapidly mixing. Further, we show that this new problem is self-reducible therefore it provides a fully polynomial randomized approximation scheme (a.k.a. FPRAS) for counting of all realizations. PMID:26161994

12. Computer Experiments for Function Approximations

SciTech Connect

Chang, A; Izmailov, I; Rizzo, S; Wynter, S; Alexandrov, O; Tong, C

2007-10-15

This research project falls in the domain of response surface methodology, which seeks cost-effective ways to accurately fit an approximate function to experimental data. Modeling and computer simulation are essential tools in modern science and engineering. A computer simulation can be viewed as a function that receives input from a given parameter space and produces an output. Running the simulation repeatedly amounts to an equivalent number of function evaluations, and for complex models, such function evaluations can be very time-consuming. It is then of paramount importance to intelligently choose a relatively small set of sample points in the parameter space at which to evaluate the given function, and then use this information to construct a surrogate function that is close to the original function and takes little time to evaluate. This study was divided into two parts. The first part consisted of comparing four sampling methods and two function approximation methods in terms of efficiency and accuracy for simple test functions. The sampling methods used were Monte Carlo, Quasi-Random LPτ, Maximin Latin Hypercubes, and Orthogonal-Array-Based Latin Hypercubes. The function approximation methods utilized were Multivariate Adaptive Regression Splines (MARS) and Support Vector Machines (SVM). The second part of the study concerned adaptive sampling methods with a focus on creating useful sets of sample points specifically for monotonic functions, functions with a single minimum and functions with a bounded first derivative.

13. Approximate reasoning using terminological models

NASA Technical Reports Server (NTRS)

Yen, John; Vaidya, Nitin

1992-01-01

Term Subsumption Systems (TSS) form a knowledge-representation scheme in AI that can express the defining characteristics of concepts through a formal language that has a well-defined semantics and incorporates a reasoning mechanism that can deduce whether one concept subsumes another. However, TSS's have very limited ability to deal with the issue of uncertainty in knowledge bases. The objective of this research is to address issues in combining approximate reasoning with term subsumption systems. To do this, we have extended an existing AI architecture (CLASP) that is built on the top of a term subsumption system (LOOM). First, the assertional component of LOOM has been extended for asserting and representing uncertain propositions. Second, we have extended the pattern matcher of CLASP for plausible rule-based inferences. Third, an approximate reasoning model has been added to facilitate various kinds of approximate reasoning. And finally, the issue of inconsistency in truth values due to inheritance is addressed using justification of those values. This architecture enhances the reasoning capabilities of expert systems by providing support for reasoning under uncertainty using knowledge captured in TSS. Also, as definitional knowledge is explicit and separate from heuristic knowledge for plausible inferences, the maintainability of expert systems could be improved.

14. Approximate Counting of Graphical Realizations.

PubMed

Erdős, Péter L; Kiss, Sándor Z; Miklós, István; Soukup, Lajos

2015-01-01

In 1999 Kannan, Tetali and Vempala proposed a MCMC method to uniformly sample all possible realizations of a given graphical degree sequence and conjectured its rapidly mixing nature. Recently their conjecture was proved affirmative for regular graphs (by Cooper, Dyer and Greenhill, 2007), for regular directed graphs (by Greenhill, 2011) and for half-regular bipartite graphs (by Miklós, Erdős and Soukup, 2013). Several heuristics on counting the number of possible realizations exist (via sampling processes), and while they work well in practice, so far no approximation guarantees exist for such an approach. This paper is the first to develop a method for counting realizations with provable approximation guarantee. In fact, we solve a slightly more general problem; besides the graphical degree sequence a small set of forbidden edges is also given. We show that for the general problem (which contains the Greenhill problem and the Miklós, Erdős and Soukup problem as special cases) the derived MCMC process is rapidly mixing. Further, we show that this new problem is self-reducible therefore it provides a fully polynomial randomized approximation scheme (a.k.a. FPRAS) for counting of all realizations. PMID:26161994

15. Counting independent sets using the Bethe approximation

SciTech Connect

Chertkov, Michael; Chandrasekaran, V; Gamarmik, D; Shah, D; Sin, J

2009-01-01

The authors consider the problem of counting the number of independent sets, or the partition function of a hard-core model, in a graph. The problem in general is computationally hard (#P-hard). They study the quality of the approximation provided by the Bethe free energy. Belief propagation (BP) is a message-passing algorithm that can be used to compute fixed points of the Bethe approximation; however, BP is not always guaranteed to converge. As the first result, they propose a simple message-passing algorithm that converges to a BP fixed point for any graph. They find that their algorithm converges within a multiplicative error 1 + ε of a fixed point in O(n^2 ε^-4 log^3(n ε^-1)) iterations for any bounded-degree graph of n nodes. In a nutshell, the algorithm can be thought of as a modification of BP with 'time-varying' message-passing. Next, they analyze the resulting error in the number of independent sets provided by such a fixed point of the Bethe approximation. Using the recently developed loop calculus approach by Chertkov and Chernyak, they establish that for any bounded-degree graph with large enough girth, the error is O(n^-γ) for some γ > 0. As an application, they find that for a random 3-regular graph, the Bethe approximation of the log-partition function (the log of the number of independent sets) is within o(1) of the correct log-partition function - this is quite surprising, as previous physics-based predictions expected an error of o(n). In sum, their results provide a systematic way to find Bethe fixed points for any graph quickly and allow for estimating the error in the Bethe approximation using novel combinatorial techniques.
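For orientation (this is not the authors' algorithm), a brute-force counter gives exact ground truth on small graphs against which a Bethe/BP estimate could be checked; on trees the Bethe approximation is exact, so the path example below is the kind of sanity check one would start from:

```python
def count_independent_sets(n, edges):
    """Exact count by bitmask enumeration; feasible only for small n,
    but useful as ground truth when validating an approximate counter."""
    count = 0
    for mask in range(1 << n):
        # An independent set selects no edge with both endpoints.
        if all(not (mask >> u & 1 and mask >> v & 1) for u, v in edges):
            count += 1
    return count

# Path on 3 vertices: {}, {0}, {1}, {2}, {0,2} -> 5 independent sets.
path3 = count_independent_sets(3, [(0, 1), (1, 2)])
# 4-cycle: empty set, 4 singletons, {0,2}, {1,3} -> 7 independent sets.
cycle4 = count_independent_sets(4, [(0, 1), (1, 2), (2, 3), (3, 0)])
```

The enumeration is O(2^n), which is exactly why a provably convergent Bethe/BP estimate with bounded error, as in the record above, is of interest.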

16. Facies and reservoir characterization of an upper Smackover interval, East Barnett Field, Conecuh County, Alabama

SciTech Connect

Bergan, G.R. ); Hearne, J.H. )

1990-09-01

Excellent production from an upper Smackover (Jurassic) ooid grainstone was established in April 1988 by Coastal Oil and Gas Corporation with the discovery of the East Barnett field in Conecuh County, Alabama. A structure map on the top of the Smackover Formation and net porosity isopach map of the producing intervals show that the trapping mechanism at the field has both structural and stratigraphic components. Two diamond cores were cut from 13,580 to 13,701 ft, beginning approximately 20 ft below the top of the Smackover. Two shallowing-upward sequences are identified in the cores. The first sequence starts at the base of the cored interval and is characterized by thick, subtidal algal boundstones capped by a collapse breccia facies. This entire sequence was deposited in the shallow subtidal to lower intertidal zone. Subsequent lowering of sea level exposed the top portion of the boundstones to meteoric or mixing zone waters, creating the diagenetic, collapse breccia facies. The anhydrite associated with the breccia also indicates surface exposure. The second sequence begins with algal boundstones that sharply overlie the collapse breccia facies of the previous sequence. These boundstones grade upward into high-energy, cross-bedded ooid beach and oncoidal, peloidal beach shoreface deposits. Proximity of the overlying Buckner anhydrite, representing a probable sabkha system, favors a beach or a very nearshore shoal interpretation for the ooid grainstones. The ooid grainstone facies, which is the primary producing interval, has measured porosity values ranging from 5.3% to 17.8% and averaging 11.0%. Measured permeability values range from 0.04 md to 701 md and average 161.63 md. These high porosity and permeability values result from abundant primary intergranular pore space, as well as secondary pore space created by dolomitization and dissolution of framework grains.

17. Physiology and its Importance for Reference Intervals

PubMed Central

Sikaris, Kenneth A

2014-01-01

Reference intervals are ideally defined on apparently healthy individuals and should be distinguished from clinical decision limits that are derived from known diseased patients. Knowledge of physiological changes is a prerequisite for understanding and developing reference intervals. Reference intervals may differ for various subpopulations because of differences in their physiology, most obviously between men and women, but also in childhood, pregnancy and the elderly. Changes in laboratory measurements may be due to various physiological factors starting at birth including weaning, the active toddler, immunological learning, puberty, pregnancy, menopause and ageing. The need to partition reference intervals is required when there are significant physiological changes that need to be recognised. It is important that laboratorians are aware of these changes otherwise reference intervals that attempt to cover a widened inter-individual variability may lose their usefulness. It is virtually impossible for any laboratory to directly develop reference intervals for each of the physiological changes that are currently known, however indirect techniques can be used to develop or validate reference intervals in some difficult situations such as those for children. Physiology describes our life’s journey, and it is only when we are familiar with that journey that we can appreciate a pathological departure. PMID:24659833

18. Analytic approximate radiation effects due to Bremsstrahlung

SciTech Connect

Ben-Zvi I.

2012-02-01

The purpose of this note is to provide analytic approximate expressions that can give quick estimates of the various effects of the Bremsstrahlung radiation produced by relatively low energy electrons, such as the dumping of the beam into the beam stop at the ERL or field emission in superconducting cavities. The intent is not to replace a dependable calculation or, better yet, a measurement under real conditions, but to provide a quick, approximate estimate for guidance purposes only. These effects include dose to personnel, ozone generation in the air volume exposed to the radiation, hydrogen generation in the beam dump water cooling system and radiation damage to nearby magnets. These expressions can be used for other purposes, but one should note that the electron beam energy range is limited. In these calculations the good range is from about 0.5 MeV to 10 MeV. To help in the application of this note, calculations are presented as a worked-out example for the beam dump of the R&D Energy Recovery Linac.

19. Interval Estimates of Multivariate Effect Sizes: Coverage and Interval Width Estimates under Variance Heterogeneity and Nonnormality

ERIC Educational Resources Information Center

Hess, Melinda R.; Hogarty, Kristine Y.; Ferron, John M.; Kromrey, Jeffrey D.

2007-01-01

Monte Carlo methods were used to examine techniques for constructing confidence intervals around multivariate effect sizes. Using interval inversion and bootstrapping methods, confidence intervals were constructed around the standard estimate of Mahalanobis distance (D[superscript 2]), two bias-adjusted estimates of D[superscript 2], and Huberty's…

20. Simple table for estimating confidence interval of discrepancy frequencies in microbiological safety evaluation.

PubMed

Lamy, Brigitte; Delignette-Muller, Marie Laure; Baty, Florent; Carret, Gerard

2004-01-01

We provide a simple tool to determine the confidence interval (CI) of discrepancy frequencies in microbiology validation studies, such as the technical accuracy of a qualitative test result. This tool enables one to determine an exact confidence interval (binomial CI) from an observed frequency when the normal approximation is inadequate, that is, in the case of rare events. This tool has daily applications in microbiology, and we present an example of its application to the evaluation of antimicrobial susceptibility systems. PMID:14706759
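The exact binomial (Clopper-Pearson) interval the abstract refers to can be computed from binomial tail probabilities alone. A minimal sketch in Python, using bisection rather than any statistics library (the function names are ours, not from the paper):

```python
import math

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(math.comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k + 1))

def clopper_pearson(k, n, alpha=0.05, tol=1e-9):
    """Exact (Clopper-Pearson) CI for a proportion, k successes in n trials."""
    def solve(f, lo, hi):
        # bisection: f is True at lo, False at hi; converge to the boundary
        while hi - lo > tol:
            mid = (lo + hi) / 2
            if f(mid):
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2

    # lower bound: largest p with P(X >= k | p) <= alpha/2
    lower = 0.0 if k == 0 else solve(
        lambda p: 1 - binom_cdf(k - 1, n, p) <= alpha / 2, 0.0, 1.0)
    # upper bound: smallest p with P(X <= k | p) <= alpha/2
    upper = 1.0 if k == n else solve(
        lambda p: binom_cdf(k, n, p) > alpha / 2, 0.0, 1.0)
    return lower, upper
```

For the rare-event case the abstract highlights, zero discrepancies in 30 tests gives the interval [0, about 0.116], far from what a normal approximation would suggest.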

1. Microsatellite Instability Status of Interval Colorectal Cancers in a Korean Population

PubMed Central

Lee, Kil Woo; Park, Soo-Kyung; Yang, Hyo-Joon; Jung, Yoon Suk; Choi, Kyu Yong; Kim, Kyung Eun; Jung, Kyung Uk; Kim, Hyung Ook; Kim, Hungdai; Chun, Ho-Kyung; Park, Dong Il

2016-01-01

Background/Aims A subset of patients may develop colorectal cancer after a colonoscopy that is negative for malignancy. These missed or de novo lesions are referred to as interval cancers. The aim of this study was to determine whether interval colon cancers are more likely to result from the loss of function of mismatch repair genes than sporadic cancers and to demonstrate microsatellite instability (MSI). Methods Interval cancer was defined as a cancer that was diagnosed within 5 years of a negative colonoscopy. Among the patients who underwent an operation for colorectal cancer from January 2013 to December 2014, archived cancer specimens were evaluated for MSI by sequencing microsatellite loci. Results Of the 286 colon cancers diagnosed during the study period, 25 (8.7%) represented interval cancer. MSI was found in eight of the 25 patients (32%) that presented interval cancers compared with 22 of the 261 patients (8.4%) that presented sporadic cancers (p=0.002). In the multivariable logistic regression model, MSI was associated with interval cancer (OR, 3.91; 95% confidence interval, 1.38 to 11.05). Conclusions Interval cancers were approximately four times more likely to show high MSI than sporadic cancers. Our findings indicate that certain interval cancers may occur because of distinct biological features. PMID:27114419

2. Importance of QT interval in clinical practice.

PubMed

Ambhore, Anand; Teo, Swee-Guan; Bin Omar, Abdul Razakjr; Poh, Kian-Keong

2014-12-01

Long QT interval is an important finding that is often missed by electrocardiogram interpreters. Long QT syndrome (inherited and acquired) is a potentially lethal cardiac channelopathy that is frequently mistaken for epilepsy. We present a case of long QT syndrome with multiple cardiac arrests presenting as syncope and seizures. The long QTc interval was aggravated by hypomagnesaemia and drugs, including clarithromycin and levofloxacin. Multiple drugs can cause prolongation of the QT interval, and all physicians should bear this in mind when prescribing these drugs. PMID:25630313

3. Short Interval Leaf Movements of Cotton 12

PubMed Central

Miller, Charles S.

1975-01-01

Gossypium hirsutum L. cv. Lankart plants exhibited three different types of independent short interval leaf movements which were superimposed on the circadian movements. The different types were termed SIRV (short interval rhythmical vertical), SIHM (short interval horizontal movements), and SHAKE (short stroked SIRV). The 36-minute period SIRV movements occurred at higher moisture levels. The 176-minute period SIHM occurred at lower moisture levels and ceased as the stress increased. The SHAKE movements were initiated with further stresses. The SLEEP (circadian, diurnal) movements ceased with further stress. The last to cease just prior to permanent wilting were the SHAKE movements. PMID:16659123

4. [Bond selective chemistry beyond the adiabatic approximation

SciTech Connect

Butler, L.J.

1993-02-28

The adiabatic Born-Oppenheimer potential energy surface approximation is not valid for reactions of a wide variety of energetic materials and organic fuels; coupling between electronic states of reacting species plays a key role in determining the selectivity of the chemical reactions induced. This research program initially studies this coupling in (1) selective C-Br bond fission in 1,3-bromoiodopropane, (2) C-S:S-H bond fission branching in CH3SH, and (3) competition between bond fission channels and H2 elimination in CH3NH2.

5. Publication Bias in Meta-Analysis: Confidence Intervals for Rosenthal's Fail-Safe Number

PubMed Central

Fragkos, Konstantinos C.; Tsagris, Michail; Frangos, Christos C.

2014-01-01

The purpose of the present paper is to assess the efficacy of confidence intervals for Rosenthal's fail-safe number. Although Rosenthal's estimator is highly used by researchers, its statistical properties are largely unexplored. First of all, we developed statistical theory which allowed us to produce confidence intervals for Rosenthal's fail-safe number. This was produced by discerning whether the number of studies analysed in a meta-analysis is fixed or random. Each case produces different variance estimators. For a given number of studies and a given distribution, we provided five variance estimators. Confidence intervals are examined with a normal approximation and a nonparametric bootstrap. The accuracy of the different confidence interval estimates was then tested by methods of simulation under different distributional assumptions. The half normal distribution variance estimator has the best probability coverage. Finally, we provide a table of lower confidence intervals for Rosenthal's estimator. PMID:27437470
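Rosenthal's point estimate itself is simple to compute. As an illustration (our own sketch, not the paper's code), using the usual one-tailed alpha = 0.05, so z_alpha is approximately 1.645:

```python
def fail_safe_n(z_values, z_alpha=1.645):
    """Rosenthal's fail-safe number: the number of unseen null studies
    (averaging z = 0) needed to raise the combined one-tailed p above alpha.

    N_fs = (sum of z_i)^2 / z_alpha^2 - k,  for k observed studies.
    """
    k = len(z_values)
    s = sum(z_values)
    return s * s / (z_alpha * z_alpha) - k
```

For ten studies each with z = 2.0 this gives about 137.8 hypothetical null studies; it is the confidence interval around this number, not the number itself, that the paper develops.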

6. Inexact rough-interval two-stage stochastic programming for conjunctive water allocation problems.

PubMed

Lu, Hongwei; Huang, Guohe; He, Li

2009-10-01

An inexact rough-interval two-stage stochastic programming (IRTSP) method is developed for conjunctive water allocation problems. Rough intervals (RIs), as a particular case of rough sets, are introduced into the modeling framework to tackle dual-layer information provided by decision makers. Through embedding upper and lower approximation intervals, rough intervals are capable of reflecting complex parameters with the most reliable and possible variation ranges being identified. An interactive solution method is also derived. A conjunctive water-allocation system is then structured for characterizing the proposed model. Solutions indicate a detailed optimal allocation scheme with a rough-interval form; a total of [[1048.83, 2078.29]:[1482.26, 2020.60

8. Improved non-approximability results

SciTech Connect

Bellare, M.; Sudan, M.

1994-12-31

We indicate strong non-approximability factors for central problems: N^(1/4) for Max Clique; N^(1/10) for Chromatic Number; and 66/65 for Max 3SAT. Underlying the Max Clique result is a proof system in which the verifier examines only three "free bits" to attain an error of 1/2. Underlying the Chromatic Number result is a reduction from Max Clique which is more efficient than previous ones.

9. Quantum tunneling beyond semiclassical approximation

Banerjee, Rabin; Ranjan Majhi, Bibhas

2008-06-01

Hawking radiation as tunneling by Hamilton-Jacobi method beyond semiclassical approximation is analysed. We compute all quantum corrections in the single particle action revealing that these are proportional to the usual semiclassical contribution. We show that a simple choice of the proportionality constants reproduces the one loop back reaction effect in the spacetime, found by conformal field theory methods, which modifies the Hawking temperature of the black hole. Using the law of black hole mechanics we give the corrections to the Bekenstein-Hawking area law following from the modified Hawking temperature. Some examples are explicitly worked out.

10. The structural physical approximation conjecture

Shultz, Fred

2016-01-01

It was conjectured that the structural physical approximation (SPA) of an optimal entanglement witness is separable (or equivalently, that the SPA of an optimal positive map is entanglement breaking). This conjecture was disproved, first for indecomposable maps and more recently for decomposable maps. The arguments in both cases are sketched along with important related results. This review includes background material on topics including entanglement witnesses, optimality, duality of cones, decomposability, and the statement and motivation for the SPA conjecture so that it should be accessible for a broad audience.

11. Intact Interval Timing in Circadian CLOCK Mutants

PubMed Central

Cordes, Sara; Gallistel, C. R.

2008-01-01

While progress has been made in determining the molecular basis for the circadian clock, the mechanism by which mammalian brains time intervals measured in seconds to minutes remains a mystery. An obvious question is whether the interval timing mechanism shares molecular machinery with the circadian timing mechanism. In the current study, we trained circadian CLOCK +/− and −/− mutant male mice in a peak-interval procedure with 10 and 20-s criteria. The mutant mice were more active than their wild-type littermates, but there were no reliable deficits in the accuracy or precision of their timing as compared with wild-type littermates. This suggests that expression of the CLOCK protein is not necessary for normal interval timing. PMID:18602902

12. Calibration intervals at Bendix Kansas City

SciTech Connect

James, R.T.

1980-01-01

The calibration interval evaluation methods and control in each calibrating department of the Bendix Corp., Kansas City Division is described, and a more detailed description of those employed in metrology is provided.

13. Combination of structural reliability and interval analysis

Qiu, Zhiping; Yang, Di; Elishakoff, Isaac

2008-02-01

In engineering applications, probabilistic reliability theory appears at present to be the most important method; however, in many cases precise probabilistic reliability theory cannot be considered an adequate and credible model of the real state of affairs. In this paper, we develop a hybrid of probabilistic and non-probabilistic reliability theory, which describes the structural uncertain parameters as interval variables when statistical data are insufficient. By using interval analysis, a new method for calculating the interval of the structural reliability as well as the reliability index is introduced, and the traditional probabilistic theory is incorporated with the interval analysis. Moreover, the new method preserves the useful part of the traditional probabilistic reliability theory, but removes the restriction of its strict requirement on data acquisition. An example is presented to demonstrate the feasibility and validity of the proposed theory.

14. Almost primes in almost all short intervals

TERÄVÄINEN, JONI

2016-09-01

Let $E_k$ be the set of positive integers having exactly $k$ prime factors. We show that almost all intervals $[x,x+\log^{1+\varepsilon} x]$ contain $E_3$ numbers, and almost all intervals $[x,x+\log^{3.51} x]$ contain $E_2$ numbers. By this we mean that there are only $o(X)$ integers $1\leq x\leq X$ for which the mentioned intervals do not contain such numbers. The result for $E_3$ numbers is optimal up to the $\varepsilon$ in the exponent. The theorem on $E_2$ numbers improves a result of Harman, which had the exponent $7+\varepsilon$ in place of $3.51$. We will also consider general $E_k$ numbers, and find them on intervals whose lengths approach $\log x$ as $k\to\infty$.

15. Generalized Quasilinear Approximation: Application to Zonal Jets

Marston, J. B.; Chini, G. P.; Tobias, S. M.

2016-05-01

Quasilinear theory is often utilized to approximate the dynamics of fluids exhibiting significant interactions between mean flows and eddies. We present a generalization of quasilinear theory to include dynamic mode interactions on the large scales. This generalized quasilinear (GQL) approximation is achieved by separating the state variables into large and small zonal scales via a spectral filter rather than by a decomposition into a formal mean and fluctuations. Nonlinear interactions involving only small zonal scales are then removed. The approximation is conservative and allows for scattering of energy between small-scale modes via the large scale (through nonlocal spectral interactions). We evaluate GQL for the paradigmatic problems of the driving of large-scale jets on a spherical surface and on the beta plane and show that it is accurate even for a small number of large-scale modes. As GQL is formally linear in the small zonal scales, it allows for the closure of the system and can be utilized in direct statistical simulation schemes that have proved an attractive alternative to direct numerical simulation for many geophysical and astrophysical problems.

16. Wavelet Approximation in Data Assimilation

NASA Technical Reports Server (NTRS)

Tangborn, Andrew; Atlas, Robert (Technical Monitor)

2002-01-01

Estimation of the state of the atmosphere with the Kalman filter remains a distant goal because of the high computational cost of evolving the error covariance for both linear and nonlinear systems. Wavelet approximation is presented here as a possible solution that efficiently compresses both global and local covariance information. We demonstrate the compression characteristics on the error correlation field from a global two-dimensional chemical constituent assimilation, and implement an adaptive wavelet approximation scheme on the assimilation of the one-dimensional Burgers' equation. In the former problem, we show that 99% of the error correlation can be represented by just 3% of the wavelet coefficients, with good representation of localized features. In the Burgers' equation assimilation, the discrete linearized equations (tangent linear model) and analysis covariance are projected onto a wavelet basis and truncated to just 6% of the coefficients. A nearly optimal forecast is achieved and we show that errors due to truncation of the dynamics are no greater than the errors due to covariance truncation.
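The coefficient-truncation idea can be illustrated with an orthonormal Haar transform, used here as a stand-in for the wavelet bases in the paper (function names and the `keep` fraction are our own, not from the source):

```python
import numpy as np

def haar_decompose(x, levels):
    """Multi-level 1-D orthonormal Haar transform; len(x) must be divisible by 2**levels."""
    coeffs, a = [], np.asarray(x, dtype=float)
    for _ in range(levels):
        even, odd = a[0::2], a[1::2]
        coeffs.append((even - odd) / np.sqrt(2))  # detail coefficients
        a = (even + odd) / np.sqrt(2)             # running approximation
    coeffs.append(a)                              # coarsest approximation last
    return coeffs

def haar_reconstruct(coeffs):
    """Invert haar_decompose exactly (the transform is orthonormal)."""
    a = coeffs[-1]
    for d in reversed(coeffs[:-1]):
        out = np.empty(2 * a.size)
        out[0::2] = (a + d) / np.sqrt(2)
        out[1::2] = (a - d) / np.sqrt(2)
        a = out
    return a

def compress(x, keep=0.06, levels=4):
    """Zero all but the largest-magnitude fraction `keep` of Haar coefficients."""
    coeffs = haar_decompose(x, levels)
    flat = np.concatenate(coeffs)
    thresh = np.sort(np.abs(flat))[int((1 - keep) * flat.size)]
    kept = [np.where(np.abs(c) >= thresh, c, 0.0) for c in coeffs]
    return haar_reconstruct(kept)
```

For smooth fields, as with the error correlations above, keeping a small fraction of coefficients reproduces the signal with small relative error, which is the effect the 3%/6% truncations exploit.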

17. Plasma Physics Approximations in Ares

SciTech Connect

Managan, R. A.

2015-01-08

Lee & More derived analytic forms for the transport properties of a plasma. Many hydro-codes use their formulae for electrical and thermal conductivity. The coefficients are complex functions of Fermi-Dirac integrals, Fn(μ/θ), the chemical potential, μ or ζ = ln(1 + e^(μ/θ)), and the temperature, θ = kT. Since these formulae are expensive to compute, rational function approximations were fit to them. Approximations are also used to find the chemical potential, either μ or ζ. The fits use ζ as the independent variable instead of μ/θ. New fits are provided for Aα(ζ), Aβ(ζ), ζ, f(ζ) = (1 + e^(-μ/θ))F1/2(μ/θ), F1/2'/F1/2, Fcα, and Fcβ. In each case the relative error of the fit is minimized, since the functions can vary by many orders of magnitude. The new fits are designed to exactly preserve the limiting values in the non-degenerate and highly degenerate limits, i.e., as ζ → 0 or ∞. The original fits due to Lee & More and George Zimmerman are presented for comparison.

18. Multiple parton scattering in nuclei: Beyond helicity amplitude approximation

SciTech Connect

Zhang, Ben-Wei; Wang, Xin-Nian

2003-01-21

Multiple parton scattering and induced parton energy loss in deeply inelastic scattering (DIS) off heavy nuclei is studied within the framework of generalized factorization in perturbative QCD with a complete calculation beyond the helicity amplitude (or soft bremsstrahlung) approximation. Such a calculation gives rise to new corrections to the modified quark fragmentation functions. The effective parton energy loss is found to be reduced by a factor of 5/6 from the result of helicity amplitude approximation.

19. Finite sampling corrected 3D noise with confidence intervals.

PubMed

Haefner, David P; Burks, Stephen D

2015-05-20

When evaluated with a spatially uniform irradiance, an imaging sensor exhibits both spatial and temporal variations, which can be described as a three-dimensional (3D) random process considered as noise. In the 1990s, NVESD engineers developed an approximation to the 3D power spectral density for noise in imaging systems known as 3D noise. The goal was to decompose the 3D noise process into spatial and temporal components to identify potential sources of origin. To characterize a sensor in terms of its 3D noise values, a finite number of samples in each of the three dimensions (two spatial, one temporal) is taken. In this correspondence, we develop the full sampling-corrected 3D noise measurement and the corresponding confidence bounds. The accuracy of these methods was demonstrated through Monte Carlo simulations. Both the sampling correction and the confidence intervals can be applied a posteriori to the classic 3D noise calculation. The Matlab functions associated with this work can be found on the Mathworks file exchange ["Finite sampling corrected 3D noise with confidence intervals," https://www.mathworks.com/matlabcentral/fileexchange/49657-finite-sampling-corrected-3d-noise-with-confidence-intervals]. PMID:26192530
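The directional-averaging idea behind 3D noise can be sketched numerically. The following is a simplified three-component version (temporal drift, fixed spatial pattern, residual), not NVESD's full seven-component decomposition, and the names are ours:

```python
import numpy as np

def three_d_noise(cube):
    """Simplified 3D-noise style split of a (frames, rows, cols) data cube.

    Returns sample std devs of: frame-to-frame temporal fluctuation,
    fixed spatial (row-column) pattern, and the residual spatiotemporal noise.
    """
    cube = cube - cube.mean()                 # remove global mean
    t = cube.mean(axis=(1, 2))                # per-frame average: temporal part
    vh = cube.mean(axis=0)                    # per-pixel average: fixed pattern
    resid = cube - t[:, None, None] - vh[None, :, :]
    return t.std(ddof=1), vh.std(ddof=1), resid.std(ddof=1)
```

Because each component is estimated from finitely many frames and pixels, these sample standard deviations are themselves random, which is exactly the finite-sampling effect the paper corrects and bounds.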

20. Robust inter-beat interval estimation in cardiac vibration signals.

PubMed

Brüser, C; Winter, S; Leonhardt, S

2013-02-01

Reliable and accurate estimation of instantaneous frequencies of physiological rhythms, such as heart rate, is critical for many healthcare applications. Robust estimation is especially challenging when novel unobtrusive sensors are used for continuous health monitoring in uncontrolled environments, because these sensors can create significant amounts of potentially unreliable data. We propose a new flexible algorithm for the robust estimation of local (beat-to-beat) intervals from cardiac vibration signals, specifically ballistocardiograms (BCGs), recorded by an unobtrusive bed-mounted sensor. This sensor allows the measurement of motions of the body which are caused by cardiac activity. Our method requires neither a training phase nor any prior knowledge about the morphology of the heart beats in the analyzed waveforms. Instead, three short-time estimators are combined using a Bayesian approach to continuously estimate the inter-beat intervals. We have validated our method on over-night BCG recordings from 33 subjects (8 normal, 25 insomniacs). On this dataset, containing approximately one million heart beats, our method achieved a mean beat-to-beat interval error of 0.78% with a coverage of 72.69%. PMID:23343518

1. Visual feedback for retuning to just intonation intervals

Ayers, R. Dean; Nordquist, Peter R.; Corn, Justin S.

2005-04-01

Musicians become used to equal temperament pitch intervals due to their widespread use in tuning pianos and other fixed-pitch instruments. For unaccompanied singing and some other performance situations, a more harmonious blending of sounds can be achieved by shifting to just intonation intervals. Lissajous figures provide immediate and striking visual feedback that emphasizes the frequency ratios and pitch intervals found among the first few members of a single harmonic series. Spirograph patterns (hypotrochoids) are also especially simple for ratios of small whole numbers, and their use for providing feedback to singers has been suggested previously [G. W. Barton, Jr., Am. J. Phys. 44(6), 593-594 (1976)]. A hybrid mixture of these methods for comparing two frequencies generates what appears to be a three-dimensional Lissajous figure: a cylindrical wire mesh that rotates about its tilted vertical axis, with zero tilt yielding the familiar Lissajous figure. Sine wave inputs work best, but the sounds of flute, recorder, whistling, and a sung 'oo' are good enough approximations to work well. This initial study compares the three modes of presentation in terms of the ease with which a singer can obtain a desired pattern and recognize its shape.
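As a rough numerical illustration (our own sketch, not from the paper): the interval sizes involved, and the closure of the Lissajous curve for whole-number frequency ratios, can both be checked directly:

```python
import math
import numpy as np

def cents(ratio):
    """Size of a frequency ratio in cents (1200 cents per octave)."""
    return 1200.0 * math.log2(ratio)

def lissajous(p, q, phase=0.0, n=2000):
    """Sample points of the Lissajous figure for two tones in the ratio p:q,
    e.g. (3, 2) for a just perfect fifth."""
    t = np.linspace(0.0, 2.0 * np.pi, n)
    return np.sin(p * t), np.sin(q * t + phase)
```

A just fifth (3:2) is about 702.0 cents, roughly 2 cents wider than the equal-tempered fifth (700 cents); that small mismatch is why the equal-tempered pattern slowly precesses on screen instead of closing into a stable figure.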

2. Self-Consistent Random Phase Approximation

Rohr, Daniel; Hellgren, Maria; Gross, E. K. U.

2012-02-01

We report self-consistent Random Phase Approximation (RPA) calculations within the Density Functional Theory. The calculations are performed by the direct minimization scheme for the optimized effective potential method developed by Yang et al. [1]. We show results for the dissociation curve of H2^+, H2 and LiH with the RPA, where the exchange correlation kernel has been set to zero. For H2^+ and H2 we also show results for RPAX, where the exact exchange kernel has been included. The RPA, in general, over-correlates. At intermediate distances a maximum is obtained that lies above the exact energy. This is known from non-self-consistent calculations and is still present in the self-consistent results. The RPAX energies are higher than the RPA energies. At equilibrium distance they accurately reproduce the exact total energy. In the dissociation limit they improve upon RPA, but are still too low. For H2^+ the RPAX correlation energy is zero. Consequently, RPAX gives the exact dissociation curve. We also present the local potentials. They indicate that a peak at the bond midpoint builds up with increasing bond distance. This is expected for the exact KS potential. [1] W. Yang and Q. Wu, Phys. Rev. Lett. 89, 143002 (2002)

3. Determination of short-term error caused by the reference clock in precision time-interval measurement and generation

Kalisz, Jozef

1988-06-01

A simple analysis based on the randomized clock period T0 yields a useful formula for its variance in terms of the Allan variance. The short-term uncertainty of the measured or generated time interval t is expressed by the standard deviation, in approximate form, as a function of the Allan variance. The estimates obtained are useful for determining the measurement uncertainty of time intervals within the approximate range of 10 ms to 100 s.
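For reference, the non-overlapping Allan variance that these uncertainty estimates rest on can be computed directly from fractional-frequency samples. A generic sketch (not the paper's code; the averaging-factor parameter m is our own convention):

```python
import numpy as np

def allan_variance(y, m=1):
    """Non-overlapping Allan variance of fractional-frequency samples y
    at averaging factor m (i.e., averaging time tau = m * tau0).

    AVAR(tau) = (1/2) * < (ybar_{i+1} - ybar_i)^2 >
    where ybar_i are consecutive averages of m samples.
    """
    y = np.asarray(y, dtype=float)
    n = (len(y) // m) * m                      # drop the ragged tail
    means = y[:n].reshape(-1, m).mean(axis=1)  # averages over tau
    d = np.diff(means)
    return 0.5 * np.mean(d * d)
```

A constant frequency gives zero, while a signal alternating between +1 and -1 at the sample rate gives AVAR = 2 at m = 1 and averages away at m = 2, matching the intuition that the Allan variance separates short-term noise from the stable mean.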

4. Heat flow in the postquasistatic approximation

SciTech Connect

Rodriguez-Mueller, B.; Peralta, C.; Barreto, W.; Rosales, L.

2010-08-15

We apply the postquasistatic approximation to study the evolution of spherically symmetric fluid distributions undergoing dissipation in the form of radial heat flow. For a model that corresponds to an incompressible fluid departing from the static equilibrium, it is not possible to go far from the initial state after the emission of a small amount of energy. Initially collapsing distributions of matter are not permitted. Emission of energy can be considered as a mechanism to avoid the collapse. If the distribution collapses initially and emits one hundredth of the initial mass only the outermost layers evolve. For a model that corresponds to a highly compressed Fermi gas, only the outermost shell can evolve with a shorter hydrodynamic time scale.

5. Bilayer graphene spectral function in the random phase approximation and self-consistent GW approximation

Sabashvili, Andro; Östlund, Stellan; Granath, Mats

2013-08-01

We calculate the single-particle spectral function for doped bilayer graphene in the low energy limit, described by two parabolic bands with zero band gap and long range Coulomb interaction. Calculations are done using thermal Green's functions in both the random phase approximation (RPA) and the fully self-consistent GW approximation. Consistent with previous studies RPA yields a spectral function which, apart from the Landau quasiparticle peaks, shows additional coherent features interpreted as plasmarons, i.e., composite electron-plasmon excitations. In the GW approximation the plasmaron becomes incoherent and peaks are replaced by much broader features. The deviation of the quasiparticle weight and mass renormalization from their noninteracting values is small, which indicates that bilayer graphene is a weakly interacting system. The electron energy loss function, Im[-ε_q^(-1)(ω)], shows a sharp plasmon mode in RPA which in the GW approximation becomes less coherent, consistent with the weaker plasmaron features in the corresponding single-particle spectral function.

6. Probability Distribution for Flowing Interval Spacing

SciTech Connect

S. Kuzio

2004-09-22

Fracture spacing is a key hydrologic parameter in analyses of matrix diffusion. Although the individual fractures that transmit flow in the saturated zone (SZ) cannot be identified directly, it is possible to determine the fractured zones that transmit flow from flow meter survey observations. The fractured zones that transmit flow as identified through borehole flow meter surveys have been defined in this report as flowing intervals. The flowing interval spacing is measured between the midpoints of each flowing interval. The determination of flowing interval spacing is important because the flowing interval spacing parameter is a key hydrologic parameter in SZ transport modeling, which impacts the extent of matrix diffusion in the SZ volcanic matrix. The output of this report is input to the ''Saturated Zone Flow and Transport Model Abstraction'' (BSC 2004 [DIRS 170042]). Specifically, the analysis of data and development of a data distribution reported herein is used to develop the uncertainty distribution for the flowing interval spacing parameter for the SZ transport abstraction model. Figure 1-1 shows the relationship of this report to other model reports that also pertain to flow and transport in the SZ. Figure 1-1 also shows the flow of key information among the SZ reports. It should be noted that Figure 1-1 does not contain a complete representation of the data and parameter inputs and outputs of all SZ reports, nor does it show inputs external to this suite of SZ reports. Use of the developed flowing interval spacing probability distribution is subject to the limitations of the assumptions discussed in Sections 5 and 6 of this analysis report. The number of fractures in a flowing interval is not known. Therefore, the flowing intervals are assumed to be composed of one flowing zone in the transport simulations. This analysis may overestimate the flowing interval spacing because the number of fractures that contribute to a flowing interval cannot be
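The basic quantity being fit, the spacing between midpoints of consecutive flowing intervals, is straightforward to compute from flow-meter picks. A minimal sketch (illustrative only, not from the report):

```python
import numpy as np

def flowing_interval_spacings(intervals):
    """Spacings between midpoints of consecutive flowing intervals,
    where each interval is a (top_depth, bottom_depth) pair sorted downhole."""
    midpoints = np.array([(top + bottom) / 2.0 for top, bottom in intervals])
    return np.diff(midpoints)
```

The resulting spacing samples are what the report pools across boreholes to build the uncertainty distribution used by the SZ transport abstraction model.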

7. Analytical Approximation of Spectrum for Pulse X-ray Tubes

Vavilov, S.; Koshkin, G.; Udod, V.; Fofanof, O.

2016-01-01

Among the main characteristics of pulsed X-ray apparatuses, the spectral energy characteristics are the most important ones: the spectral distribution of the photon energy, and the effective and maximum energy of quanta. Knowing the spectral characteristics of the radiation of pulse sources is very important for their practical use in non-destructive testing. We have attempted an analytical approximation of the pulsed X-ray apparatus spectra reported in different experimental papers. The results of the analytical approximation of the energy spectrum for a pulse X-ray tube are presented. The formulas obtained agree with experimental data and can be used in designing pulsed X-ray apparatuses.

8. Interplay of approximate planning strategies.

PubMed

Huys, Quentin J M; Lally, Níall; Faulkner, Paul; Eshel, Neir; Seifritz, Erich; Gershman, Samuel J; Dayan, Peter; Roiser, Jonathan P

2015-03-10

Humans routinely formulate plans in domains so complex that even the most powerful computers are taxed. To do so, they seem to avail themselves of many strategies and heuristics that efficiently simplify, approximate, and hierarchically decompose hard tasks into simpler subtasks. Theoretical and cognitive research has revealed several such strategies; however, little is known about their establishment, interaction, and efficiency. Here, we use model-based behavioral analysis to provide a detailed examination of the performance of human subjects in a moderately deep planning task. We find that subjects exploit the structure of the domain to establish subgoals in a way that achieves a nearly maximal reduction in the cost of computing values of choices, but then combine partial searches with greedy local steps to solve subtasks, and maladaptively prune the decision trees of subtasks in a reflexive manner upon encountering salient losses. Subjects come idiosyncratically to favor particular sequences of actions to achieve subgoals, creating novel complex actions or "options." PMID:25675480

9. Approximating metal-insulator transitions

Danieli, Carlo; Rayanov, Kristian; Pavlov, Boris; Martin, Gaven; Flach, Sergej

2015-12-01

We consider quantum wave propagation in one-dimensional quasiperiodic lattices. We propose an iterative construction of quasiperiodic potentials from sequences of potentials with increasing spatial period. At each finite iteration step, the eigenstates reflect the properties of the limiting quasiperiodic potential up to a controlled maximum system size. We then observe approximate Metal-Insulator Transitions (MIT) at the finite iteration steps. We also report evidence of mobility edges, which are at variance with the celebrated Aubry-André model. The dynamics near the MIT shows a critical slowing down of the ballistic group velocity in the metallic phase, similar to the divergence of the localization length in the insulating phase.

10. Strong shock implosion, approximate solution

Fujimoto, Y.; Mishkin, E. A.; Alejaldre, C.

1983-01-01

The self-similar, center-bound motion of a strong spherical, or cylindrical, shock wave moving through an ideal gas with a constant adiabatic index γ = cp/cv is considered, and a linearized, approximate solution is derived. An X, Y phase plane of the self-similar solution is defined, and the representative curve of the system behind the shock front is replaced by a straight line connecting the mapping of the shock front with that of its tail. The reduced pressure P(ξ), density R(ξ), and velocity U1(ξ) are found in closed, quite accurate, form. Comparison with numerically obtained results, for γ = 5/3 and γ = 7/5, is shown.

11. Communication: Improved pair approximations in local coupled-cluster methods

SciTech Connect

Schwilk, Max; Werner, Hans-Joachim; Usvyat, Denis

2015-03-28

In local coupled cluster treatments the electron pairs can be classified according to the magnitude of their energy contributions or distances into strong, close, weak, and distant pairs. Different approximations are introduced for the latter three classes. In this communication, an improved simplified treatment of close and weak pairs is proposed, which is based on long-range cancellations of individually slowly decaying contributions in the amplitude equations. Benchmark calculations for correlation, reaction, and activation energies demonstrate that these approximations work extremely well, while pair approximations based on local second-order Møller-Plesset theory can lead to errors that are 1-2 orders of magnitude larger.

12. Approximate analytic solutions to the NPDD: Short exposure approximations

Close, Ciara E.; Sheridan, John T.

2014-04-01

There have been many attempts to accurately describe the photochemical processes that take place in photopolymer materials. As the models have become more accurate, solving them has become more numerically intensive and more 'opaque'. Recent models incorporate the major photochemical reactions taking place, as well as the diffusion effects resulting from the photo-polymerisation process, and have accurately described these processes in a number of different materials. It is our aim to develop accessible mathematical expressions which provide physical insights and simple quantitative predictions of practical value to material designers and users. In this paper, starting with the Non-Local Photo-Polymerisation Driven Diffusion (NPDD) model's coupled integro-differential equations, we first simplify these equations and validate the accuracy of the resulting approximate model. This new set of governing equations is then used to produce accurate analytic solutions (polynomials) describing the evolution of the monomer and polymer concentrations, and the grating refractive index modulation, in the case of short, low-intensity sinusoidal exposures. The physical significance of the results and their consequences for holographic data storage (HDS) are then discussed.

13. Approximate penetration factors for nuclear reactions of astrophysical interest

NASA Technical Reports Server (NTRS)

Humblet, J.; Fowler, W. A.; Zimmerman, B. A.

1987-01-01

The ranges of validity of approximations of P(l), the penetration factor which appears in the parameterization of nuclear-reaction cross sections at low energies and is employed in the extrapolation of laboratory data to even lower energies of astrophysical interest, are investigated analytically. Consideration is given to the WKB approximation, P(l) at the energy of the total barrier, approximations derived from the asymptotic expansion of G(l) for large eta, approximations for small values of the parameter x, applications of P(l) to nuclear reactions, and the dependence of P(l) on channel radius. Numerical results are presented in tables and graphs, and parameter ranges where the danger of serious errors is high are identified.
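As a hedged illustration of the leading-order physics only (not the refined approximations the paper analyzes), the s-wave WKB (Gamow) penetration factor can be estimated from the Sommerfeld parameter in its standard numerical form, 2πη = 31.29 Z1 Z2 (μ/E)^(1/2) with E in keV and the reduced mass μ in amu:

```python
import math

def sommerfeld_2pi_eta(z1, z2, mu_amu, e_kev):
    """2*pi*eta in the standard numerical form used for thermonuclear
    reaction rates: 2*pi*eta = 31.29 * Z1*Z2 * sqrt(mu/E),
    with E in keV and mu in amu."""
    return 31.29 * z1 * z2 * math.sqrt(mu_amu / e_kev)

def gamow_penetration(z1, z2, mu_amu, e_kev):
    """Leading s-wave WKB estimate P0 ~ exp(-2*pi*eta), valid well
    below the Coulomb barrier; the approximations discussed above
    refine this leading behaviour."""
    return math.exp(-sommerfeld_2pi_eta(z1, z2, mu_amu, e_kev))

# p + p at 1 MeV centre-of-mass energy (reduced mass ~ 0.504 amu)
p_pen = gamow_penetration(1, 1, 0.504, 1000.0)
```

The steep energy dependence of this exponential is exactly why extrapolation to astrophysical energies is so sensitive to the choice of approximation for P(l).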

14. Low rank approximation in G0W0 calculations

Shao, MeiYue; Lin, Lin; Yang, Chao; Liu, Fang; Da Jornada, Felipe H.; Deslippe, Jack; Louie, Steven G.

2016-08-01

The single-particle energies obtained in a Kohn-Sham density functional theory (DFT) calculation are generally known to be poor approximations to electron excitation energies that are measured in transport, tunneling and spectroscopic experiments such as photo-emission spectroscopy. The correction to these energies can be obtained from the poles of a single-particle Green's function derived from a many-body perturbation theory. From a computational perspective, the accuracy and efficiency of such an approach depends on how a self-energy term that properly accounts for dynamic screening of electrons is approximated. The G0W0 approximation is a widely used technique in which the self-energy is expressed as the convolution of a non-interacting Green's function (G0) and a screened Coulomb interaction (W0) in the frequency domain. The computational cost associated with such a convolution is high due to the high complexity of evaluating W0 at multiple frequencies. In this paper, we discuss how the cost of a G0W0 calculation can be reduced by constructing a low rank approximation to the frequency-dependent part of W0. In particular, we examine the effect of such a low rank approximation on the accuracy of the G0W0 approximation. We also discuss how the numerical convolution of G0 and W0 can be evaluated efficiently and accurately by using a contour deformation technique with an appropriate choice of the contour.
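The paper's specific construction is not reproduced here, but the generic idea behind low-rank compression of a matrix with rapidly decaying singular values can be sketched with a truncated SVD (the matrix below is stand-in data, purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the frequency-dependent part of W0: a matrix with
# rapidly decaying singular values, as assumed for screened interactions.
n = 100
u, _ = np.linalg.qr(rng.standard_normal((n, n)))
v, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 10.0 ** -np.arange(n, dtype=float)
w = (u * s) @ v.T                       # W = U diag(s) V^T

# Rank-k truncation: keep only the k dominant singular triplets.
k = 10
uu, ss, vv = np.linalg.svd(w)
w_k = (uu[:, :k] * ss[:k]) @ vv[:k, :]

# Spectral-norm error of the best rank-k approximation equals the
# (k+1)-th singular value (Eckart-Young theorem).
err = np.linalg.norm(w - w_k, 2)
```

When the singular values decay this fast, a rank-10 surrogate reproduces the full 100x100 matrix to ten digits, which is the sense in which low-rank compression can cut the cost of evaluating W0 at many frequencies.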

15. Sunspot Time Series: Passive and Active Intervals

Zięba, S.; Nieckarz, Z.

2014-07-01

Solar activity slowly and irregularly decreases from the first spotless day (FSD) in the declining phase of the old sunspot cycle and systematically, but also in an irregular way, increases to the new cycle maximum after the last spotless day (LSD). The time interval between the first and the last spotless day can be called the passive interval (PI), while the time interval from the last spotless day to the first one after the new cycle maximum is the related active interval (AI). Minima of solar cycles are inside PIs, while maxima are inside AIs. In this article, we study the properties of passive and active intervals to determine the relation between them. We have found that some properties of PIs, and related AIs, differ significantly between two groups of solar cycles; this has allowed us to classify Cycles 8 - 15 as passive cycles, and Cycles 17 - 23 as active ones. We conclude that the solar activity in the PI declining phase (a descending phase of the previous cycle) determines the strength of the approaching maximum in the case of active cycles, while the activity of the PI rising phase (a phase of the ongoing cycle early growth) determines the strength of passive cycles. This can have implications for solar dynamo models. Our approach indicates the important role of solar activity during the declining and the rising phases of the solar-cycle minimum.

16. Natural frequencies of structures with interval parameters

Sofi, A.; Muscolino, G.; Elishakoff, I.

2015-07-01

This paper deals with the evaluation of the lower and upper bounds of the natural frequencies of structures with uncertain-but-bounded parameters. The solution of the generalized interval eigenvalue problem is pursued by taking into account the actual variability and dependencies of uncertain structural parameters affecting the mass and stiffness matrices. To this aim, interval uncertainties are handled by applying the improved interval analysis via extra unitary interval (EUI), recently introduced by the first two authors. By associating an EUI to each uncertain-but-bounded parameter, the cases of mass and stiffness matrices affected by fully disjoint, completely or partially coincident uncertainties are considered. Then, based on sensitivity analysis, it is shown that the bounds of the interval eigenvalues can be evaluated as solution of two appropriate deterministic eigenvalue problems without requiring any combinatorial procedure. If the eigenvalues are monotonic functions of the uncertain parameters, then the exact bounds are obtained. The accuracy of the proposed method is demonstrated by numerical results concerning truss and beam structures with material and/or geometrical uncertainties.
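The key observation above — for eigenvalues that are monotonic in the uncertain parameters, two deterministic endpoint eigenproblems give the exact bounds — can be illustrated on a toy two-degree-of-freedom spring chain (this sketch does not implement the EUI formalism; the system and values are illustrative):

```python
import numpy as np

def eig_bounds(k_lo, k_hi):
    """Eigenvalue bounds for K(k1) x = lambda x, where the stiffness k1
    of the first spring of a 2-DOF chain (unit masses, second spring
    stiffness 1) lies in the interval [k_lo, k_hi]. dK/dk1 is positive
    semi-definite, so every eigenvalue is monotone non-decreasing in k1
    and the two endpoint problems bound the interval eigenvalues."""
    def eigs(k1):
        K = np.array([[k1 + 1.0, -1.0],
                      [-1.0, 1.0]])
        return np.sort(np.linalg.eigvalsh(K))
    return eigs(k_lo), eigs(k_hi)

lo, hi = eig_bounds(0.8, 1.2)   # interval stiffness [0.8, 1.2]
```

Only two deterministic eigenvalue solves are needed, instead of the 2^n vertex combinations a naive combinatorial approach would require.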

17. Function approximation in inhibitory networks.

PubMed

Tripp, Bryan; Eliasmith, Chris

2016-05-01

In performance-optimized artificial neural networks, such as convolutional networks, each neuron makes excitatory connections with some of its targets and inhibitory connections with others. In contrast, physiological neurons are typically either excitatory or inhibitory, not both. This is a puzzle, because it seems to constrain computation, and because there are several counter-examples that suggest that it may not be a physiological necessity. Parisien et al. (2008) showed that any mixture of excitatory and inhibitory functional connections could be realized by a purely excitatory projection in parallel with a two-synapse projection through an inhibitory population. They showed that this works well with ratios of excitatory and inhibitory neurons that are realistic for the neocortex, suggesting that perhaps the cortex efficiently works around this apparent computational constraint. Extending this work, we show here that mixed excitatory and inhibitory functional connections can also be realized in networks that are dominated by inhibition, such as those of the basal ganglia. Further, we show that the function-approximation capacity of such connections is comparable to that of idealized mixed-weight connections. We also study whether such connections are viable in recurrent networks, and find that such recurrent networks can flexibly exhibit a wide range of dynamics. These results offer a new perspective on computation in the basal ganglia, and also perhaps on inhibitory networks within the cortex. PMID:26963256

18. Interplay of approximate planning strategies

PubMed Central

Huys, Quentin J. M.; Lally, Níall; Faulkner, Paul; Eshel, Neir; Seifritz, Erich; Gershman, Samuel J.; Dayan, Peter; Roiser, Jonathan P.

2015-01-01

Humans routinely formulate plans in domains so complex that even the most powerful computers are taxed. To do so, they seem to avail themselves of many strategies and heuristics that efficiently simplify, approximate, and hierarchically decompose hard tasks into simpler subtasks. Theoretical and cognitive research has revealed several such strategies; however, little is known about their establishment, interaction, and efficiency. Here, we use model-based behavioral analysis to provide a detailed examination of the performance of human subjects in a moderately deep planning task. We find that subjects exploit the structure of the domain to establish subgoals in a way that achieves a nearly maximal reduction in the cost of computing values of choices, but then combine partial searches with greedy local steps to solve subtasks, and maladaptively prune the decision trees of subtasks in a reflexive manner upon encountering salient losses. Subjects come idiosyncratically to favor particular sequences of actions to achieve subgoals, creating novel complex actions or “options.” PMID:25675480

19. Multidimensional stochastic approximation Monte Carlo.

PubMed

Zablotskiy, Sergey V; Ivanov, Victor A; Paul, Wolfgang

2016-06-01

Stochastic Approximation Monte Carlo (SAMC) has been established as a mathematically founded powerful flat-histogram Monte Carlo method, used to determine the density of states, g(E), of a model system. We show here how it can be generalized for the determination of multidimensional probability distributions (or equivalently densities of states) of macroscopic or mesoscopic variables defined on the space of microstates of a statistical mechanical system. This establishes this method as a systematic way for coarse graining a model system, or, in other words, for performing a renormalization group step on a model. We discuss the formulation of the Kadanoff block spin transformation and the coarse-graining procedure for polymer models in this language. We also apply it to a standard case in the literature of two-dimensional densities of states, where two competing energetic effects are present, g(E1,E2). We show when and why care has to be exercised when obtaining the microcanonical density of states g(E1+E2) from g(E1,E2). PMID:27415383
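The collapse from a two-dimensional density of states to the microcanonical one amounts to summing over anti-diagonals of the (E1, E2) grid. A minimal sketch, assuming equal integer spacing on both energy axes (the unequal-grid case is where the caution mentioned above applies):

```python
import numpy as np

def collapse_dos(g2, e1, e2):
    """g(E) = sum over E1 + E2 = E of g(E1, E2), for energy grids e1, e2
    with equal unit spacing (an assumption of this sketch)."""
    e_tot = np.arange(e1[0] + e2[0], e1[-1] + e2[-1] + 1)
    g = np.zeros(len(e_tot))
    for i, a in enumerate(e1):
        for j, b in enumerate(e2):
            g[(a + b) - e_tot[0]] += g2[i, j]
    return e_tot, g

# For two independent subsystems the joint DOS factorizes, so g(E)
# reduces to the discrete convolution of the two marginal DOS.
e1 = np.arange(4)
e2 = np.arange(4)
g2 = np.outer([1.0, 3.0, 3.0, 1.0], [1.0, 2.0, 1.0, 0.0])
e_tot, g = collapse_dos(g2, e1, e2)
```

The total number of states is conserved by the collapse, which is a quick sanity check before trusting the resulting g(E).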

20. Decision analysis with approximate probabilities

NASA Technical Reports Server (NTRS)

Whalen, Thomas

1992-01-01

This paper concerns decisions under uncertainty in which the probabilities of the states of nature are only approximately known. Decision problems involving three states of nature are studied because some key issues do not arise in two-state problems, while probability spaces with more than three states of nature are essentially impossible to graph. The primary focus is on two levels of probabilistic information. In one level, the three probabilities are separately rounded to the nearest tenth, which can lead to sets of rounded probabilities that add up to 0.9, 1.0, or 1.1. In the other level, probabilities are rounded to the nearest tenth in such a way that the rounded probabilities are forced to sum to 1.0. For comparison, six additional levels of probabilistic information, previously analyzed, were also included in the present analysis. A simulation experiment compared four criteria for decision making using linearly constrained probabilities (Maximin, Midpoint, Standard Laplace, and Extended Laplace) under the eight different levels of information about probability. The Extended Laplace criterion, which uses a second-order maximum entropy principle, performed best overall.
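The first information level and the Maximin criterion can be made concrete with a small sketch (the payoff matrix below is hypothetical, chosen only to show that Maximin ignores the probabilities entirely):

```python
import numpy as np

def round_to_tenths(p):
    """First information level: round each state probability to the
    nearest tenth independently -- the results may sum to 0.9, 1.0,
    or 1.1."""
    return np.round(np.asarray(p), 1)

def maximin(payoffs):
    """Maximin criterion: choose the action whose worst-case payoff is
    largest; the probability information is not used at all.
    `payoffs` has one row per action, one column per state."""
    return int(np.argmax(payoffs.min(axis=1)))

p = round_to_tenths([0.24, 0.33, 0.43])   # rounds to 0.2, 0.3, 0.4
payoffs = np.array([[5.0, 1.0, 0.0],      # hypothetical payoff matrix
                    [3.0, 3.0, 2.0],
                    [9.0, 0.0, 1.0]])
best = maximin(payoffs)                   # action with best worst case
```

Here the independently rounded probabilities sum to 0.9, illustrating why criteria that consume them must cope with sub- or super-normalized inputs.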

2. Pigeons' choices between fixed-interval and random-interval schedules: utility of variability?

PubMed

Andrzejewski, Matthew E; Cardinal, Claudia D; Field, Douglas P; Flannery, Barbara A; Johnson, Michael; Bailey, Kathleen; Hineline, Philip N

2005-03-01

Pigeons' choosing between fixed-interval and random-interval schedules of reinforcement was investigated in three experiments using a discrete-trial procedure. In all three experiments, the random-interval schedule was generated by sampling a probability distribution at an interval (and in multiples of the interval) equal to that of the fixed-interval schedule. Thus the programmed delays to reinforcement on the random alternative were never shorter and were often longer than the fixed interval. Despite this feature, the fixed schedule was not strongly preferred. Increases in the probability used to generate the random interval resulted in decreased preferences for the fixed schedule. In addition, the number of consecutive choices on the preferred alternative varied directly with preference, whereas the consecutive number of choices on the nonpreferred alternative was fairly constant. The probability of choosing the random alternative was unaffected by the immediately prior interval encountered on that schedule, even when it was very long relative to the average value. The results loosely support conceptions of a "preference for variability" from foraging theory and the "utility of behavioral variability" from human decision-making literatures. PMID:15828591
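The schedule-generation procedure described above can be sketched directly: sampling a fixed probability at every multiple of the base interval makes the programmed delay a geometric multiple of that interval, so it is never shorter than the fixed-interval alternative (parameter values below are illustrative, not those of the experiments):

```python
import random

def random_interval_delay(p, base, rng):
    """Programmed delay on a random-interval schedule generated by
    sampling success probability p at every multiple of `base` (the
    fixed-interval value), as in the procedure described above.
    The delay is `base` times a geometric variate, hence never
    shorter than the fixed interval."""
    k = 1
    while rng.random() >= p:
        k += 1
    return k * base

rng = random.Random(42)
delays = [random_interval_delay(0.5, 10.0, rng) for _ in range(1000)]
# Mean programmed delay is base / p, i.e. twice the fixed interval here.
```

Raising p shortens the expected delay toward the fixed interval, matching the finding that higher generating probabilities reduced preference for the fixed schedule.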

3. Perceptual interference decays over short unfilled intervals.

PubMed

Schulkind, M D

2000-09-01

The perceptual interference effect refers to the fact that object identification is directly related to the amount of information available at initial exposure. The present article investigated whether perceptual interference would dissipate when a short, unfilled interval was introduced between exposures to a degraded object. Across three experiments using both musical and pictorial stimuli, identification performance increased directly with the length of the unfilled interval. Consequently, significant perceptual interference was obtained only when the interval between exposures was relatively short (< 500 msec for melodies; < 300 msec for pictures). These results are consistent with explanations that attribute perceptual interference to increased perceptual noise created by exposures to highly degraded objects. The data also suggest that perceptual interference is mediated by systems that are not consciously controlled by the subject and that perceptual interference in the visual domain decays more rapidly than perceptual interference in the auditory domain. PMID:11105520

4. Children's artistic responses to musical intervals.

PubMed

Smith, L D; Williams, R N

1999-01-01

In one experiment, White South African boys drew pictures in response to four musical intervals. In the second, the subjects were of both sexes and drawn from White, urban Black, and rural Black populations. Six intervals were used. Drawing content was similar cross-culturally. Consonances were perceived as generally positive; dissonances, generally negative. There was also an activity dimension. Children in a lower grade drew more concrete pictures than did those in a higher grade, regardless of age. Even young listeners were fairly consistent in their responses. This suggests that perception of musical meaning is a universal rather than culturally based phenomenon. PMID:10696271

5. Optimal Colonoscopy Surveillance Interval after Polypectomy

PubMed Central

Kim, Tae Oh

2016-01-01

The detection and removal of adenomatous polyps and postpolypectomy surveillance are considered important for the control of colorectal cancer (CRC). Surveillance using colonoscopy is an effective tool for preventing CRC after colorectal polypectomy, especially if compliance is good. In current practice, the intervals between colonoscopies after polypectomy are variable. Different recommendations for recognizing at risk groups and defining surveillance intervals after an initial finding of colorectal adenomas have been published. However, high-grade dysplasia and the number and size of adenomas are known major cancer predictors. Based on this, a subgroup of patients that may benefit from intensive surveillance colonoscopy can be identified. PMID:27484812

6. Magnus approximation in neutrino oscillations

Acero, Mario A.; Aguilar-Arevalo, Alexis A.; D'Olivo, J. C.

2011-04-01

Oscillations between active and sterile neutrinos remain as an open possibility to explain some anomalous experimental observations. In a four-neutrino (three active plus one sterile) mixing scheme, we use the Magnus expansion of the evolution operator to study the evolution of neutrino flavor amplitudes within the Earth. We apply this formalism to calculate the transition probabilities from active to sterile neutrinos with energies of the order of a few GeV, taking into account the matter effect for a varying terrestrial density.
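One reason the Magnus expansion suits evolution problems like this is that the first-order term U ≈ exp(-i ∫ H dt) is exactly unitary for Hermitian H at any truncation order. The sketch below uses a toy two-flavor Hamiltonian, not the paper's four-neutrino scheme:

```python
import numpy as np

def expm_minus_i(a):
    """exp(-i*A) for Hermitian A, via the spectral decomposition."""
    w, v = np.linalg.eigh(a)
    return (v * np.exp(-1j * w)) @ v.conj().T

def magnus1(h_of_t, t0, t1, steps=201):
    """First-order Magnus approximation U ~ exp(-i * integral of H dt),
    with the integral done by the trapezoidal rule on a uniform grid.
    The truncation is exactly unitary for Hermitian H, unlike a naive
    Dyson-series truncation."""
    ts = np.linspace(t0, t1, steps)
    hs = np.array([h_of_t(t) for t in ts])
    dt = ts[1] - ts[0]
    a = dt * (hs[0] / 2 + hs[1:-1].sum(axis=0) + hs[-1] / 2)
    return expm_minus_i(a)

def h(t):
    # Toy two-flavor Hamiltonian with a slowly varying "matter" term.
    return np.array([[1.0 + 0.1 * t, 0.2],
                     [0.2, -1.0]])

U = magnus1(h, 0.0, 1.0)
```

Unitarity means the flavor probabilities computed from U always sum to one, which a truncated perturbative series does not guarantee.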

7. Born approximation, scattering, and algorithm

Martinez, Alex; Hu, Mengqi; Gu, Haicheng; Qiao, Zhijun

2015-05-01

In the past few decades, many imaging algorithms have been designed under the assumption that multiple scattering is absent. Recently, we discussed an algorithm for removing high-order scattering components from collected data. This paper is a continuation of our previous work. First, we investigate the current state of multiple scattering in SAR. Then, we revise our method and test it. Given an estimate of our target reflectivity, we compute the multiple-scattering effects in the target region for various frequencies. Furthermore, we propagate this energy through free space toward our antenna and remove it from the collected data.

8. Periodic, chaotic, and doubled earthquake recurrence intervals on the deep San Andreas fault.

PubMed

Shelly, David R

2010-06-11

Earthquake recurrence histories may provide clues to the timing of future events, but long intervals between large events obscure full recurrence variability. In contrast, small earthquakes occur frequently, and recurrence intervals are quantifiable on a much shorter time scale. In this work, I examine an 8.5-year sequence of more than 900 recurring low-frequency earthquake bursts composing tremor beneath the San Andreas fault near Parkfield, California. These events exhibit tightly clustered recurrence intervals that, at times, oscillate between approximately 3 and approximately 6 days, but the patterns sometimes change abruptly. Although the environments of large and low-frequency earthquakes are different, these observations suggest that similar complexity might underlie sequences of large earthquakes. PMID:20538948

9. Optimal and Most Exact Confidence Intervals for Person Parameters in Item Response Theory Models

ERIC Educational Resources Information Center

Doebler, Anna; Doebler, Philipp; Holling, Heinz

2013-01-01

The common way to calculate confidence intervals for item response theory models is to assume that the standardized maximum likelihood estimator for the person parameter [theta] is normally distributed. However, this approximation is often inadequate for short and medium test lengths. As a result, the coverage probabilities fall below the given…

10. On the Effective Construction of Compactly Supported Wavelets Satisfying Homogenous Boundary Conditions on the Interval

NASA Technical Reports Server (NTRS)

Chiavassa, G.; Liandrat, J.

1996-01-01

We construct compactly supported wavelet bases satisfying homogeneous boundary conditions on the interval (0,1). The main features of multiresolution analysis on the line are retained, including polynomial approximation and tree algorithms. The case of H(sub 0)(sup 1)(0,1) is detailed, and numerical values, required for the implementation, are provided for the Neumann and Dirichlet boundary conditions.

11. ADEQUACY OF CONFIDENCE INTERVAL ESTIMATES OF YIELD RESPONSES TO OZONE ESTIMATED FROM NCLAN DATA

EPA Science Inventory

Three methods of estimating confidence intervals for the parameters of Weibull nonlinear models are examined. These methods are based on linear approximation theory (Wald), the likelihood ratio test, and Clarke's (1987) procedures. Analyses are based on Weibull dose-response equati...

12. The effect of inter-set rest intervals on resistance exercise-induced muscle hypertrophy.

PubMed

2014-12-01

Due to a scarcity of longitudinal trials directly measuring changes in muscle girth, previous recommendations for inter-set rest intervals in resistance training programs designed to stimulate muscular hypertrophy were primarily based on the post-exercise endocrinological response and other mechanisms theoretically related to muscle growth. New research regarding the effects of inter-set rest interval manipulation on resistance training-induced muscular hypertrophy is reviewed here to evaluate current practices and provide directions for future research. Of the studies measuring long-term muscle hypertrophy in groups employing different rest intervals, none have found superior muscle growth in the shorter compared with the longer rest interval group and one study has found the opposite. Rest intervals less than 1 minute can result in acute increases in serum growth hormone levels and these rest intervals also decrease the serum testosterone to cortisol ratio. Long-term adaptations may abate the post-exercise endocrinological response and the relationship between the transient change in hormonal production and chronic muscular hypertrophy is highly contentious and appears to be weak. The relationship between the rest interval-mediated effect on immune system response, muscle damage, metabolic stress, or energy production capacity and muscle hypertrophy is still ambiguous and largely theoretical. In conclusion, the literature does not support the hypothesis that training for muscle hypertrophy requires shorter rest intervals than training for strength development or that predetermined rest intervals are preferable to auto-regulated rest periods in this regard. PMID:25047853

13. Approximate Techniques for Representing Nuclear Data Uncertainties

SciTech Connect

2007-01-01

Computational tools are available to utilize sensitivity and uncertainty (S/U) methods for a wide variety of applications in reactor analysis and criticality safety. S/U analysis generally requires knowledge of the underlying uncertainties in evaluated nuclear data, as expressed by covariance matrices; however, only a few nuclides currently have covariance information available in ENDF/B-VII. Recently new covariance evaluations have become available for several important nuclides, but a complete set of uncertainties for all materials needed in nuclear applications is unlikely to be available for several years at least. Therefore if the potential power of S/U techniques is to be realized for near-term projects in advanced reactor design and criticality safety analysis, it is necessary to establish procedures for generating approximate covariance data. This paper discusses an approach to create applications-oriented covariance data by applying integral uncertainties to differential data within the corresponding energy range.

14. A Gradient Descent Approximation for Graph Cuts

Yildiz, Alparslan; Akgul, Yusuf Sinan

Graph cuts have become very popular in many areas of computer vision, including segmentation, energy minimization, and 3D reconstruction. Their ability to find optimal results efficiently and their convenience of use are among the reasons for this popularity. However, there are a few issues with graph cuts, such as the inherently sequential nature of popular algorithms and the memory bloat in large-scale problems. In this paper, we introduce a novel method for approximating the graph cut optimization by posing the problem as a gradient descent formulation. The advantages of our method are the ability to work efficiently on large problems and the possibility of convenient implementation on parallel architectures such as inexpensive Graphics Processing Units (GPUs). We have implemented the proposed method on the Nvidia 8800GTS GPU. Classical segmentation experiments on static images and video data showed the effectiveness of our method.

15. Improved effective vector boson approximation revisited

Bernreuther, Werner; Chen, Long

2016-03-01

We reexamine the improved effective vector boson approximation, which is based on two-vector-boson luminosities Lpol, for the computation of weak gauge-boson hard scattering subprocesses V1V2 → W in high-energy hadron-hadron or e−e+ collisions. We calculate these luminosities for the nine combinations of the transverse and longitudinal polarizations of V1 and V2 in the unitary and axial gauge. For these two gauge choices the quality of this approach is investigated for the reactions e−e+ → W−W+ νe ν̄e and e−e+ → t t̄ νe ν̄e using appropriate phase-space cuts.

16. An Empirical Method for Establishing Positional Confidence Intervals Tailored for Composite Interval Mapping of QTL

Technology Transfer Automated Retrieval System (TEKTRAN)

Improved genetic resolution and availability of sequenced genomes have made positional cloning of moderate-effect QTL (quantitative trait loci) realistic in several systems, emphasizing the need for precise and accurate derivation of positional confidence intervals (CIs). Support interval (SI) meth...

17. Coefficient Omega Bootstrap Confidence Intervals: Nonnormal Distributions

ERIC Educational Resources Information Center

2013-01-01

The performance of the normal theory bootstrap (NTB), the percentile bootstrap (PB), and the bias-corrected and accelerated (BCa) bootstrap confidence intervals (CIs) for coefficient omega was assessed through a Monte Carlo simulation under conditions not previously investigated. Of particular interest were nonnormal Likert-type and binary items.…

18. Interval scanning photomicrography of microbial cell populations.

NASA Technical Reports Server (NTRS)

Casida, L. E., Jr.

1972-01-01

A single reproducible area of the preparation in a fixed focal plane is photographically scanned at intervals during incubation. The procedure can be used for evaluating the aerobic or anaerobic growth of many microbial cells simultaneously within a population. In addition, the microscope is not restricted to the viewing of any one microculture preparation, since the slide cultures are incubated separately from the microscope.

19. Duration perception in crossmodally-defined intervals.

PubMed

Mayer, Katja M; Di Luca, Massimiliano; Ernst, Marc O

2014-03-01

How humans perform duration judgments with multisensory stimuli is an ongoing debate. Here, we investigated how sub-second duration judgments are achieved by asking participants to compare the duration of a continuous sound to the duration of an empty interval in which onset and offset were marked by signals of different modalities, using all combinations of visual, auditory and tactile stimuli. The pattern of perceived durations across five stimulus durations (ranging from 100 ms to 900 ms) follows the Vierordt Law. Furthermore, intervals with a sound as onset (audio-visual, audio-tactile) are perceived as longer than intervals with a sound as offset. No modality-ordering effect is found for visual-tactile intervals. To infer whether a single modality-independent or multiple modality-dependent time-keeping mechanisms exist, we tested whether perceived duration follows a summative or a multiplicative distortion pattern by fitting a model to all modality combinations and durations. The results confirm that perceived duration depends on sensory latency (summative distortion). Instead, we did not find evidence for multiplicative distortions. The results of the model and the behavioural data support the concept of a single time-keeping mechanism that allows for judgments of durations marked by multisensory stimuli. PMID:23953664

20. MEETING DATA QUALITY OBJECTIVES WITH INTERVAL INFORMATION

EPA Science Inventory

Immunoassay test kits are promising technologies for measuring analytes under field conditions. Frequently, these field-test kits report the analyte concentrations as falling in an interval between minimum and maximum values. Many project managers use field-test kits only for scr...

Milovanovic, Gradimir V.; Cvetkovic, Aleksandar S.

2005-10-01

In this paper we prove the existence and uniqueness of the Gaussian interval quadrature formula with respect to the generalized Laguerre weight function. An algorithm for numerical construction has also been investigated and some suitable solutions are proposed. A few numerical examples are included.
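The interval quadrature of the paper generalizes the classical nodal rule by replacing nodes with small intervals; as a baseline, the ordinary Gauss-Laguerre rule for the (non-generalized) Laguerre weight e^(-x) is available directly in NumPy:

```python
import numpy as np
from numpy.polynomial.laguerre import laggauss

# Classical n-point Gauss-Laguerre rule:
#   integral_0^inf f(x) e^{-x} dx ~ sum_i w_i f(x_i),
# exact for polynomial f of degree <= 2n - 1.
nodes, weights = laggauss(5)
approx = float(np.sum(weights * nodes**2))   # f(x) = x^2; exact value is 2
```

With n = 5 the rule is exact through degree 9, so the quadratic integrand above is integrated to machine precision.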

2. Happiness Scale Interval Study. Methodological Considerations

ERIC Educational Resources Information Center

Kalmijn, W. M.; Arends, L. R.; Veenhoven, R.

2011-01-01

The Happiness Scale Interval Study deals with survey questions on happiness, using verbal response options, such as "very happy" and "pretty happy". The aim is to estimate what degrees of happiness are denoted by such terms in different questions and languages. These degrees are expressed in numerical values on a continuous [0,10] scale, which are…

3. Precise Interval Timer for Software Defined Radio

NASA Technical Reports Server (NTRS)

Pozhidaev, Aleksey (Inventor)

2014-01-01

A precise digital fractional interval timer for software-defined radios that vary their waveform on a packet-by-packet basis. The timer allows for a variable-length preamble in the RF packet and allows the boundaries of the TDMA (Time Division Multiple Access) slots of an SDR receiver to be adjusted based on the reception of the RF packet of interest.

4. Coefficient Alpha Bootstrap Confidence Interval under Nonnormality

ERIC Educational Resources Information Center

Padilla, Miguel A.; Divers, Jasmin; Newton, Matthew

2012-01-01

Three different bootstrap methods for estimating confidence intervals (CIs) for coefficient alpha were investigated. In addition, the bootstrap methods were compared with the most promising coefficient alpha CI estimation methods reported in the literature. The CI methods were assessed through a Monte Carlo simulation utilizing conditions…
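The percentile-bootstrap approach the abstract evaluates can be illustrated with a short sketch: resample subjects (rows) with replacement and take percentiles of the recomputed alpha values. This is a generic Python illustration, not the study's code; the function names and simulated data are ours.

```python
import numpy as np

def cronbach_alpha(items):
    """Coefficient alpha for an (n_subjects, k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

def bootstrap_alpha_ci(items, n_boot=2000, level=0.95, seed=0):
    """Percentile bootstrap CI for alpha: resample subjects with replacement."""
    rng = np.random.default_rng(seed)
    n = items.shape[0]
    stats = np.array([
        cronbach_alpha(items[rng.integers(0, n, n)]) for _ in range(n_boot)
    ])
    lo, hi = np.percentile(stats, [100 * (1 - level) / 2, 100 * (1 + level) / 2])
    return lo, hi
```

The other bootstrap variants compared in such studies (e.g. BCa) adjust the percentiles for bias and skew rather than reading them off directly as here.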

5. Effects of sampling interval on spatial patterns and statistics of watershed nitrogen concentration

USGS Publications Warehouse

Wu, S.-S.D.; Usery, E.L.; Finn, M.P.; Bosch, D.D.

2009-01-01

This study investigates how spatial patterns and statistics of a 30 m resolution, model-simulated, watershed nitrogen concentration surface change with sampling intervals from 30 m to 600 m, in 30 m increments, for the Little River Watershed (Georgia, USA). The results indicate that the mean, standard deviation, and variogram sills do not have consistent trends with increasing sampling intervals, whereas the variogram ranges remain constant. A sampling interval smaller than or equal to 90 m is necessary to build a representative variogram. The interpolation accuracy, clustering level, and total hot spot areas show decreasing trends approximating a logarithmic function. The trends correspond to the nitrogen variogram and start to level off at a sampling interval of 360 m, which is therefore regarded as a critical spatial scale of the Little River Watershed. Copyright © 2009 by Bellwether Publishing, Ltd. All rights reserved.
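The quantities involved (an empirical semivariogram, and summary statistics under coarser sampling) can be sketched in a few lines; this is a generic illustration, not the study's 30 m data or software, and the function names are ours.

```python
import numpy as np

def semivariogram_1d(z, max_lag):
    """Empirical semivariogram gamma(h) = 0.5 * mean((z[i+h] - z[i])^2)
    along a 1-D transect, for lags h = 1 .. max_lag."""
    z = np.asarray(z, dtype=float)
    return np.array([0.5 * np.mean((z[h:] - z[:-h]) ** 2)
                     for h in range(1, max_lag + 1)])

def subsample_stats(surface, step):
    """Mean and standard deviation of a gridded surface sampled every `step` cells,
    mimicking a coarser sampling interval."""
    s = np.asarray(surface, dtype=float)[::step, ::step]
    return s.mean(), s.std(ddof=1)
```

Increasing `step` plays the role of the widening sampling interval; whether the subsampled mean and standard deviation trend with it depends on the surface's spatial structure, consistent with the inconsistent trends the abstract reports.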

6. Generalized local-density approximation for spherical potentials

SciTech Connect

Zhang, X.; Nicholson, D.M.

1999-08-01

An alternative density functional for the spherical approximation of cell potentials is formulated. It relies on overlapping atomic spheres for the calculation of the kinetic energy, similar to the atomic sphere approximation (ASA), however, a shape correction is used that has the same form as the interstitial treatment in the nonoverlapping muffin-tin (MT) approach. The intersite Coulomb energy is evaluated using the Madelung energy as computed in the MT approach, while the on-site Coulomb energy is calculated using the ASA. The Kohn-Sham equations for the functional are then solved self-consistently. The ASA is known to give poor elastic constants and good point defect energies. Conversely the MT approach gives good elastic constants and poor point defect energies. The proposed new functional maintains the simplicity of the spherical potentials found in the ASA and MT approaches, but gives good values for both elastic constants and point defects. This solution avoids a problem, absent in the ASA but suffered by the MT approximation, of incorrect distribution of site charges when charge transfer is large. Relaxation of atomic positions is thus facilitated. Calculations confirm that the approach gives similar elastic constants to the MT approximation, and defect formation energies similar to those obtained with ASA. © 1999 The American Physical Society.

7. A Direct Method for Obtaining Approximate Standard Error and Confidence Interval of Maximal Reliability for Composites with Congeneric Measures

ERIC Educational Resources Information Center

Raykov, Tenko; Penev, Spiridon

2006-01-01

Unlike a substantial part of reliability literature in the past, this article is concerned with weighted combinations of a given set of congeneric measures with uncorrelated errors. The relationship between maximal coefficient alpha and maximal reliability for such composites is initially dealt with, and it is shown that the former is a lower…

8. An application of distributed approximating functional-wavelets to reactive scattering

SciTech Connect

Wei, G.W.; Althorpe, S.C.; Kouri, D.J.; Hoffman, D.K.

1998-05-01

A newly developed distributed approximating functional (DAF)-wavelet, the Dirichlet-Gabor DAF-wavelet (DGDW), is applied in a calculation of the state-to-state reaction probabilities for the three-dimensional (3-D) (J=0) H+H2 reaction, using the time-independent wave-packet reactant-product decoupling (TIWRPD) method. The DGDWs are reconstructed from a rigorous mathematical sampling theorem, and are shown to be DAF-wavelet generalizations of both the sine discrete variable representation (sinc-DVR) and the Fourier distributed approximating functionals (DAFs). An important feature of the generalized sinc-DVR representation is that the grid points are distributed at equally spaced intervals and the kinetic energy matrix has a banded, Toeplitz structure. Test calculations show that, in accordance with mathematical sampling theory, the DAF-windowed sinc-DVR converges much more rapidly and to higher accuracy with bandwidth, 2W+1. The results of the H+H2 calculation are in very close agreement with the results of previous TIWRPD calculations, demonstrating that the DGDW representation is an accurate and efficient representation for use in FFT wave-packet propagation methods, and that, more generally, the theory of wavelets and related techniques have great potential for the study of molecular dynamics. © 1998 American Institute of Physics.

10. An Assessment of Interval Data and Their Potential Application to Residential Electricity End-Use Modeling

EIA Publications

2015-01-01

The Energy Information Administration (EIA) is investigating the potential benefits of incorporating interval electricity data into its residential energy end use models. This includes interval smart meter and submeter data from utility assets and systems. It is expected that these data will play a significant role in informing residential energy efficiency policies in the future. Therefore, a long-term strategy for improving the RECS end-use models will not be complete without an investigation of the current state of affairs of submeter data, including their potential for use in the context of residential building energy modeling.

11. Producing approximate answers to database queries

NASA Technical Reports Server (NTRS)

Vrbsky, Susan V.; Liu, Jane W. S.

1993-01-01

We have designed and implemented a query processor, called APPROXIMATE, that makes approximate answers available if part of the database is unavailable or if there is not enough time to produce an exact answer. The accuracy of the approximate answers produced improves monotonically with the amount of data retrieved to produce the result. The exact answer is produced if all of the needed data are available and query processing is allowed to continue until completion. The monotone query processing algorithm of APPROXIMATE works within the standard relational algebra framework and can be implemented on a relational database system with little change to the relational architecture. We describe here the approximation semantics of APPROXIMATE that serves as the basis for meaningful approximations of both set-valued and single-valued queries. We show how APPROXIMATE is implemented to make effective use of semantic information, provided by an object-oriented view of the database, and describe the additional overhead required by APPROXIMATE.

12. Approximate Quantum Cloaking and Almost-Trapped States

SciTech Connect

Greenleaf, Allan; Kurylev, Yaroslav; Lassas, Matti; Uhlmann, Gunther

2008-11-28

We describe potentials which act as approximate cloaks for matter waves. These potentials are derived from ideal cloaks for the conductivity and Helmholtz equations. At most energies E, if a potential is surrounded by an approximate cloak, then it becomes almost undetectable and unaltered by matter waves originating externally to the cloak. For certain E, however, the approximate cloaks are resonant, supporting wave functions almost trapped inside the cloaked region and negligible outside. Applications include dc or magnetically tunable ion traps and beam switches.

13. Interval Throwing and Hitting Programs in Baseball: Biomechanics and Rehabilitation.

PubMed

Chang, Edward S; Bishop, Meghan E; Baker, Dylan; West, Robin V

2016-01-01

Baseball injuries from throwing and hitting generally occur as a consequence of the repetitive and high-energy motions inherent to the sport. Biomechanical studies have contributed to understanding the pathomechanics leading to injury and to the development of rehabilitation programs. Interval-based throwing and hitting programs are designed to return an athlete to competition through a gradual progression of sport-specific exercises. Proper warm-up and strict adherence to the program allow the athlete to return as quickly and safely as possible. PMID:26991569

14. The Rotator Interval of the Shoulder

PubMed Central

Frank, Rachel M.; Taylor, Dean; Verma, Nikhil N.; Romeo, Anthony A.; Mologne, Timothy S.; Provencher, Matthew T.

2015-01-01

Biomechanical studies have shown that repair or plication of rotator interval (RI) ligamentous and capsular structures decreases glenohumeral joint laxity in various directions. Clinical outcomes studies have reported successful outcomes after repair or plication of these structures in patients undergoing shoulder stabilization procedures. Recent studies describing arthroscopic techniques to address these structures have intensified the debate over the potential benefit of these procedures as well as highlighted the differences between open and arthroscopic RI procedures. The purposes of this study were to review the structures of the RI and their contribution to shoulder instability, to discuss the biomechanical and clinical effects of repair or plication of rotator interval structures, and to describe the various surgical techniques used for these procedures and outcomes. PMID:26779554

15. Constraint-based Attribute and Interval Planning

NASA Technical Reports Server (NTRS)

Jonsson, Ari; Frank, Jeremy

2013-01-01

In this paper we describe Constraint-based Attribute and Interval Planning (CAIP), a paradigm for representing and reasoning about plans. The paradigm enables the description of planning domains with time, resources, concurrent activities, mutual exclusions among sets of activities, disjunctive preconditions and conditional effects. We provide a theoretical foundation for the paradigm, based on temporal intervals and attributes. We then show how the plans are naturally expressed by networks of constraints, and show that the process of planning maps directly to dynamic constraint reasoning. In addition, we define compatibilities, a compact mechanism for describing planning domains. We describe how this framework can incorporate the use of constraint reasoning technology to improve planning. Finally, we describe EUROPA, an implementation of the CAIP framework.

16. Efficient computation of parameter confidence intervals

NASA Technical Reports Server (NTRS)

Murphy, Patrick C.

1987-01-01

An important step in system identification of aircraft is the estimation of stability and control derivatives from flight data along with an assessment of parameter accuracy. When the maximum likelihood estimation technique is used, parameter accuracy is commonly assessed by the Cramer-Rao lower bound. It is known, however, that in some cases the lower bound can be substantially different from the parameter variance. Under these circumstances the Cramer-Rao bounds may be misleading as an accuracy measure. This paper discusses the confidence interval estimation problem based on likelihood ratios, which offers a more general estimate of the error bounds. Four approaches are considered for computing confidence intervals of maximum likelihood parameter estimates. Each approach is applied to real flight data and compared.
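The likelihood-ratio interval idea can be sketched generically: keep every parameter value not rejected by the statistic 2[l(θ̂) − l(θ)] ≤ χ²₁,₀.₉₅ ≈ 3.841. The toy model below (normal mean with known unit variance, grid inversion) is our own illustration, not the paper's flight-data estimators.

```python
import numpy as np

CHI2_1_95 = 3.841  # 95% quantile of the chi-square distribution, 1 d.o.f.

def lr_confidence_interval(loglik, theta_grid):
    """Invert the likelihood-ratio test over a grid: retain all theta with
    2 * (max log-likelihood - loglik(theta)) <= CHI2_1_95."""
    ll = np.array([loglik(t) for t in theta_grid])
    keep = theta_grid[2.0 * (ll.max() - ll) <= CHI2_1_95]
    return keep.min(), keep.max()

# Toy example: mean of normal data with known sigma = 1.
rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, size=100)

def loglik(mu):
    return -0.5 * np.sum((x - mu) ** 2)

grid = np.linspace(0.0, 4.0, 2001)
lo, hi = lr_confidence_interval(loglik, grid)
```

For this symmetric model the LR interval coincides with the Wald interval x̄ ± 1.96/√n; the appeal of the LR construction, as the abstract notes, is that it stays meaningful when the likelihood is asymmetric and the Cramer-Rao bound is misleading.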

17. A consistent collinear triad approximation for operational wave models

Salmon, J. E.; Smit, P. B.; Janssen, T. T.; Holthuijsen, L. H.

2016-08-01

In shallow water, the spectral evolution associated with energy transfers due to three-wave (or triad) interactions is important for the prediction of nearshore wave propagation and wave-driven dynamics. The numerical evaluation of these nonlinear interactions involves the evaluation of a weighted convolution integral in both frequency and directional space for each frequency-direction component in the wave field. For reasons of efficiency, operational wave models often rely on a so-called collinear approximation that assumes that energy is only exchanged between wave components travelling in the same direction (collinear propagation) to eliminate the directional convolution. In this work, we show that the collinear approximation as presently implemented in operational models is inconsistent. This causes energy transfers to become unbounded in the limit of unidirectional waves (narrow aperture), and results in the underestimation of energy transfers in short-crested wave conditions. We propose a modification to the collinear approximation to remove this inconsistency and to make it physically more realistic. Through comparison with laboratory observations and results from Monte Carlo simulations, we demonstrate that the proposed modified collinear model is consistent, remains bounded, smoothly converges to the unidirectional limit, and is numerically more robust. Our results show that the modifications proposed here result in a consistent collinear approximation, which remains bounded and can provide an efficient approximation to model nonlinear triad effects in operational wave models.

18. One-way ANOVA based on interval information

Hesamian, Gholamreza

2016-08-01

This paper deals with extending the one-way analysis of variance (ANOVA) to the case where the observed data are represented by closed intervals rather than real numbers. In this approach, a notion of interval random variable is first introduced. In particular, a normal distribution with interval parameters is introduced to investigate hypotheses about the equality of interval means or to test the homogeneity-of-interval-variances assumption. Moreover, the least significant difference (LSD) method for multiple comparisons of interval means is developed for the case where the null hypothesis about the equality of means is rejected. Then, at a given interval significance level, an index is applied to compare the interval test statistic with the related interval critical value as a criterion to accept or reject the null interval hypothesis of interest. Finally, the method of decision-making yields degrees of acceptance or rejection of the interval hypotheses. An applied example is used to show the performance of this method.

19. Systolic Time Intervals and New Measurement Methods.

PubMed

Tavakolian, Kouhyar

2016-06-01

Systolic time intervals have been used to detect and quantify the directional changes of left ventricular function. New methods of recording these cardiac timings, which are less cumbersome, have been recently developed and this has created a renewed interest and novel applications for these cardiac timings. This manuscript reviews these new methods and addresses the potential for the application of these cardiac timings for the diagnosis and prognosis of different cardiac diseases. PMID:27048269

20. Quantifying chaotic dynamics from interspike intervals

Pavlov, A. N.; Pavlova, O. N.; Mohammad, Y. K.; Shihalov, G. M.

2015-03-01

We address the problem of characterization of chaotic dynamics at the input of a threshold device described by an integrate-and-fire (IF) or a threshold crossing (TC) model from the output sequences of interspike intervals (ISIs). We consider the conditions under which quite short sequences of spiking events provide correct identification of the dynamical regime characterized by the single positive Lyapunov exponent (LE). We discuss features of detecting the second LE for both types of the considered models of events generation.
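The two event-generation schemes named here, integrate-and-fire (IF) and threshold crossing (TC), can be sketched in a few lines. The chaotic input below is a logistic map chosen purely for illustration; the function names and parameter values are ours, not the paper's models.

```python
import numpy as np

def logistic_series(n, x0=0.3, r=4.0):
    """Chaotic test signal: the logistic map x -> r*x*(1-x) at r = 4."""
    x = np.empty(n)
    x[0] = x0
    for i in range(1, n):
        x[i] = r * x[i - 1] * (1 - x[i - 1])
    return x

def threshold_crossing_isis(x, theta):
    """TC model: an event fires whenever the signal crosses theta from below;
    the interspike intervals are the gaps between successive events."""
    crossings = np.where((x[:-1] < theta) & (x[1:] >= theta))[0] + 1
    return np.diff(crossings)

def integrate_and_fire_isis(x, theta):
    """IF model: accumulate the signal; fire and reset when the sum reaches theta."""
    isis, acc, last = [], 0.0, 0
    for i, v in enumerate(x):
        acc += v
        if acc >= theta:
            isis.append(i - last)
            last, acc = i, 0.0
    return np.array(isis)
```

The question the paper addresses is how long such an ISI sequence must be before the positive Lyapunov exponent of the *input* dynamics can be recovered from it.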

1. Fluctuations of healthy and unhealthy heartbeat intervals

Lan, Boon Leong; Toda, Mikito

2013-04-01

We show that the RR-interval fluctuations, defined as the difference between successive natural logarithms of the RR interval, for healthy, congestive-heart-failure (CHF) and atrial-fibrillation (AF) subjects are well modeled by non-Gaussian stable distributions. Our results suggest that healthy or unhealthy RR-interval fluctuation can generally be modeled as a sum of a large number of independent physiological effects which are identically distributed with infinite variance. Furthermore, we show for the first time that one indicator, the scale parameter of the stable distribution, is sufficient to robustly distinguish the three groups of subjects. The scale parameters for healthy subjects are smaller than those for AF subjects but larger than those for CHF subjects; this ordering suggests that the scale parameter could be used to objectively quantify the severity of CHF and AF over time and also to serve as an early warning signal for a healthy person when it approaches either boundary of the healthy range.
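The fluctuation series defined in the abstract is simple to compute. Fitting a stable distribution itself requires numerical maximum likelihood (e.g. `scipy.stats.levy_stable`), so this sketch stops at the fluctuation series plus a crude robust spread proxy; the proxy is our illustration, not an estimator of the paper's stable scale parameter.

```python
import numpy as np

def rr_fluctuations(rr):
    """Fluctuation series: differences of successive natural logs of RR intervals."""
    rr = np.asarray(rr, dtype=float)
    return np.diff(np.log(rr))

def robust_scale(f):
    """Half the interquartile range: an outlier-resistant spread measure,
    usable with heavy-tailed (infinite-variance) data where np.std is not."""
    q1, q3 = np.percentile(f, [25, 75])
    return 0.5 * (q3 - q1)
```

An IQR-based measure is preferred over the sample standard deviation here because, for the stable distributions the paper fits, the population variance is infinite and the sample variance does not converge.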

2. New Madrid seismic zone recurrence intervals

SciTech Connect

Schweig, E.S. (Center for Earthquake Research and Information, Memphis, TN); Ellis, M.A.

1993-03-01

Frequency-magnitude relations in the New Madrid seismic zone suggest that great earthquakes should occur every 700-1,200 yrs, implying relatively high strain rates. These estimates are supported by some geological and GPS results. Recurrence intervals of this order should have produced about 50 km of strike-slip offset since Miocene time. No subsurface evidence for such large displacements is known within the seismic zone. Moreover, the irregular fault pattern forming a compressive step that one sees today is not compatible with large displacements. There are at least three possible interpretations of the observations of short recurrence intervals and high strain rates alongside an apparently youthful fault geometry and a lack of major post-Miocene deformation. One is that the seismological and geodetic evidence are misleading. A second possibility is that activity in the region is cyclic. That is, the geological and geodetic observations that suggest relatively short recurrence intervals reflect a time of high, but geologically temporary, pore-fluid pressure. Zoback and Zoback have suggested such a model for intraplate seismicity in general. Alternatively, the New Madrid seismic zone is a geologically young feature that has been active for only the last few tens of thousands of years. In support of this, we observe an irregular fault geometry associated with an unstable compressive step, a series of en echelon and discontinuous lineaments that may define the position of a youthful linking fault, and the general absence of significant post-Eocene faulting or topography.

3. Bond selective chemistry beyond the adiabatic approximation

SciTech Connect

Butler, L.J.

1993-12-01

One of the most important challenges in chemistry is to develop predictive ability for the branching between energetically allowed chemical reaction pathways. Such predictive capability, coupled with a fundamental understanding of the important molecular interactions, is essential to the development and utilization of new fuels and the design of efficient combustion processes. Existing transition state and exact quantum theories successfully predict the branching between available product channels for systems in which each reaction coordinate can be adequately described by different paths along a single adiabatic potential energy surface. In particular, unimolecular dissociation following thermal, infrared multiphoton, or overtone excitation in the ground state yields a branching between energetically allowed product channels which can be successfully predicted by the application of statistical theories, i.e. the weakest bond breaks. (The predictions are particularly good for competing reactions in which there is no saddle point along the reaction coordinates, as in simple bond fission reactions.) The predicted lack of bond selectivity results from the assumption of rapid internal vibrational energy redistribution and the implicit use of a single adiabatic Born-Oppenheimer potential energy surface for the reaction. However, the adiabatic approximation is not valid for the reaction of a wide variety of energetic materials and organic fuels; coupling between the electronic states of the reacting species plays a key role in determining the selectivity of the chemical reactions induced. The work described below investigated the central role played by coupling between electronic states in polyatomic molecules in determining the selective branching between energetically allowed fragmentation pathways in two key systems.

4. An approximation technique for jet impingement flow

SciTech Connect

Najafi, Mahmoud; Fincher, Donald; Rahni, Taeibi; Javadi, KH.; Massah, H.

2015-03-10

The analytical approximate solution of a non-linear jet impingement flow model will be demonstrated. We will show that this is an improvement over the series approximation obtained via the Adomian decomposition method, which is itself, a powerful method for analysing non-linear differential equations. The results of these approximations will be compared to the Runge-Kutta approximation in order to demonstrate their validity.

5. Comparison of two Pareto frontier approximations

Berezkin, V. E.; Lotov, A. V.

2014-09-01

A method for comparing two approximations to the multidimensional Pareto frontier in nonconvex nonlinear multicriteria optimization problems, namely the inclusion functions method, is described. A feature of the method is that Pareto frontier approximations are compared by computing and comparing inclusion functions that show which fraction of points of one Pareto frontier approximation is contained in the neighborhood of the Edgeworth-Pareto hull approximation for the other Pareto frontier.
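The core comparison, the fraction of one approximation's points lying in a neighborhood of the other, can be sketched as follows. This is a simplified stand-in: the actual method measures inclusion in a neighborhood of the Edgeworth-Pareto hull approximation, which is not reconstructed here, and the function name is ours.

```python
import numpy as np

def inclusion_fraction(A, B, eps):
    """Fraction of points of approximation A lying within Chebyshev
    (max-coordinate) distance eps of some point of approximation B."""
    A = np.asarray(A, dtype=float)
    B = np.asarray(B, dtype=float)
    hits = sum(1 for p in A if np.max(np.abs(B - p), axis=1).min() <= eps)
    return hits / len(A)
```

Computing `inclusion_fraction(A, B, eps)` and `inclusion_fraction(B, A, eps)` over a range of `eps` gives an asymmetric picture of how closely the two frontier approximations cover each other.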

6. Fractal Trigonometric Polynomials for Restricted Range Approximation

Chand, A. K. B.; Navascués, M. A.; Viswanathan, P.; Katiyar, S. K.

2016-05-01

One-sided approximation tackles the problem of approximation of a prescribed function by simple traditional functions such as polynomials or trigonometric functions that lie completely above or below it. In this paper, we use the concept of fractal interpolation function (FIF), precisely of fractal trigonometric polynomials, to construct one-sided uniform approximants for some classes of continuous functions.

7. An approximate quantal treatment to obtain the energy levels of tetra-atomic X···I2···Y van der Waals clusters (X,Y=He,Ne)

García-Vela, A.; Villarreal, P.; Delgado-Barrio, G.

1990-01-01

The structure of tetra-atomic X···I2···Y van der Waals (vdW) clusters, where X,Y=He,Ne, is studied using an approximate quantal treatment. In this model the above complexes are treated as diatomic-like molecules, with the rare-gas atoms playing the role of electrons in conventional diatomics. A H2-like molecular-orbital formalism is then applied, choosing the discrete states of the triatomic systems I2···X(Y) as molecular orbitals. Calculations at fixed configurations, as well as including vdW bending motions restricted to the plane perpendicular to the I2 axis, have been carried out for the sake of comparison with previous results. Finally, the restrictions are relaxed and the vdW bending motions are incorporated fully within the framework of a configuration interaction. The structure of these clusters is also studied through the probability density function.

8. Some approximation theorems for a general class of discrete type operators in spaces with a polynomial weight

Magnucka-Blandzi, Ewa; Walczak, Zbigniew

2016-06-01

The paper is devoted to the approximation of real-valued functions defined on an unbounded interval by linear and non-linear operators. Special attention is paid to defining the class of operators and to examining certain of their approximation properties. The form of the class of operators given in the present paper makes the achieved results more helpful from the computational point of view.

9. Probable detection of solar neutrons by ground-level neutron monitors during STIP interval 16

NASA Technical Reports Server (NTRS)

Shea, M. A.; Smart, D. F.; Flueckiger, E. O.

1987-01-01

The third solar neutron event detected by Earth-orbiting spacecraft was observed during STIP Interval XVI. The solar flare beginning at 2356 UT on 24 April 1984 produced a variety of emissions including gamma rays and solar neutrons. The neutrons were observed by the SMM satellite and the neutron-decay protons were observed on the ISEE-3 spacecraft. Between 0000 and 0010 UT on 25 April, increases of 0.7 and 1.7 percent were recorded by neutron monitors at Tokyo (Itabashi) and Morioka, Japan. These stations were located about 42 degrees from the sub-solar point; consequently, there are approximately 1400 grams of atmosphere between the incident neutrons at the top of the atmosphere and their detection on the Earth's surface. Nevertheless, the time coincidence of a small increase in the total counting rate of two independent neutron monitors indicates the presence of solar neutrons with energies greater than 400 MeV at the top of the Earth's atmosphere. The small increases in the counting rate emphasize the difficulty in identifying similar events using historical neutron monitor data.

10. A unified approach to the Darwin approximation

SciTech Connect

Krause, Todd B.; Apte, A.; Morrison, P. J.

2007-10-15

There are two basic approaches to the Darwin approximation. The first involves solving the Maxwell equations in Coulomb gauge and then approximating the vector potential to remove retardation effects. The second approach approximates the Coulomb gauge equations themselves, then solves these exactly for the vector potential. There is no a priori reason that these should result in the same approximation. Here, the equivalence of these two approaches is investigated and a unified framework is provided in which to view the Darwin approximation. Darwin's original treatment is variational in nature, but subsequent applications of his ideas in the context of Vlasov's theory are not. We present here action principles for the Darwin approximation in the Vlasov context, and this serves as a consistency check on the use of the approximation in this setting.

11. Easy identification of generalized common and conserved nested intervals.

PubMed

de Montgolfier, Fabien; Raffinot, Mathieu; Rusu, Irena

2014-07-01

In this article we explain how to easily compute gene clusters, formalized by classical or generalized nested common or conserved intervals, between a set of K genomes represented as K permutations. A b-nested common (resp. conserved) interval I of size |I| is either an interval of size 1 or a common (resp. conserved) interval that contains another b-nested common (resp. conserved) interval of size at least |I|-b. When b=1, this corresponds to the classical notion of nested interval. We exhibit two simple algorithms to output all b-nested common or conserved intervals between K permutations in O(Kn+nocc) time, where nocc is the total number of such intervals. We also explain how to count all b-nested intervals in O(Kn) time. New properties of the family of conserved intervals are proposed to do so. PMID:24650221

12. Approximation of functions by asymmetric two-point hermite polynomials and its optimization

Shustov, V. V.

2015-12-01

A function is approximated by two-point Hermite interpolating polynomials with an asymmetric orders-of-derivatives distribution at the endpoints of the interval. The local error estimate is examined theoretically and numerically. As a result, the position of the maximum of the error estimate is shown to depend on the ratio of the numbers of conditions imposed on the function and its derivatives at the endpoints of the interval. The shape of a universal curve representing a reduced error estimate is found. Given the sum of the orders of derivatives at the endpoints of the interval, the orders-of-derivatives distribution is optimized so as to minimize the approximation error. A sufficient condition for the convergence of a sequence of general two-point Hermite polynomials to a given function is given.
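A two-point Hermite polynomial with p conditions at the left endpoint and q at the right (asymmetric p, q) can be constructed with Newton divided differences on repeated nodes; the sketch below is the generic textbook construction, not the paper's code, and assumes the endpoints are distinct.

```python
import numpy as np
from math import factorial

def two_point_hermite(a, da, b, db):
    """Newton-form coefficients of the polynomial matching da[j] = f^(j)(a)
    for j < len(da) and db[j] = f^(j)(b) for j < len(db); requires a != b."""
    nodes = [a] * len(da) + [b] * len(db)
    n = len(nodes)
    table = np.zeros((n, n))
    for i, x in enumerate(nodes):
        table[i, 0] = da[0] if x == a else db[0]
    for j in range(1, n):
        for i in range(n - j):
            if nodes[i + j] == nodes[i]:
                # run of repeated nodes: divided difference is f^(j)/j!
                d = da if nodes[i] == a else db
                table[i, j] = d[j] / factorial(j)
            else:
                table[i, j] = (table[i + 1, j - 1] - table[i, j - 1]) \
                              / (nodes[i + j] - nodes[i])
    return nodes, table[0, :]

def eval_newton(nodes, coef, x):
    """Horner-style evaluation of the Newton form."""
    r = coef[-1]
    for c, xn in zip(coef[-2::-1], nodes[-2::-1]):
        r = r * (x - xn) + c
    return r
```

Varying how the total number of conditions p + q is split between the endpoints, as the paper does, shifts the maximum of the local error term proportional to (x - a)^p (b - x)^q.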

13. Best approximation of Gaussian neural networks with nodes uniformly spaced.

PubMed

Mulero-Martinez, J I

2008-02-01

This paper is aimed at exposing the reader to certain aspects in the design of the best approximants with Gaussian radial basis functions (RBFs). The class of functions to which this approach applies consists of those compactly supported in frequency. The approximative properties of uniqueness and existence are restricted to this class. Functions which are smooth enough can be expanded in Gaussian series converging uniformly to the objective function. The uniqueness of these series is demonstrated by the context of the orthonormal basis in a Hilbert space. Furthermore, the best approximation to a given band-limited function from a truncated Gaussian series is analyzed by an energy-based argument. This analysis not only gives a theoretical proof concerned with the existence of best approximations but addresses the problems of architectural selection. Specifically, guidance for selecting the variance and the oversampling parameters is provided for practitioners. PMID:18269959

14. Generalized eikonal approximation for strong-field ionization

Cajiao Vélez, F.; Krajewska, K.; Kamiński, J. Z.

2015-05-01

We develop the eikonal perturbation theory to describe the strong-field ionization by finite laser pulses. This approach in the first order with respect to the binding potential (the so-called generalized eikonal approximation) avoids a singularity at the potential center. Thus, in contrast to the ordinary eikonal approximation, it allows one to treat rescattering phenomena in terms of quantum trajectories. We demonstrate how the first Born approximation and its domain of validity follow from eikonal perturbation theory. Using this approach, we study the coherent interference patterns in photoelectron energy spectra and their modifications induced by the interaction of photoelectrons with the atomic potential. Along with these first results, we discuss the prospects of using the generalized eikonal approximation to study strong-field ionization from multicentered atomic systems and to study other strong-field phenomena.

15. Approximating the Helium Wavefunction in Positronium-Helium Scattering

NASA Technical Reports Server (NTRS)

DiRienzi, Joseph; Drachman, Richard J.

2003-01-01

In the Kohn variational treatment of the positronium-hydrogen scattering problem the scattering wave function is approximated by an expansion in some appropriate basis set, but the target and projectile wave functions are known exactly. In the positronium-helium case, however, a difficulty immediately arises in that the wave function of the helium target atom is not known exactly, and there are several ways to deal with the associated eigenvalue in formulating the variational scattering equations to be solved. In this work we will use the Kohn variational principle in the static exchange approximation to determine the zero-energy scattering length for the Ps-He system, using a suite of approximate target functions. The results we obtain will be compared with each other and with corresponding values found by other approximation techniques.

16. Non-ideal boson system in the Gaussian approximation

SciTech Connect

Tommasini, P.R.; de Toledo Piza, A.F.

1997-01-01

We investigate ground-state and thermal properties of a system of non-relativistic bosons interacting through repulsive, two-body interactions in a self-consistent Gaussian mean-field approximation which consists in writing the variationally determined density operator as the most general Gaussian functional of the quantized field operators. Finite temperature results are obtained in a grand canonical framework. Contact is made with the results of Lee, Yang, and Huang in terms of particular truncations of the Gaussian approximation. The full Gaussian approximation supports a free phase or a thermodynamically unstable phase when contact forces and a standard renormalization scheme are used. When applied to a Hamiltonian with zero range forces interpreted as an effective theory with a high momentum cutoff, the full Gaussian approximation generates a quasi-particle spectrum having an energy gap, in conflict with perturbation theory results. {copyright} 1997 Academic Press, Inc.

17. Automatic Abstraction for Intervals Using Boolean Formulae

Brauer, Jörg; King, Andy

Traditionally, transfer functions have been manually designed for each operation in a program. Recently, however, there has been growing interest in computing transfer functions, motivated by the desire to reason about sequences of operations that constitute basic blocks. This paper focuses on deriving transfer functions for intervals - possibly the most widely used numeric domain - and shows how they can be computed from Boolean formulae which are derived through bit-blasting. This approach is entirely automatic, avoids complicated elimination algorithms, and provides a systematic way of handling wrap-arounds (integer overflows and underflows) which arise in machine arithmetic.
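As a minimal illustration of the wrap-around problem the paper addresses, the sketch below hand-writes an interval transfer function for unsigned 8-bit addition. This is a plain interval domain, not the paper's Boolean-formula/bit-blasting derivation, and all names are illustrative:

```python
BITS = 8
MOD = 1 << BITS
TOP = (0, MOD - 1)   # "any machine value" -- the least precise interval

def add_interval_u8(a, b):
    """Hypothetical transfer function for unsigned 8-bit addition on
    intervals (lo, hi). If the exact sum spans >= 2**BITS values, every
    machine result is reachable, so return TOP; otherwise wrap both
    endpoints. If wrapping splits the interval (wrapped lo > hi), this
    simple non-wrapping domain also falls back to TOP."""
    lo, hi = a[0] + b[0], a[1] + b[1]
    if hi - lo + 1 >= MOD:
        return TOP
    lo, hi = lo % MOD, hi % MOD
    return (lo, hi) if lo <= hi else TOP

assert add_interval_u8((1, 2), (3, 4)) == (4, 6)            # no overflow
assert add_interval_u8((200, 210), (100, 110)) == (44, 64)  # both ends wrap
assert add_interval_u8((0, 200), (0, 200)) == TOP           # width >= 256
```

The middle case shows why wrap-arounds need systematic handling: the exact sum [300, 320] lies entirely past 255, so the sound machine-level result is [44, 64], which naive unbounded-integer intervals would miss.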

18. Hydration thermodynamics beyond the linear response approximation.

PubMed

Raineri, Fernando O

2016-10-19

The solvation energetics associated with the transformation of a solute molecule at infinite dilution in water from an initial state A to a final state B is reconsidered. The two solute states have different potential energies of interaction, [Formula: see text] and [Formula: see text], with the solvent environment. Throughout the A [Formula: see text] B transformation of the solute, the solvation system is described by a Hamiltonian [Formula: see text] that changes linearly with the coupling parameter ξ. By focusing on the characterization of the probability density [Formula: see text] that the dimensionless perturbational solute-solvent interaction energy [Formula: see text] has numerical value y when the coupling parameter is ξ, we derive a hierarchy of differential equation relations between the ξ-dependent cumulant functions of various orders in the expansion of the appropriate cumulant generating function. On the basis of this theoretical framework we then introduce an inherently nonlinear solvation model for which we are able to find analytical results for both [Formula: see text] and for the solvation thermodynamic functions. The solvation model is based on the premise that there is an upper or a lower bound (depending on the nature of the interactions considered) to the amplitude of the fluctuations of Y in the solution system at equilibrium. The results reveal essential differences in behavior for the model when compared with the linear response approximation to solvation, particularly with regard to the probability density [Formula: see text]. The analytical expressions for the solvation properties show, however, that the linear response behavior is recovered from the new model when the room for the thermal fluctuations in Y is not restricted by the existence of a nearby bound. We compare the predictions of the model with the results from molecular dynamics computer simulations for aqueous solvation, in which either (1) the solute

19. Approximate Green's function methods for HZE transport in multilayered materials

NASA Technical Reports Server (NTRS)

Wilson, John W.; Badavi, Francis F.; Shinn, Judy L.; Costen, Robert C.

1993-01-01

A nonperturbative analytic solution of the high charge and energy (HZE) Green's function is used to implement a computer code for laboratory ion beam transport in multilayered materials. The code is established to operate on the Langley nuclear fragmentation model used in engineering applications. Computational procedures are established to generate linear energy transfer (LET) distributions for a specified ion beam and target for comparison with experimental measurements. The code was found to be highly efficient and compared well with the perturbation approximation.

20. Dynamical Vertex Approximation for the Hubbard Model

Toschi, Alessandro

A full understanding of correlated electron systems in the physically relevant situations of three and two dimensions represents a challenge for contemporary condensed matter theory. However, in recent years considerable progress has been achieved by means of increasingly powerful quantum many-body algorithms, applied to the basic model for correlated electrons, the Hubbard Hamiltonian. Here, I will review the physics emerging from studies performed with the dynamical vertex approximation, which includes diagrammatic corrections to the local description of the dynamical mean field theory (DMFT). In particular, I will first discuss the phase diagram in three dimensions with a special focus on the commensurate and incommensurate magnetic phases, their (quantum) critical properties, and the impact of fluctuations on electronic lifetimes and spectral functions. In two dimensions, the effects of non-local fluctuations beyond DMFT grow enormously, determining the appearance of a low-temperature insulating behavior for all values of the interaction in the unfrustrated model: Here the prototypical features of the Mott-Hubbard metal-insulator transition, as well as the existence of magnetically ordered phases, are completely overwhelmed by antiferromagnetic fluctuations of exponentially large extension, in accordance with the Mermin-Wagner theorem. Eventually, by a fluctuation diagnostics analysis of cluster DMFT self-energies, the same magnetic fluctuations are identified as responsible for the pseudogap regime in the hole-doped frustrated case, with important implications for the theoretical modeling of the cuprate physics.

1. Approximate von Neumann entropy for directed graphs.

PubMed

Ye, Cheng; Wilson, Richard C; Comin, César H; Costa, Luciano da F; Hancock, Edwin R

2014-05-01

In this paper, we develop an entropy measure for assessing the structural complexity of directed graphs. Although there are many existing alternative measures for quantifying the structural properties of undirected graphs, there are relatively few corresponding measures for directed graphs. To fill this gap in the literature, we explore an alternative technique that is applicable to directed graphs. We commence by using Chung's generalization of the Laplacian of a directed graph to extend the computation of von Neumann entropy from undirected to directed graphs. We provide a simplified form of the entropy which can be expressed in terms of simple node in-degree and out-degree statistics. Moreover, we find approximate forms of the von Neumann entropy that apply to both weakly and strongly directed graphs, and that can be used to characterize network structure. We illustrate the usefulness of these simplified entropy forms defined in this paper on both artificial and real-world data sets, including structures from protein databases and high energy physics theory citation networks. PMID:25353841

2. Magnetic reconnection under anisotropic magnetohydrodynamic approximation

SciTech Connect

Hirabayashi, K.; Hoshino, M.

2013-11-15

We study the formation of slow-mode shocks in collisionless magnetic reconnection by using one- and two-dimensional collisionless MHD codes based on the double adiabatic approximation and the Landau closure model. We bridge the gap between the Petschek-type MHD reconnection model accompanied by a pair of slow shocks and the observational evidence of the rare occasion of in-situ slow shock observations. Our results showed that once magnetic reconnection takes place, a firehose-sense (p{sub ∥}>p{sub ⊥}) pressure anisotropy arises in the downstream region, and the generated slow shocks are quite weak compared with those in an isotropic MHD. In spite of the weakness of the shocks, however, the resultant reconnection rate is 10%–30% higher than that in an isotropic case. This result implies that the slow shock does not necessarily play an important role in the energy conversion in the reconnection system and is consistent with the satellite observation in the Earth's magnetosphere.

3. Analytic Approximations for the Extrapolation of Lattice Data

SciTech Connect

Masjuan, Pere

2010-12-22

We present analytic approximations of chiral SU(3) amplitudes for the extrapolation of lattice data to the physical masses and the determination of Next-to-Next-to-Leading-Order low-energy constants. Lattice data for the ratio F{sub K}/F{sub {pi}} is used to test the method.

4. Approximate Approaches to the One-Dimensional Finite Potential Well

ERIC Educational Resources Information Center

Singh, Shilpi; Pathak, Praveen; Singh, Vijay A.

2011-01-01

The one-dimensional finite well is a textbook problem. We propose approximate approaches to obtain the energy levels of the well. The finite well is also encountered in semiconductor heterostructures where the carrier mass inside the well (m[subscript i]) is taken to be distinct from mass outside (m[subscript o]). A relevant parameter is the mass…

5. Spin-1 Heisenberg ferromagnet using pair approximation method

Mert, Murat; Kılıç, Ahmet; Mert, Gülistan

2016-06-01

Thermodynamic properties of the Heisenberg ferromagnet with spin-1 on the simple cubic lattice have been calculated using the pair approximation method. We introduce the single-ion anisotropy and the next-nearest-neighbor exchange interaction. We find that for a negative single-ion anisotropy parameter, the internal energy is positive and the heat capacity has two peaks.

6. Diffusion approximation for modeling of 3-D radiation distributions

SciTech Connect

Zardecki, A.; Gerstl, S.A.W.; De Kinder, R.E. Jr.

1985-01-01

A three-dimensional transport code DIF3D, based on the diffusion approximation, is used to model the spatial distribution of radiation energy arising from volumetric isotropic sources. Future work will be concerned with the determination of irradiances and modeling of realistic scenarios, relevant to the battlefield conditions. 8 refs., 4 figs.

7. Optimal ABC inventory classification using interval programming

Rezaei, Jafar; Salimi, Negin

2015-08-01

Inventory classification is one of the most important activities in inventory management, whereby inventories are classified into three or more classes. Several inventory classifications have been proposed in the literature, almost all of which share two main shortcomings: the previous methods mainly rely on expert opinion to derive the importance of the classification criteria, which results in subjective classification, and they need precise item parameters before the classification can be implemented. While the problem has predominantly been treated as a multi-criteria one, we examine it from a different perspective, proposing a novel optimisation model for ABC inventory classification in the form of an interval programming problem. The proposed interval programming model has two important features compared to the existing methods: it provides optimal results instead of an expert-based classification, and it does not require precise values of the item parameters, which are not always available before classification. Finally, the proposed classification model is illustrated with a numerical example, and conclusions and suggestions for future work are presented.

8. Approximate iterative operator method for potential-field downward continuation

Tai, Zhenhua; Zhang, Fengxu; Zhang, Fengqin; Hao, Mengcheng

2016-05-01

An approximate iterative operator method in the wavenumber domain is proposed to improve the stability and accuracy of downward continuation of potential fields measured from the ground surface, at sea, or from the air. Firstly, the generalized iterative formula of downward continuation is derived in the wavenumber domain; then, the transformational relationship between horizontal second-order partial derivatives and continuation is derived based on the Taylor series and the Laplace equation, to obtain an approximate operator. By introducing this operator into the generalized iterative formula, a rapid algorithm is developed for downward continuation. The filtering and convergence characteristics of this method are analyzed for the purpose of estimating the optimal interval for the number of iterations. We demonstrate the proposed method on synthetic data, and the results validate its flexibility. Finally, we apply the proposed method to real data, and the results show that it can enhance gravity anomalies generated by concealed orebodies. Moreover, when the downward-continued results are continued back upward to the measurement level, they approximately reproduce the distribution and amplitude of the original anomalies.

9. Happiness Scale Interval Study. Methodological Considerations.

PubMed

Kalmijn, W M; Arends, L R; Veenhoven, R

2011-07-01

The Happiness Scale Interval Study deals with survey questions on happiness, using verbal response options, such as 'very happy' and 'pretty happy'. The aim is to estimate what degrees of happiness are denoted by such terms in different questions and languages. These degrees are expressed in numerical values on a continuous [0,10] scale, which are then used to compute 'transformed' means and standard deviations. Transforming scores on different questions to the same scale allows the World Database of Happiness to be broadened considerably. The central purpose of the Happiness Scale Interval Study is to identify the happiness values at which respondents change their judgment from e.g. 'very happy' to 'pretty happy' or the reverse. This paper deals with the methodological/statistical aspects of this approach. The central question is always how to convert the frequencies at which a sample gives the different possible responses to the same question into information on the happiness distribution in the relevant population. The primary (cl)aim of this approach is to achieve this in a (more) valid way. To this end, a model is introduced that allows for dealing with happiness as a latent continuous random variable, in spite of the fact that it is measured as a discrete one. The [0,10] scale is partitioned into as many contiguous parts as there are possible ratings in the primary scale. Any subject with a (self-perceived) happiness in the same subinterval is assumed to select the same response. For the probability density function of this happiness random variable, two options are discussed. The first one postulates a uniform distribution within each of the different subintervals of the [0,10] scale. On the basis of these results, the mean value and variance of the complete distribution can be estimated. The method is described, including the precision of the estimates obtained in this way. The second option assumes the happiness distribution to be described
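The first option, a uniform density within each subinterval, leads to simple closed-form moment estimates. A minimal sketch of that calculation (the function name, cut points, and counts are made-up for illustration, not taken from the study):

```python
def happiness_mean_var(bounds, counts):
    """Estimate the mean and variance of latent happiness on the [0, 10]
    scale, assuming a uniform density inside each subinterval (the study's
    first option). bounds holds the k+1 cut points of the partition;
    counts holds the observed frequencies of the k response options."""
    n = sum(counts)
    mean = second_moment = 0.0
    for lo, hi, c in zip(bounds, bounds[1:], counts):
        p = c / n
        mid = 0.5 * (lo + hi)
        mean += p * mid
        # E[Y^2] of a uniform on [lo, hi] is mid^2 + (hi - lo)^2 / 12
        second_moment += p * (mid * mid + (hi - lo) ** 2 / 12.0)
    return mean, second_moment - mean * mean

# hypothetical 3-option item mapped to subintervals [0,4), [4,7), [7,10]
m, v = happiness_mean_var([0.0, 4.0, 7.0, 10.0], [10, 30, 60])
```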

10. Approximate Analysis of Semiconductor Laser Arrays

NASA Technical Reports Server (NTRS)

Marshall, William K.; Katz, Joseph

1987-01-01

Simplified equation yields useful information on gains and output patterns. Theoretical method based on approximate waveguide equation enables prediction of lateral modes of gain-guided planar array of parallel semiconductor lasers. Equation for entire array solved directly using piecewise approximation of index of refraction by simple functions, without customary approximation based on coupled waveguide modes of individual lasers. Improved results yield better understanding of laser-array modes and help in development of well-behaved high-power semiconductor laser arrays.

11. Constructive approximate interpolation by neural networks

Llanas, B.; Sainz, F. J.

2006-04-01

We present a type of single-hidden layer feedforward neural networks with sigmoidal nondecreasing activation function. We call them ai-nets. They can approximately interpolate, with arbitrary precision, any set of distinct data in one or several dimensions. They can uniformly approximate any continuous function of one variable and can be used for constructing uniform approximants of continuous functions of several variables. All these capabilities are based on a closed expression of the networks.

12. The Verification of Influence of the Point "C" Position from Given Interval to Solving Systems with Highspeed Feedback

Bajčičáková, Ingrida; Jurovatá, Dominika

2015-08-01

This article deals with the design of an effective numerical scheme for solving three-point boundary value problems for second-order nonlinear singularly perturbed differential equations with initial conditions. In particular, it focuses on the analysis of the solutions when the point c from the given interval is not the centre of this interval. The obtained system of nonlinear algebraic equations is solved by the Newton-Raphson method in MATLAB. It also verifies the convergence of approximate solutions of the original problem to the solution of the reduced problem. We discuss the solution of the given problem in the situation when the point c is in the middle of the given interval.

13. Magnetoelectric charge states of matter-energy. A second approximation. Part VII. Diffuse relativistic superconductive plasma. Measurable and non-measurable physical manifestations. Kirlian photography. Laser phenomena. Cosmic effects on chemical and biological systems.

PubMed

Cope, F W

1980-01-01

Experimental evidence suggests that all objects, and especially living objects, contain and are surrounded by diffuse clouds of matter-energy probably best considered as a superconductive plasma state and best analyzed by application of an extended form of the Einstein special theory of relativity. Such a plasma state would have physical properties that for relativistic reasons the experimentalist could not expect to measure, and also those he could expect to measure. Not possible to measure should be (a) absorption or reflection of light, (b) electric charge mobilities or Hall effects, and (c) any particulate structure within the plasma. Possible to measure should be (a) channel formation ("arcing") in high applied electric fields (e.g., as in Kirlian photography), (b) effects of the plasma on temperatures and potentials of electrons in solid objects moving through that plasma, (c) facilitation of coupling between electromagnetic oscillations in sets of adjacent molecules, resulting in facilitation of laser and maser emissions of electromagnetic waves and in facilitation of geometrical alignment of adjacent molecules, and (d) magnetic and electric flux trapping with resultant magnetic and/or electric dipole moments. Experimental evidence suggests that diffuse superconductive plasma may reach the earth from the sun, resulting in diurnal and seasonal fluctuations in rates of antigen-antibody reactions as well as in rates of precipitation and crystallization of solids from solutions. PMID:7454856

14. Multipoint linkage analysis using sib pairs: An interval mapping approach for dichotomous outcomes

SciTech Connect

Olson, J.M.

1995-03-01

I propose an interval mapping approach suitable for a dichotomous outcome, with emphasis on samples of affected sib pairs. The method computes a lod score for each of a set of locations in the interval between two flanking markers and takes as its estimate of trait-locus location the maximum lod score in the interval, provided it exceeds the prespecified critical value. Use of the method depends on prior knowledge of the genetic model for the disease only through available estimates of recurrence risk to relatives of affected individuals. The method gives an unbiased estimate of location, provided the recurrence risks are correctly specified and provided the marker identity-by-descent probabilities are jointly, rather than individually, estimated. I also discuss use of the method for traits determined by two loci and give an approximation that has good power for a wide range of two-locus models. 25 refs., 2 figs., 9 tabs.

15. Interval estimates for closure-phase and closure-amplitude imaging in radio astronomy

NASA Technical Reports Server (NTRS)

Kreinovich, Vladik; Bernat, Andrew; Kosheleva, Olga; Finkel'shtejn, Andrej

1992-01-01

Interval estimates for closure-phase and closure-amplitude imaging that enable the reconstruction of a radioimage from the results of approximate measurements are presented. Even if the intervals for the measured values are known, the precision of the reconstruction cannot be obtained by standard interval methods, because phase values lie on a circle rather than on the real line. If the phase theta(x bar) is measured with precision epsilon, so that the closure phase theta(x bar) + theta(y bar) - theta(x bar + y bar) is known with precision 3 epsilon, then from these measurements theta can be reconstructed with precision 6 epsilon. Similar estimates are given for closure amplitude.
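The epsilon bookkeeping can be checked numerically. The sketch below ignores the circular (mod 2 pi) structure that the abstract emphasizes, and only illustrates the triangle-inequality step from per-phase precision epsilon to closure-phase precision 3 epsilon; all values are illustrative:

```python
import random

# Each phase is measured to within +/- eps, so a closure phase built from
# three measured phases is off by at most 3*eps (triangle inequality).
eps = 0.01          # illustrative measurement precision
random.seed(0)
true = [random.uniform(-1.0, 1.0) for _ in range(3)]   # theta(x), theta(y), theta(x+y)
meas = [t + random.uniform(-eps, eps) for t in true]

def closure(p):
    """Closure phase theta(x) + theta(y) - theta(x + y)."""
    return p[0] + p[1] - p[2]

err = abs(closure(meas) - closure(true))
assert err <= 3 * eps
```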

16. Wavelet approximation of correlated wave functions. II. Hyperbolic wavelets and adaptive approximation schemes

Luo, Hongjun; Kolb, Dietmar; Flad, Heinz-Jurgen; Hackbusch, Wolfgang; Koprucki, Thomas

2002-08-01

We have studied various aspects concerning the use of hyperbolic wavelets and adaptive approximation schemes for wavelet expansions of correlated wave functions. In order to analyze the consequences of reduced regularity of the wave function at the electron-electron cusp, we first considered a realistic exactly solvable many-particle model in one dimension. Convergence rates of wavelet expansions, with respect to L2 and H1 norms and the energy, were established for this model. We compare the performance of hyperbolic wavelets and their extensions through adaptive refinement in the cusp region, to a fully adaptive treatment based on the energy contribution of individual wavelets. Although hyperbolic wavelets show an inferior convergence behavior, they can be easily refined in the cusp region yielding an optimal convergence rate for the energy. Preliminary results for the helium atom are presented, which demonstrate the transferability of our observations to more realistic systems. We propose a contraction scheme for wavelets in the cusp region, which reduces the number of degrees of freedom and yields a favorable cost to benefit ratio for the evaluation of matrix elements.

17. Length of the current interglacial period and interglacial intervals of the last million years

Dergachev, V. A.

2015-12-01

It has been ascertained that the long-term cyclical oscillations of the Earth's global climate between glacial and interglacial states over the last million years respond to cyclical oscillations of the Earth's orbital parameters. Cold glacial states with a period of approximately 100 ka give way to shorter intervals of warming around 10-12 ka long. The current interglacial period, the so-called Holocene, started on Earth roughly 10 ka ago. The length of the current interglacial period and the causes of the climate change over approximately the last 50 years are the subject of sharp debate connected with the growing anthropogenic emission of greenhouse gases. To estimate the length of the current interglacial period, the interglacial intervals near ~400 (MIS-11) and ~800 (MIS-19) ka are analyzed as its probable analogs.

18. Short-interval estimations of trigonometric parallaxes

NASA Technical Reports Server (NTRS)

Gatewood, G.; Stein, J.; Difatta, C.; Kiewiet De Jonge, J.; Prosser, J.; Reiland, T.

1985-01-01

A technique for estimating trigonometric parallaxes in a matter of days or weeks is presented. The technique relies on the discrepancy between the instantaneous proper motion and the proper motion of a star. The main sources of error in the method are the standard error of the individual observations (0.004 arcsec) and the arbitrary limit placed on the observation interval. The parallactic motion of an M0 dwarf of known parallax and a blue magnitude of 11.2 is determined. The slope inferred is within 10 percent of the value (0.555 arcsec) derived from a 1.5-year study of Barnard's star. It is concluded that the technique, if used on the same time scale as conventional techniques, would yield results of much higher accuracy.

19. Thermal effects and sudden decay approximation in the curvaton scenario

SciTech Connect

Kitajima, Naoya; Takesako, Tomohiro; Yokoyama, Shuichiro; Langlois, David; Takahashi, Tomo

2014-10-01

We study the impact of a temperature-dependent curvaton decay rate on the primordial curvature perturbation generated in the curvaton scenario. Using the familiar sudden decay approximation, we obtain an analytical expression for the curvature perturbation after the decay of the curvaton. We then investigate numerically the evolution of the background and of the perturbations during the decay. We first show that the instantaneous transfer coefficient, related to the curvaton energy fraction at the decay, can be extended into a more general parameter, which depends on the net transfer of the curvaton energy into radiation energy or, equivalently, on the total entropy ratio after the complete curvaton decay. We then compute the curvature perturbation and compare this result with the sudden decay approximation prediction.

20. Quantum anti-Zeno effect without rotating wave approximation

SciTech Connect

Ai Qing; Sun, C. P.; Li Yong; Zheng Hang

2010-04-15

In this article, we systematically study the spontaneous decay phenomenon of a two-level system under the influences of both its environment and repetitive measurements. In order to clarify some well-established conclusions about the quantum Zeno effect (QZE) and the quantum anti-Zeno effect (QAZE), we do not use the rotating wave approximation (RWA) in obtaining an effective Hamiltonian. We examine various spectral distributions by making use of our present approach in comparison with other approaches. It is found that with respect to a bare excited state even without the RWA, the QAZE can still happen for some cases, for example, the interacting spectra of hydrogen. However, for a physical excited state, which is a renormalized dressed state of the atomic state, the QAZE disappears and only the QZE remains. These discoveries inevitably show a transition from the QZE to the QAZE as the measurement interval changes.

1. Spline approximations for nonlinear hereditary control systems

NASA Technical Reports Server (NTRS)

Daniel, P. L.

1982-01-01

A spline-based approximation scheme is discussed for optimal control problems governed by nonlinear nonautonomous delay differential equations. The approximating framework reduces the original control problem to a sequence of optimization problems governed by ordinary differential equations. Convergence proofs, which appeal directly to dissipative-type estimates for the underlying nonlinear operator, are given and numerical findings are summarized.

2. Quirks of Stirling's Approximation

ERIC Educational Resources Information Center

Macrae, Roderick M.; Allgeier, Benjamin M.

2013-01-01

Stirling's approximation to ln "n"! is typically introduced to physical chemistry students as a step in the derivation of the statistical expression for the entropy. However, naive application of this approximation leads to incorrect conclusions. In this article, the problem is first illustrated using a familiar "toy…
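A quick numerical check of the point being made, using the naive form ln n! ≈ n ln n − n that appears in the usual entropy derivation (illustrative code, not from the article):

```python
import math

def stirling_ln_factorial(n):
    """Naive Stirling form ln n! ~ n ln n - n, as used in the usual
    statistical-mechanics entropy derivation."""
    return n * math.log(n) - n

# For large n the relative error is small...
exact = math.lgamma(101.0)               # lgamma(n+1) = ln(n!), here ln(100!)
rel_err = abs(exact - stirling_ln_factorial(100)) / exact
assert rel_err < 0.01

# ...but naive application at small n misleads: the approximation is even
# negative at n = 2, while ln 2! = ln 2 > 0.
assert stirling_ln_factorial(2) < 0 < math.log(2)
```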

3. Taylor approximations of multidimensional linear differential systems

2016-06-01

The Taylor approximations of a multidimensional linear differential system are of importance as they contain a complete information about it. It is shown that in order to construct them it is sufficient to truncate the exponential trajectories only. A computation of the Taylor approximations is provided using purely algebraic means, without requiring explicit knowledge of the trajectories.

4. Computing Functions by Approximating the Input

ERIC Educational Resources Information Center

Goldberg, Mayer

2012-01-01

In computing real-valued functions, it is ordinarily assumed that the input to the function is known, and it is the output that we need to approximate. In this work, we take the opposite approach: we show how to compute the values of some transcendental functions by approximating the input to these functions, and obtaining exact answers for their…

5. Diagonal Pade approximations for initial value problems

SciTech Connect

Reusch, M.F.; Ratzan, L.; Pomphrey, N.; Park, W.

1987-06-01

Diagonal Pade approximations to the time evolution operator for initial value problems are applied in a novel way to the numerical solution of these problems by explicitly factoring the polynomials of the approximation. A remarkable gain over conventional methods in efficiency and accuracy of solution is obtained. 20 refs., 3 figs., 1 tab.

6. Inversion and approximation of Laplace transforms

NASA Technical Reports Server (NTRS)

Lear, W. M.

1980-01-01

A method of inverting Laplace transforms by using a set of orthonormal functions is reported. As a byproduct of the inversion, approximation of complicated Laplace transforms by a transform with a series of simple poles along the left half plane real axis is shown. The inversion and approximation process is simple enough to be put on a programmable hand calculator.

7. An approximation for inverse Laplace transforms

NASA Technical Reports Server (NTRS)

Lear, W. M.

1981-01-01

A programmable calculator runs a simple finite-series approximation for Laplace transform inversions. Utilizing a family of orthonormal functions, the approximation is usable for a wide range of transforms, including those encountered in feedback control problems. The method works well as long as F(t) decays to zero as t approaches infinity, and so it is applicable to most physical systems.

SciTech Connect

Max, N. (Lawrence Livermore National Lab., CA); Allison, M.

1990-12-01

Using radiosities computed at vertices, the radiosity across a triangle can be approximated by linear interpolation. We develop vertex-to-vertex form factors based on this linear radiosity approximation, and show how they can be computed efficiently using modern hardware-accelerated shading and z-buffer technology. 9 refs., 4 figs.
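Linear interpolation of vertex radiosities across a triangle amounts to weighting the three vertex values by barycentric coordinates. A minimal sketch (the function name and 2-D setup are our own, not from the paper):

```python
import numpy as np

def interp_radiosity(p, verts, radiosities):
    """Linearly interpolate vertex radiosities at point p inside a triangle
    using barycentric coordinates (u, v, w)."""
    a, b, c = (np.asarray(v, float) for v in verts)
    # Solve p = u*a + v*b + w*c subject to u + v + w = 1
    T = np.column_stack((b - a, c - a))
    v, w = np.linalg.solve(T, np.asarray(p, float) - a)
    u = 1.0 - v - w
    return u * radiosities[0] + v * radiosities[1] + w * radiosities[2]

tri = [(0, 0), (1, 0), (0, 1)]
B = [10.0, 20.0, 40.0]           # radiosities computed at the vertices
print(interp_radiosity((0, 0), tri, B))      # reproduces the vertex value
print(interp_radiosity((1/3, 1/3), tri, B))  # centroid: mean of B
```

Hardware-accelerated Gouraud shading performs exactly this interpolation per pixel, which is why the paper can lean on the graphics pipeline.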

9. The effect of instrumental timbre on interval discrimination.

PubMed

Zarate, Jean Mary; Ritson, Caroline R; Poeppel, David

2013-01-01

We tested non-musicians and musicians in an auditory psychophysical experiment to assess the effects of timbre manipulation on pitch-interval discrimination. Both groups were asked to indicate the larger of two presented intervals, comprised of four sequentially presented pitches; the second or fourth stimulus within a trial was either a sinusoidal (or "pure"), flute, piano, or synthetic voice tone, while the remaining three stimuli were all pure tones. The interval-discrimination tasks were administered parametrically to assess performance across varying pitch distances between intervals ("interval-differences"). Irrespective of timbre, musicians displayed a steady improvement across interval-differences, while non-musicians only demonstrated enhanced interval discrimination at an interval-difference of 100 cents (one semitone in Western music). Surprisingly, the best discrimination performance across both groups was observed with pure-tone intervals, followed by intervals containing a piano tone. More specifically, we observed that: 1) timbre changes within a trial affect interval discrimination; and 2) the broad spectral characteristics of an instrumental timbre may influence perceived pitch or interval magnitude and make interval discrimination more difficult. PMID:24066179

10. Evaluating the Accuracy of Hessian Approximations for Direct Dynamics Simulations.

PubMed

Zhuang, Yu; Siebert, Matthew R; Hase, William L; Kay, Kenneth G; Ceotto, Michele

2013-01-01

Direct dynamics simulations are a very useful and general approach for studying the atomistic properties of complex chemical systems, since an electronic structure theory representation of a system's potential energy surface is possible without the need for fitting an analytic potential energy function. In this paper, recently introduced compact finite difference (CFD) schemes for approximating the Hessian [J. Chem. Phys.2010, 133, 074101] are tested by employing the monodromy matrix equations of motion. Several systems, including carbon dioxide and benzene, are simulated, using both analytic potential energy surfaces and on-the-fly direct dynamics. The results show, depending on the molecular system, that electronic structure theory Hessian direct dynamics can be accelerated up to 2 orders of magnitude. The CFD approximation is found to be robust enough to deal with chaotic motion, concomitant with floppy and stiff mode dynamics, Fermi resonances, and other kinds of molecular couplings. Finally, the CFD approximations allow parametrical tuning of different CFD parameters to attain the best possible accuracy for different molecular systems. Thus, a direct dynamics simulation requiring the Hessian at every integration step may be replaced with an approximate Hessian updating by tuning the appropriate accuracy. PMID:26589009

11. An approximate model for pulsar navigation simulation

Jovanovic, Ilija; Enright, John

2016-02-01

This paper presents an approximate model for the simulation of pulsar aided navigation systems. High fidelity simulations of these systems are computationally intensive and impractical for simulating periods of a day or more. Simulation of yearlong missions is done by abstracting navigation errors as periodic Gaussian noise injections. This paper presents an intermediary approximate model to simulate position errors for periods of several weeks, useful for building more accurate Gaussian error models. This is done by abstracting photon detection and binning, replacing it with a simple deterministic process. The approximate model enables faster computation of error injection models, allowing the error model to be inexpensively updated throughout a simulation. Testing of the approximate model revealed an optimistic performance prediction for non-millisecond pulsars with more accurate predictions for pulsars in the millisecond spectrum. This performance gap was attributed to noise which is not present in the approximate model but can be predicted and added to improve accuracy.

12. Approximate error conjugation gradient minimization methods

DOEpatents

Kallman, Jeffrey S

2013-05-21

In one embodiment, a method includes selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, calculating an approximate error using the subset of rays, and calculating a minimum in a conjugate gradient direction based on the approximate error. In another embodiment, a system includes a processor for executing logic, logic for selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, logic for calculating an approximate error using the subset of rays, and logic for calculating a minimum in a conjugate gradient direction based on the approximate error. In other embodiments, computer program products, methods, and systems are described capable of using approximate error in constrained conjugate gradient minimization problems.
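The first embodiment can be sketched as follows: run a standard conjugate gradient iteration (here on the normal equations of a toy ray-based least-squares problem) while computing the error estimate from only a random subset of rays. This is an illustrative reconstruction, not the patented implementation; the subset size and random-row choice are our own assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(200, 20))      # each row plays the role of one ray
x_true = rng.normal(size=20)
b = A @ x_true

def cg_normal_eq(A, b, iters=50, subset=40):
    """Conjugate gradient on A^T A x = A^T b; the per-iteration error
    estimate uses only a random subset of rows ("rays"), which is cheap."""
    x = np.zeros(A.shape[1])
    r = A.T @ b - A.T @ (A @ x)
    p = r.copy()
    idx = rng.choice(A.shape[0], size=subset, replace=False)
    approx_err = np.inf
    for _ in range(iters):
        Ap = A.T @ (A @ p)
        alpha = (r @ r) / (p @ Ap)
        x += alpha * p
        r_new = r - alpha * Ap
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p
        r = r_new
        approx_err = np.linalg.norm(A[idx] @ x - b[idx])  # subset-based monitor
    return x, approx_err

x, err = cg_normal_eq(A, b)
print(err)
```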

13. Observation and Structure Determination of an Oxide Quasicrystal Approximant.

PubMed

Förster, S; Trautmann, M; Roy, S; Adeagbo, W A; Zollner, E M; Hammer, R; Schumann, F O; Meinel, K; Nayak, S K; Mohseni, K; Hergert, W; Meyerheim, H L; Widdra, W

2016-08-26

We report on the first observation of an approximant structure to the recently discovered two-dimensional oxide quasicrystal. Using scanning tunneling microscopy, low-energy electron diffraction, and surface x-ray diffraction in combination with ab initio calculations, the atomic structure and the bonding scheme are determined. The oxide approximant follows a 3^{2}.4.3.4 Archimedean tiling. Ti atoms reside at the corners of each tiling element and are threefold coordinated to oxygen atoms. Ba atoms separate the TiO_{3} clusters, leading to a fundamental tiling edge length of 6.7 Å. PMID:27610863

14. Plasmon Pole Approximations within a GW Sternheimer implementation

Gosselin, Vincent; Cote, Michel

We use an implementation of the GW approximation that exploits a Sternheimer equation and a Lanczos procedure to circumvent the resource-intensive sum over all bands and inversion of the dielectric matrix. I will present a further improvement of the method that uses plasmon pole approximations to evaluate the integral over all frequencies analytically. A comparison between the von der Linden-Horsch and Engel-Farid approaches for the energy levels of various molecules, along with benchmarking of the computational resources needed by the method, will be discussed.

15. An approximate solution for interlaminar stresses in composite laminates

NASA Technical Reports Server (NTRS)

Rose, Cheryl A.; Herakovich, Carl T.

1993-01-01

An efficient approximate solution for interlaminar stresses in finite width, symmetric and unsymmetric laminated composites subjected to axial and/or bending loads is presented. The solution is based upon statically admissible stress fields which take into consideration local property mismatch effects and global equilibrium requirements. Unknown constants in the assumed stress states are determined through minimization of the laminate complementary energy. Typical results are presented for through-thickness and interlaminar stress distributions for angle-ply and cross-ply laminates subjected to axial loading. It is shown that the present formulation represents an improved, efficient approximate solution for interlaminar stresses.

16. Pair approximation and the OAI mapping in the deformed limit

Yoshinaga, N.

1989-10-01

The pair subspaces — the SD- and SDG-subspaces — are constructed. Eigenstates for a quadrupole force and transition rates for a quadrupole operator are calculated in the single j-shell-model. The SDG-pair approximation is found to be excellent in describing the low-spin states of the ground bands compared to exact shell-model calculations. The fermion interactions are mapped onto the corresponding boson ones using the mapping procedure by Otsuka, Arima and Iachello (OAI). The OAI approximation in zeroth-order fails in reproducing the ground-state energies in the deformed limit.

17. Superfluidity of heated Fermi systems in the static fluctuation approximation

SciTech Connect

Khamzin, A. A.; Nikitin, A. S.; Sitdikov, A. S.

2015-10-15

Superfluidity properties of heated finite Fermi systems are studied in the static fluctuation approximation, an original method that relies on a single, controlled approximation, which permits correctly taking into account quasiparticle correlations and thereby going beyond the independent-quasiparticle model. A closed self-consistent set of equations for calculating correlation functions at finite temperature is obtained for a finite Fermi system described by the Bardeen–Cooper–Schrieffer Hamiltonian. An equation for the energy gap is found with allowance for fluctuation effects. It is shown that the phase transition to the superfluid state is smeared upon the inclusion of fluctuations.

18. Compressibility Corrections to Closure Approximations for Turbulent Flow Simulations

SciTech Connect

Cloutman, L D

2003-02-01

We summarize some modifications to the usual closure approximations for statistical models of turbulence that are necessary for use with compressible fluids at all Mach numbers. We concentrate here on the gradient-flux approximation for the turbulent heat flux, on the buoyancy production of turbulence kinetic energy, and on a modification of the Smagorinsky model to include buoyancy. In all cases, there are pressure gradient terms that do not appear in the incompressible models and are usually omitted in compressible-flow models. Omission of these terms allows unphysical rates of entropy change.

19. Probabilistic flood forecast: Exact and approximate predictive distributions

Krzysztofowicz, Roman

2014-09-01

For quantification of predictive uncertainty at the forecast time t0, the future hydrograph is viewed as a discrete-time continuous-state stochastic process {Hn: n=1,…,N}, where Hn is the river stage at time instance tn>t0. The probabilistic flood forecast (PFF) should specify a sequence of exceedance functions {F‾n: n=1,…,N} such that F‾n(h)=P(Zn>h), where P stands for probability, and Zn is the maximum river stage within time interval (t0,tn], practically Zn=max{H1,…,Hn}. This article presents a method for deriving the exact PFF from a probabilistic stage transition forecast (PSTF) produced by the Bayesian forecasting system (BFS). It then recalls (i) the bounds on F‾n, which can be derived cheaply from a probabilistic river stage forecast (PRSF) produced by a simpler version of the BFS, and (ii) an approximation to F‾n, which can be constructed from the bounds via a recursive linear interpolator (RLI) without information about the stochastic dependence in the process {H1,…,Hn}, as this information is not provided by the PRSF. The RLI is substantiated by comparing the approximate PFF against the exact PFF. Being reasonably accurate and very simple, the RLI may be attractive for real-time flood forecasting in systems of lesser complexity. All methods are illustrated with a case study for a 1430 km² headwater basin wherein the PFF is produced for a 72-h interval discretized into 6-h steps.
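The bounds in (i) are the usual pointwise and union bounds: since Zn = max{H1,…,Hn}, max_k P(Hk > h) ≤ P(Zn > h) ≤ Σ_k P(Hk > h). A Monte Carlo sketch on a toy stage process (a Gaussian random walk standing in for {Hn}, not the BFS) verifies them:

```python
import numpy as np

rng = np.random.default_rng(1)
N, sims = 6, 100_000
# Toy stage process: Gaussian random walk standing in for {H_n}
H = np.cumsum(rng.normal(size=(sims, N)), axis=1)
h = 2.0

marginal = (H > h).mean(axis=0)        # P(H_k > h) for each lead time k
exact = (H.max(axis=1) > h).mean()     # P(Z_N > h), Z_N = max(H_1, ..., H_N)
lower, upper = marginal.max(), min(1.0, marginal.sum())
print(lower, exact, upper)             # lower <= exact <= upper
```

The bounds hold sample-by-sample, so they are satisfied exactly even in the empirical estimates; the gap between them is what the RLI interpolates across.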

20. Pigeons' Memory for Number of Events: Effects of Intertrial Interval and Delay Interval Illumination

ERIC Educational Resources Information Center

Hope, Chris; Santi, Angelo

2004-01-01

In Experiment 1, pigeons were trained at a 0-s baseline delay to discriminate sequences of light flashes (illumination of the feeder) that varied in number but not time (2f/4s and 8f/4s). During training, the intertrial interval was illuminated by the houselight for Group Light, but it was dark for Group Dark. Testing conducted with dark delay…

1. Confidence Intervals Make a Difference: Effects of Showing Confidence Intervals on Inferential Reasoning

ERIC Educational Resources Information Center

Hoekstra, Rink; Johnson, Addie; Kiers, Henk A. L.

2012-01-01

The use of confidence intervals (CIs) as an addition or as an alternative to null hypothesis significance testing (NHST) has been promoted as a means to make researchers more aware of the uncertainty that is inherent in statistical inference. Little is known, however, about whether presenting results via CIs affects how readers judge the…

2. Hourly Wind Speed Interval Prediction in Arid Regions

Chaouch, M.; Ouarda, T.

2013-12-01

The long and extended warm and dry summers and the low rates of rain and humidity are the main factors that explain the increase of electricity consumption in hot arid regions. In such regions, the ventilating and air-conditioning installations, which are typically the most energy-intensive among energy consumption activities, are essential for securing healthy, safe and suitable indoor thermal conditions for building occupants and stored materials. The use of renewable energy resources such as solar and wind represents one of the most relevant solutions to overcome the challenge of increasing electricity demand. In recent years, wind energy has gained importance among researchers worldwide. Wind energy is intermittent in nature, and hence power system scheduling and dynamic control of wind turbines require an estimate of wind energy. Accurate forecasting of wind speed is a challenging task for the wind energy research field. In fact, due to the large variability of wind speed caused by the unpredictable and dynamic nature of the earth's atmosphere, there are many fluctuations in wind power production. This inherent variability of wind speed is the main cause of the uncertainty observed in wind power generation. Furthermore, wind power forecasts may be obtained indirectly by modeling the wind speed series and then transforming the forecasts through a power curve. Wind speed forecasting techniques have received substantial attention recently and several models have been developed. Basically, two main approaches have been proposed in the literature: (1) physical models such as numerical weather prediction and (2) statistical models such as autoregressive integrated moving average (ARIMA) models and neural networks. While the initial focus in the literature has been on point forecasts, the need to quantify forecast uncertainty and communicate the risk of extreme ramp events has led to an interest in producing probabilistic forecasts. In short term

3. Pitch strength of regular-interval click trains with different length “runs” of regular intervals

PubMed Central

Yost, William A.; Mapes-Riordan, Dan; Shofner, William; Dye, Raymond; Sheft, Stanley

2009-01-01

Click trains were generated with first- and second-order statistics following Kaernbach and Demany [J. Acoust. Soc. Am. 104, 2298–2306 (1998)]. First-order intervals are between successive clicks, while second-order intervals are those between every other click. Click trains were generated with a repeating alternation of fixed and random intervals which produce a pitch at the reciprocal of the duration of the fixed interval. The intervals were then randomly shuffled and compared to the unshuffled, alternating click trains in pitch-strength comparison experiments. In almost all comparisons for the first-order interval stimuli, the shuffled-interval click trains had a stronger pitch strength than the unshuffled-interval click trains. The shuffled-interval click trains only produced stronger pitches for second-order interval stimuli when the click trains were unfiltered. Several experimental conditions and an analysis of runs of regular and random intervals in these click trains suggest that the auditory system is sensitive to runs of regular intervals in a stimulus that contains a mix of regular and random intervals. These results indicate that fine-structure regularity plays a more important role in pitch perception than randomness, and that the long-term autocorrelation function or spectra of these click trains are not good predictors of pitch strength. PMID:15957774
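The stimulus construction can be sketched as follows: alternate a fixed interval 1/f0 (which carries the pitch) with a random interval, then shuffle the whole sequence to destroy the runs of regular intervals while keeping exactly the same interval statistics. The jitter range below is our own choice, not the published stimulus parameters:

```python
import random

def alternating_train(f0, n_pairs, rng):
    """First-order regular-interval train: a repeating alternation of a
    fixed interval 1/f0 and a random interval."""
    fixed = 1.0 / f0
    intervals = []
    for _ in range(n_pairs):
        intervals.append(fixed)                          # regular interval
        intervals.append(rng.uniform(0.5 * fixed, 1.5 * fixed))  # random one
    return intervals

rng = random.Random(0)
train = alternating_train(250.0, n_pairs=100, rng=rng)
shuffled = train[:]
rng.shuffle(shuffled)   # same multiset of intervals, regular "runs" destroyed

print(len(train), sum(train), sum(shuffled))
```

Because shuffling only reorders the intervals, the two trains share duration and interval histogram; what differs is precisely the run structure the study found the auditory system to be sensitive to.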

4. Quantifying uncertainty in modelled estimates of annual maximum precipitation: confidence intervals

Panagoulia, Dionysia; Economou, Polychronis; Caroni, Chrys

2016-04-01

The possible nonstationarity of the GEV distribution fitted to annual maximum precipitation under climate change is a topic of active investigation. Of particular significance is how best to construct confidence intervals for items of interest arising from stationary/nonstationary GEV models. We are usually not only interested in parameter estimates but also in quantiles of the GEV distribution, and it might be expected that estimates of extreme upper quantiles are far from being normally distributed even for moderate sample sizes. Therefore, we consider constructing confidence intervals for all quantities of interest by bootstrap methods based on resampling techniques. To this end, we examined three bootstrapping approaches to constructing confidence intervals for parameters and quantiles: random-t resampling, fixed-t resampling and the parametric bootstrap. Each approach was used in combination with the normal approximation method, percentile method, basic bootstrap method and bias-corrected method for constructing confidence intervals. We found that all the confidence intervals for the stationary model parameters have similar coverage and mean length. Confidence intervals for the more extreme quantiles tend to become very wide for all bootstrap methods. For nonstationary GEV models with linear time dependence of location or log-linear time dependence of scale, confidence interval coverage probabilities are reasonably accurate for the parameters. For the extreme percentiles, the bias-corrected and accelerated method is best overall, and the fixed-t method also has good average coverage probabilities. Reference: Panagoulia D., Economou P. and Caroni C., Stationary and non-stationary GEV modeling of extreme precipitation over a mountainous area under climate change, Environmetrics, 25 (1), 29-43, 2014.
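Of the interval constructions mentioned, the percentile method is the simplest to sketch: resample the data with replacement, recompute the statistic on each replicate, and take empirical quantiles of the replicates. A minimal sketch with toy Gumbel data standing in for annual maxima (our own illustration, not the study's GEV fitting):

```python
import numpy as np

def bootstrap_ci(data, stat=np.mean, level=0.95, B=5000, seed=0):
    """Percentile bootstrap confidence interval for a statistic."""
    rng = np.random.default_rng(seed)
    data = np.asarray(data)
    reps = np.array([stat(rng.choice(data, size=data.size, replace=True))
                     for _ in range(B)])
    alpha = 1.0 - level
    return np.quantile(reps, alpha / 2), np.quantile(reps, 1 - alpha / 2)

rng = np.random.default_rng(42)
# Gumbel sample: heavy upper tail, qualitatively like annual maxima
sample = rng.gumbel(loc=30.0, scale=8.0, size=60)
lo, hi = bootstrap_ci(sample, stat=np.mean)
print(lo, hi)
```

The same resampling loop works for any `stat`, including an extreme-quantile estimator, which is where the abstract reports the intervals becoming very wide.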

5. About Hemispheric Differences in the Processing of Temporal Intervals

ERIC Educational Resources Information Center

Grondin, S.; Girard, C.

2005-01-01

The purpose of the present study was to identify differences between cerebral hemispheres for processing temporal intervals ranging from .9 to 1.4s. The intervals to be judged were marked by series of brief visual signals located in the left or the right visual field. Series of three (two standards and one comparison) or five intervals (four…

6. 46 CFR 176.675 - Extension of examination intervals.

Code of Federal Regulations, 2010 CFR

2010-10-01

§ 176.675 Extension of examination intervals (Hull and Tailshaft Examinations). The intervals between drydock examinations and internal structural...

7. Application of Sequential Interval Estimation to Adaptive Mastery Testing

ERIC Educational Resources Information Center

Chang, Yuan-chin Ivan

2005-01-01

In this paper, we apply sequential one-sided confidence interval estimation procedures with beta-protection to adaptive mastery testing. The procedures of fixed-width and fixed proportional accuracy confidence interval estimation can be viewed as extensions of one-sided confidence interval procedures. It can be shown that the adaptive mastery…

8. Central Difference Interval Method for Solving the Wave Equation

SciTech Connect

Szyszka, Barbara

2010-09-30

This paper presents the construction of a second-order interval method for solving the wave equation. The central difference interval method for a one-dimensional partial differential equation is taken into consideration. Numerical results, obtained by the two presented algorithms in floating-point interval arithmetic, are considered.
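The underlying (non-interval) second-order scheme is the standard central-difference leapfrog for u_tt = c² u_xx. A floating-point sketch of it (our own illustration, without the interval arithmetic that is the paper's subject):

```python
import numpy as np

# u_tt = c^2 u_xx on [0,1], u(0,t) = u(1,t) = 0, central differences in x and t
c, nx = 1.0, 101
x = np.linspace(0.0, 1.0, nx)
dx = x[1] - x[0]
dt = 0.5 * dx / c                  # CFL number 0.5 (stable)
r2 = (c * dt / dx) ** 2

u_prev = np.sin(np.pi * x)         # u(x, 0); exact solution: sin(pi x) cos(pi c t)
# First step from u_t(x, 0) = 0 via a centered (second-order) start
u = u_prev.copy()
u[1:-1] = u_prev[1:-1] + 0.5 * r2 * (u_prev[2:] - 2*u_prev[1:-1] + u_prev[:-2])

steps = 400
for _ in range(steps):
    u_next = np.empty_like(u)
    u_next[0] = u_next[-1] = 0.0   # fixed ends
    u_next[1:-1] = (2*u[1:-1] - u_prev[1:-1]
                    + r2 * (u[2:] - 2*u[1:-1] + u[:-2]))
    u_prev, u = u, u_next

t = dt * (steps + 1)
exact = np.sin(np.pi * x) * np.cos(np.pi * c * t)
print(np.max(np.abs(u - exact)))
```

An interval version would replace each floating-point operation by its directed-rounding interval counterpart, so the computed enclosure is guaranteed to contain the result above.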

9. ENERGY RELAXATION OF HELIUM ATOMS IN ASTROPHYSICAL GASES

SciTech Connect

Lewkow, N. R.; Kharchenko, V.; Zhang, P.

2012-09-01

We report accurate parameters describing energy relaxation of He atoms in atomic gases, important for astrophysics and atmospheric science. Collisional energy exchange between helium atoms and atomic constituents of the interstellar gas, heliosphere, and upper planetary atmospheres has been investigated. Energy transfer rates, the number of collisions required for thermalization, energy distributions of recoil atoms, and other major parameters of energy relaxation for fast He atoms in thermal H, He, and O gases have been computed in a broad interval of energies from 10 meV to 10 keV. This energy interval is important for astrophysical applications involving the energy deposition of energetic atoms and ions into atmospheres of planets and exoplanets, atmospheric evolution, and the analysis of non-equilibrium processes in the interstellar gas and heliosphere. Angular- and energy-dependent cross sections, required for an accurate description of the momentum-energy transfer, are obtained using ab initio interaction potentials and quantum mechanical calculations for scattering processes. The calculation methods used include partial wave analysis for collisional energies below 2 keV and the eikonal approximation at energies above 100 eV, keeping a significant region of overlap, 0.1-2 keV, between the two methods for their mutual verification. Results from the partial wave method and the eikonal approximation agree excellently with each other as well as with experimental data, providing reliable cross sections in the astrophysically important interval of energies from 10 meV to 10 keV. Analytical formulae, interpolating the obtained energy- and angular-dependent cross sections, are presented to simplify potential applications of the reported database. Thermalization of fast He atoms in the interstellar gas and energy relaxation of hot He and O atoms in the upper atmosphere of Mars are considered as illustrative examples of potential applications of the new database.

10. APPROXIMATING LIGHT RAYS IN THE SCHWARZSCHILD FIELD

SciTech Connect

Semerák, O.

2015-02-10

A short formula is suggested that approximates photon trajectories in the Schwarzschild field better than other simple prescriptions from the literature. We compare it with various ''low-order competitors'', namely, with those following from exact formulas for small M, with one of the results based on pseudo-Newtonian potentials, with a suitably adjusted hyperbola, and with the effective and often employed approximation by Beloborodov. Our main concern is the shape of the photon trajectories at finite radii, yet asymptotic behavior is also discussed, important for lensing. An example is attached indicating that the newly suggested approximation is usable—and very accurate—for practically solving the ray-deflection exercise.

11. Detecting Gravitational Waves using Pade Approximants

Porter, E. K.; Sathyaprakash, B. S.

1998-12-01

We look at the use of Pade Approximants in defining a metric tensor for the inspiral waveform template manifold. By using this method we investigate the curvature of the template manifold and the number of templates needed to carry out a realistic search for a Gravitational Wave signal. By comparing this method with the normal use of Taylor Approximant waveforms we hope to show that (a) Pade Approximants are a superior method for calculating the inspiral waveform, and (b) the number of search templates needed, and hence computing power, is reduced.
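The advantage claimed in (a) can be illustrated on a scalar toy example: the [2/2] Padé approximant of e^x, (1 + x/2 + x²/12)/(1 − x/2 + x²/12), is built from the same series data as the fourth-order Taylor polynomial yet is noticeably more accurate at x = 1. This comparison is ours, not an inspiral-waveform computation:

```python
from math import exp, factorial

def taylor_exp(x, order=4):
    """Taylor polynomial of e^x about 0."""
    return sum(x**k / factorial(k) for k in range(order + 1))

def pade22_exp(x):
    """[2/2] diagonal Pade approximant of e^x; matches the same series
    data as the order-4 Taylor polynomial."""
    num = 1 + x/2 + x**2/12
    den = 1 - x/2 + x**2/12
    return num / den

x = 1.0
print(abs(taylor_exp(x) - exp(x)), abs(pade22_exp(x) - exp(x)))
```

For the inspiral problem the payoff is analogous: a rational resummation of the known post-Newtonian series can converge faster than the series itself, reducing the template density needed.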

12. Alternative approximation concepts for space frame synthesis

NASA Technical Reports Server (NTRS)

Lust, R. V.; Schmit, L. A.

1985-01-01

A method for space frame synthesis based on the application of a full gamut of approximation concepts is presented. It is found that with the thoughtful selection of design space, objective function approximation, constraint approximation and mathematical programming problem formulation options it is possible to obtain near minimum mass designs for a significant class of space frame structural systems while requiring fewer than 10 structural analyses. Example problems are presented which demonstrate the effectiveness of the method for frame structures subjected to multiple static loading conditions with limits on structural stiffness and strength.

13. Approximate knowledge compilation: The first order case

SciTech Connect

Val, A. del

1996-12-31

Knowledge compilation procedures make a knowledge base more explicit so as to make inference with respect to the compiled knowledge base tractable or at least more efficient. Most work to date in this area has been restricted to the propositional case, despite the importance of first-order theories for expressing knowledge concisely. Focusing on (LUB) approximate compilation, our contribution is twofold: (1) We present a new ground algorithm for approximate compilation which can produce exponential savings with respect to the previously known algorithm. (2) We show that both ground algorithms can be lifted to the first-order case, preserving their correctness for approximate compilation.

14. Adiabatic approximation for nucleus-nucleus scattering

SciTech Connect

Johnson, R.C.

2005-10-14

Adiabatic approximations to few-body models of nuclear scattering are described with emphasis on reactions with deuterons and halo nuclei (frozen halo approximation) as projectiles. The different ways the approximation should be implemented in a consistent theory of elastic scattering, stripping and break-up are explained and the conditions for the theory's validity are briefly discussed. A formalism which links few-body models and the underlying many-body system is outlined and the connection between the adiabatic and CDCC methods is reviewed.

15. Information geometry of mean-field approximation.

PubMed

Tanaka, T

2000-08-01

I present a general theory of mean-field approximation based on information geometry and applicable not only to Boltzmann machines but also to wider classes of statistical models. Using perturbation expansion of the Kullback divergence (or Plefka expansion in statistical physics), a formulation of mean-field approximation of general orders is derived. It includes in a natural way the "naive" mean-field approximation and is consistent with the Thouless-Anderson-Palmer (TAP) approach and the linear response theorem in statistical physics. PMID:10953246
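The "naive" mean-field approximation mentioned here reduces, for a ferromagnetic Ising model, to the self-consistency condition m = tanh(βJzm), solvable by fixed-point iteration. The Ising specialization below is our illustration, not the paper's information-geometric formulation:

```python
import math

def mean_field_magnetization(beta_Jz, m0=0.5, iters=200):
    """Fixed-point iteration of the naive mean-field equation
    m = tanh(beta * J * z * m) for the Ising model (z = coordination number)."""
    m = m0
    for _ in range(iters):
        m = math.tanh(beta_Jz * m)
    return m

print(mean_field_magnetization(0.5))  # below the mean-field transition: m -> 0
print(mean_field_magnetization(2.0))  # above it: spontaneous magnetization
```

Higher orders of the Plefka expansion add correction terms to this condition (the TAP "Onsager reaction term" is the first), which is exactly the hierarchy the paper organizes geometrically.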

16. Polynomial approximations of a class of stochastic multiscale elasticity problems

Hoang, Viet Ha; Nguyen, Thanh Chung; Xia, Bingxing

2016-06-01

We consider a class of elasticity equations in R^d whose elastic moduli depend on n separated microscopic scales. The moduli are random and expressed as a linear expansion of a countable sequence of random variables which are independently and identically uniformly distributed in a compact interval. The multiscale Hellinger-Reissner mixed problem that allows for computing the stress directly and the multiscale mixed problem with a penalty term for nearly incompressible isotropic materials are considered. The stochastic problems are studied via deterministic problems that depend on a countable number of real parameters which represent the probabilistic law of the stochastic equations. We study the multiscale homogenized problems that contain all the macroscopic and microscopic information. The solutions of these multiscale homogenized problems are written as generalized polynomial chaos (gpc) expansions. We approximate these solutions by semidiscrete Galerkin approximating problems that project into the spaces of functions with only a finite number N of gpc modes. Assuming summability properties for the coefficients of the elastic moduli's expansion, we deduce bounds and summability properties for the solutions' gpc expansion coefficients. These bounds imply explicit rates of convergence in terms of N when the gpc modes used for the Galerkin approximation are chosen to correspond to the best N terms in the gpc expansion. For the mixed problem with a penalty term for nearly incompressible materials, we show that the rate of convergence for the best N term approximation is independent of the Lamé constants' ratio when it goes to infinity. Correctors for the homogenization problem are deduced. From these we establish correctors for the solutions of the parametric multiscale problems in terms of the semidiscrete Galerkin approximations. For two-scale problems, an explicit homogenization error which is uniform with respect to the parameters is deduced. Together

17. Beam normal spin asymmetry in the quasireal Compton scattering approximation

SciTech Connect

Gorchtein, M.

2006-05-15

The two-photon exchange contribution to the single spin asymmetries with the spin orientation normal to the reaction plane is discussed for elastic electron-proton scattering in the equivalent photon approximation. In this case, the hadronic part of the two-photon exchange amplitude describes real Compton scattering (RCS). We show that in the case of the beam normal spin asymmetry this approximation selects only the photon helicity flip amplitudes of RCS. At low energies, we make use of unitarity and estimate the contribution of the {pi}N multipoles to the photon helicity flip amplitudes. In the Regge regime, the quasi-RCS (QRCS) approximation allows for a contribution from two-pion exchange, and we provide an estimate of such contributions.

18. Examining the exobase approximation: DSMC models of Titan's upper atmosphere

Tucker, O. J.; Waalkes, W.; Tenishev, V.; Johnson, R. E.; Bieler, A. M.; Nagy, A. F.

2015-12-01

Chamberlain (1963) developed the so-called exobase approximation for planetary atmospheres, below which molecular collisions are assumed to maintain thermal equilibrium and above which collisions are negligible. Here we examine the exobase approximation as applied in the DeLaHaye et al. (2007) study, which extracted the energy deposition and non-thermal escape rates from Titan's atmosphere using INMS data for the TA and T5 Cassini encounters. In that study a Liouville theorem based approach is used to fit the density data for N2 and CH4, assuming an enhanced population of suprathermal molecules (E >> kT) was present at the exobase. The density data were fit in the altitude region of 1450-2000 km using a kappa energy distribution to characterize the non-thermal component. Here we again fit the data using the conventional kappa energy distribution function, and then use the Direct Simulation Monte Carlo (DSMC) technique (Bird 1994) to determine the effect of molecular collisions. The resulting fits improve on those in DeLaHaye et al. (2007). In addition, the collisional and collisionless DSMC results are compared to evaluate the validity of the assumed energy distribution function and of the collisionless approximation. We find that differences between fitting procedures applied to the INMS data within a scale height of the assumed exobase can result in the extraction of very different energy deposition and escape rates. DSMC simulations performed with and without collisions to test the Liouville theorem based approximation show that collisions affect the density and temperature profiles well above the exobase, as well as the escape rate. This research was supported by grant NNH12ZDA001N from the NASA ROSES OPR program. The computations were made with NAS computer resources at NASA Ames under GID 26135.

19. Spin-polarized Hartree-Fock approximation at nonzero temperatures

Hong, Suklyun; Mahan, G. D.

1995-06-01

The Hartree-Fock exchange energy is calculated for the spin-polarized electron gas at nonzero temperatures. This calculation is done self-consistently in that the Hartree-Fock self-energy is included self-consistently in the Fermi-Dirac occupation numbers while performing a coupling constant integral. The internal energy and entropy are also considered. We calculate the first and second derivatives of the exchange energy, internal energy, and entropy with respect to number density and/or spin polarization density, which are used for calculations of response functions such as the compressibility and polarization. One should have in mind that our exchange-only scheme using the coupling-constant-integral formalism is different from the usual Hartree-Fock approximation at nonzero temperatures and is indeed its self-consistent generalization.

20. Quantization of spacetime based on a spacetime interval operator

Chiang, Hsu-Wen; Hu, Yao-Chieh; Chen, Pisin

2016-04-01

Motivated both by Adler's recent work on utilizing Clifford algebra as the linear line element ds = ⟨γ_μ⟩ dX^μ and by the fermionization of the cylindrical worldsheet Polyakov action, we introduce a new type of spacetime quantization that is fully covariant. The theory is based on the reinterpretation of Adler's linear line element as ds = γ_μ ⟨λ γ^μ⟩, where λ is the characteristic length of the theory. We name this new operator the "spacetime interval operator" and argue that it can be regarded as a natural extension of the one-forms in the U(su(2)) noncommutative geometry. By treating the Fourier momentum as the particle momentum, the generalized uncertainty principle of the U(su(2)) noncommutative geometry, as an approximation to the generalized uncertainty principle of our theory, is derived and is shown to have a lowest-order correction term of order p², similar to that of Snyder's. The holographic nature of the theory is demonstrated, and the predicted fuzziness of the geodesic is shown to be much smaller than conceivable astrophysical bounds.

1. Linkage disequilibrium interval mapping of quantitative trait loci

PubMed Central

Boitard, Simon; Abdallah, Jihad; de Rochambeau, Hubert; Cierco-Ayrolles, Christine; Mangin, Brigitte

2006-01-01

Background For many years gene mapping studies have been performed through linkage analyses based on pedigree data. Recently, linkage disequilibrium methods based on unrelated individuals have been advocated as powerful tools to refine estimates of gene location. Many strategies have been proposed to deal with simply inherited disease traits. However, locating quantitative trait loci is statistically more challenging and considerable research is needed to provide robust and computationally efficient methods. Results Under a three-locus Wright-Fisher model, we derived approximate expressions for the expected haplotype frequencies in a population. We considered haplotypes comprising one trait locus and two flanking markers. Using these theoretical expressions, we built a likelihood-maximization method, called HAPim, for estimating the location of a quantitative trait locus. For each postulated position, the method only requires information from the two flanking markers. Over a wide range of simulation scenarios it was found to be more accurate than a two-marker composite likelihood method. It also performed as well as identity by descent methods, whilst being valuable in a wider range of populations. Conclusion Our method makes efficient use of marker information, and can be valuable for fine mapping purposes. Its performance is increased if multiallelic markers are available. Several improvements can be developed to account for more complex evolution scenarios or provide robust confidence intervals for the location estimates. PMID:16542433

2. Interval cancers in a national colorectal cancer screening programme

PubMed Central

Stanners, Greig; Lang, Jaroslaw; Brewster, David H; Carey, Francis A; Fraser, Callum G

2016-01-01

Background Little is known about interval cancers (ICs) in colorectal cancer (CRC) screening. Objective The purpose of this study was to identify IC characteristics and compare these with screen-detected cancers (SCs) and cancers in non-participants (NPCs) over the same time period. Design This was an observational study done in the first round of the Scottish Bowel Screening Programme. All individuals (772,790), aged 50–74 years, invited to participate between 1 January 2007 and 31 May 2009 were studied by linking their screening records with confirmed CRC records in the Scottish Cancer Registry (SCR). Characteristics of SC, IC and NPC were determined. Results There were 555 SCs, 502 ICs and 922 NPCs. SCs were at an earlier stage than ICs and NPCs (33.9% Dukes’ A as against 18.7% in IC and 11.3% in NPC), screening preferentially detected cancers in males (64.7% as against 52.8% in IC and 59.7% in NPC): this was independent of a different cancer site distribution in males and females. SC in the colon were less advanced than IC, but not in the rectum. Conclusion ICs account for 47.5% of the CRCs in the screened population, indicating approximately 50% screening test sensitivity: guaiac faecal occult blood testing (gFOBT) sensitivity is less for women than for men and gFOBT screening may not be effective for rectal cancer.
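
The quoted figure of roughly 50% sensitivity follows directly from the reported counts, since test sensitivity in the screened population can be estimated as SC / (SC + IC):

```python
# Programme sensitivity implied by the counts reported above:
# 555 screen-detected cancers (SC) and 502 interval cancers (IC).
sc, ic = 555, 502
ic_fraction = ic / (sc + ic)      # share of screened-population CRCs that were ICs
sensitivity = sc / (sc + ic)      # implied gFOBT screening test sensitivity
print(f"IC fraction: {ic_fraction:.1%}, implied sensitivity: {sensitivity:.1%}")
```

This reproduces the 47.5% interval-cancer share stated in the conclusion, giving an implied sensitivity of about 52.5%, i.e. "approximately 50%".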

3. High-intensity interval training: Modulating interval duration in overweight/obese men

PubMed Central

Smith-Ryan, Abbie E.; Melvin, Malia N.; Wingfield, Hailee L.

2015-01-01

Introduction High-intensity interval training (HIIT) is a time-efficient strategy shown to induce various cardiovascular and metabolic adaptations. Little is known about the optimal tolerable combination of intensity and volume necessary for adaptations, especially in clinical populations. Objectives In a randomized controlled pilot design, we evaluated the effects of two types of interval training protocols, varying in intensity and interval duration, on clinical outcomes in overweight/obese men. Methods Twenty-five men [body mass index (BMI) > 25 kg·m−2] completed baseline body composition measures: fat mass (FM), lean mass (LM) and percent body fat (%BF) and fasting blood glucose, lipids and insulin (IN). A graded exercise cycling test was completed for peak oxygen consumption (VO2peak) and power output (PO). Participants were randomly assigned to high-intensity short interval (1MIN-HIIT), high-intensity interval (2MIN-HIIT) or control (CON) groups. 1MIN-HIIT and 2MIN-HIIT completed 3 weeks of cycling interval training, 3 days/week, consisting of either 10 × 1 min bouts at 90% PO with 1 min rests (1MIN-HIIT) or 5 × 2 min bouts with 1 min rests at undulating intensities (80%–100%) (2MIN-HIIT). Results There were no significant training effects on FM (Δ1.06 ± 1.25 kg) or %BF (Δ1.13% ± 1.88%), compared to CON. Increases in LM were not significant but increased by 1.7 kg and 2.1 kg for the 1MIN and 2MIN-HIIT groups, respectively. Increases in VO2peak were also not significant for the 1MIN (3.4 ml·kg−1·min−1) or 2MIN group (2.7 ml·kg−1·min−1). IN sensitivity (HOMA-IR) improved for both training groups (Δ −2.78 ± 3.48 units; p < 0.05) compared to CON. Conclusion HIIT may be an effective short-term strategy to improve cardiorespiratory fitness and IN sensitivity in overweight males. PMID:25913937
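
The HOMA-IR index used above is conventionally computed from fasting glucose and insulin. The sketch below uses the standard conventional-units formula; the input values are purely illustrative, not data from the study:

```python
def homa_ir(glucose_mg_dl, insulin_uU_ml):
    """Homeostatic Model Assessment of insulin resistance (conventional units).

    HOMA-IR = fasting glucose (mg/dL) * fasting insulin (uU/mL) / 405;
    the equivalent SI form divides glucose (mmol/L) * insulin by 22.5.
    """
    return glucose_mg_dl * insulin_uU_ml / 405.0

# Illustrative values only (not from the study):
print(homa_ir(100.0, 10.0))
```

A training-induced drop in HOMA-IR, as reported for both HIIT groups, reflects improved insulin sensitivity at a given fasting glucose.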

4. The Interval approach to braneworld gravity

SciTech Connect

Carena, Marcela; Lykken, Joseph D.; Park, Minjoon; /Chicago U., EFI

2005-06-01

Gravity in five-dimensional braneworld backgrounds may exhibit extra scalar degrees of freedom with problematic features, including kinetic ghosts and strong coupling behavior. Analysis of such effects is hampered by the standard heuristic approaches to braneworld gravity, which use the equations of motion as the starting point, supplemented by orbifold projections and junction conditions. Here we develop the interval approach to braneworld gravity, which begins with an action principle. This shows how to implement general covariance, despite allowing metric fluctuations that do not vanish on the boundaries. We reproduce simple Z₂ orbifolds of gravity, even though in this approach we never perform a Z₂ projection. We introduce a family of "straight gauges", which are bulk coordinate systems in which both branes appear as straight slices in a single coordinate patch. Straight gauges are extremely useful for analyzing metric fluctuations in braneworld models. By explicit gauge fixing, we show that a general AdS₅/AdS₄ setup with two branes has at most a radion, but no physical "brane-bending" modes.

5. Statistical Properties of Extreme Solar Activity Intervals

Lioznova, A. V.; Blinov, A. V.

2014-01-01

A study of long-term solar variability reflected in indirect indices of past solar activity leads to stimulating results. We compare the statistics of intervals of very low and very high solar activity derived from two cosmogenic radionuclide records and look for consistency in their timing and physical interpretation. According to the applied criteria, the numbers of minima and of maxima are 61 and 68, respectively, from the 10Be record, and 42 and 46 from the 14C record. The difference between the enhanced and depressed states of solar activity becomes apparent in the difference in their statistical distributions. We find no correlation between the level or type (minimum or maximum) of an extremum and the level or type of the predecessor. The hypothesis of solar activity as a periodic process on the millennial time scale is not supported by the existing proxies. A new homogeneous series of 10Be measurements in polar ice covering the Holocene would be of great value for eliminating the existing discrepancy in the available solar activity reconstructions.

6. An Investigation of Interval Management Displays

NASA Technical Reports Server (NTRS)

Swieringa, Kurt A.; Wilson, Sara R.; Shay, Rick

2015-01-01

NASA's first Air Traffic Management (ATM) Technology Demonstration (ATD-1) was created to transition the most mature ATM technologies from the laboratory to the National Airspace System. One selected technology is Interval Management (IM), which uses onboard aircraft automation to compute speeds that help the flight crew achieve and maintain precise spacing behind a preceding aircraft. Since ATD-1 focuses on a near-term environment, the ATD-1 flight demonstration prototype requires radio voice communication to issue an IM clearance. Retrofit IM displays will enable pilots to both enter information into the IM avionics and monitor IM operation. These displays could consist of an interface to enter data from an IM clearance and also an auxiliary display that presents critical information in the primary field-of-view. A human-in-the-loop experiment was conducted to examine usability and acceptability of retrofit IM displays, which flight crews found acceptable. Results also indicate the need for salient alerting when new speeds are generated and the desire to have a primary field of view display available that can display text and graphic trend indicators.

7. Statistical Coding and Decoding of Heartbeat Intervals

PubMed Central

Lucena, Fausto; Barros, Allan Kardec; Príncipe, José C.; Ohnishi, Noboru

2011-01-01

The heart integrates neuroregulatory messages into specific bands of frequency, such that the overall amplitude spectrum of the cardiac output reflects the variations of the autonomic nervous system. This modulatory mechanism seems to be well adjusted to the unpredictability of the cardiac demand, maintaining a proper cardiac regulation. A longstanding theory holds that biological organisms facing an ever-changing environment are likely to evolve adaptive mechanisms to extract essential features in order to adjust their behavior. The key question, however, has been to understand how the neural circuitry self-organizes these feature detectors to select behaviorally relevant information. Previous studies in computational perception suggest that a neural population enhances information that is important for survival by minimizing the statistical redundancy of the stimuli. Herein we investigate whether the cardiac system makes use of a redundancy reduction strategy to regulate the cardiac rhythm. Based on a network of neural filters optimized to code heartbeat intervals, we learn a population code that maximizes the information across the neural ensemble. The emerging population code displays filter tuning properties whose characteristics explain diverse aspects of the autonomic cardiac regulation, such as the compromise between fast and slow cardiac responses. We show that the filters yield responses that are quantitatively similar to observed heart rate responses during direct sympathetic or parasympathetic nerve stimulation. Our findings suggest that the heart decodes autonomic stimuli according to information theory principles analogous to how perceptual cues are encoded by sensory systems. PMID:21694763

8. A Survey of Techniques for Approximate Computing

DOE PAGESBeta

Mittal, Sparsh

2016-03-18

Approximate computing trades off computation quality against the effort expended, and as rising performance demands confront plateauing resource budgets, approximate computing has become not merely attractive but imperative. Here, we present a survey of techniques for approximate computing (AC). We discuss strategies for finding approximable program portions and monitoring output quality, techniques for using AC in different processing units (e.g., CPU, GPU and FPGA), processor components, memory technologies, etc., and programming frameworks for AC. Moreover, we classify these techniques based on several key characteristics to emphasize their similarities and differences. Finally, the aim of this paper is to provide researchers with insights into the working of AC techniques and to inspire more efforts in this area to make AC the mainstream computing approach in future systems.

9. Adiabatic approximation for the density matrix

Band, Yehuda B.

1992-05-01

An adiabatic approximation for the Liouville density-matrix equation which includes decay terms is developed. The adiabatic approximation employs the eigenvectors of the non-normal Liouville operator. The approximation is valid when there exists a complete set of eigenvectors of the non-normal Liouville operator (i.e., the eigenvectors span the density-matrix space), the time rate of change of the Liouville operator is small, and an auxiliary matrix is nonsingular. Numerical examples are presented involving efficient population transfer in a molecule by stimulated Raman scattering, with the intermediate level of the molecule decaying on a time scale that is fast compared with the pulse durations of the pump and Stokes fields. The adiabatic density-matrix approximation can be simply used to determine the density matrix for atomic or molecular systems interacting with cw electromagnetic fields when spontaneous emission or other decay mechanisms prevail.

10. Approximate probability distributions of the master equation

Thomas, Philipp; Grima, Ramon

2015-07-01

Master equations are common descriptions of mesoscopic systems. Analytical solutions to these equations can rarely be obtained. We here derive an analytical approximation of the time-dependent probability distribution of the master equation using orthogonal polynomials. The solution is given in two alternative formulations: a series with continuous and a series with discrete support, both of which can be systematically truncated. While both approximations satisfy the system size expansion of the master equation, the continuous distribution approximations become increasingly negative and tend to oscillations with increasing truncation order. In contrast, the discrete approximations rapidly converge to the underlying non-Gaussian distributions. The theory is shown to lead to particularly simple analytical expressions for the probability distributions of molecule numbers in metabolic reactions and gene expression systems.

11. An approximation method for electrostatic Vlasov turbulence

NASA Technical Reports Server (NTRS)

Klimas, A. J.

1979-01-01

Electrostatic Vlasov turbulence in a bounded spatial region is considered. An iterative approximation method with a proof of convergence is constructed. The method is non-linear and applicable to strong turbulence.

12. Linear Approximation SAR Azimuth Processing Study

NASA Technical Reports Server (NTRS)

Lindquist, R. B.; Masnaghetti, R. K.; Belland, E.; Hance, H. V.; Weis, W. G.

1979-01-01

A segmented linear approximation of the quadratic phase function that is used to focus the synthetic antenna of a SAR was studied. Ideal focusing, using a quadratically varying phase focusing function during the time radar target histories are gathered, requires a large number of complex multiplications. These can be largely eliminated by using linear approximation techniques. The result is a reduced processor size and chip count relative to ideally focused processing and a correspondingly increased feasibility for spaceworthy implementation. A preliminary design and sizing for a spaceworthy linear approximation SAR azimuth processor meeting requirements similar to those of the SEASAT-A SAR was developed. The study resulted in a design with approximately 1500 ICs, 1.2 cubic feet of volume, and 350 watts of power for a single look, 4000 range cell azimuth processor with 25 meters resolution.
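
The error incurred by a segmented linear approximation of a quadratic phase is easy to bound: a chord over a segment of width h misses a·t² by at most a·h²/4 (the general linear-interpolation bound |f″|h²/8 with f″ = 2a), so halving the segment width quarters the worst-case phase error. A minimal sketch under these assumptions (the function names are illustrative, not from the study), with a brute-force check:

```python
def piecewise_linear_error(a, T, n_seg):
    """Worst-case error when a*t^2 on [-T, T] is replaced by n_seg chords.

    For a quadratic, linear interpolation over a segment of width h has
    max error |f''| * h^2 / 8 = a * h^2 / 4, attained at the segment midpoint.
    """
    h = 2.0 * T / n_seg
    return a * h * h / 4.0

def brute_force_error(a, T, n_seg, samples_per_seg=1000):
    """Numerically confirm the bound by sampling each chord densely."""
    h = 2.0 * T / n_seg
    worst = 0.0
    for k in range(n_seg):
        t0 = -T + k * h
        f0, f1 = a * t0 * t0, a * (t0 + h) * (t0 + h)
        for j in range(samples_per_seg + 1):
            t = t0 + h * j / samples_per_seg
            chord = f0 + (f1 - f0) * (t - t0) / h
            worst = max(worst, abs(a * t * t - chord))
    return worst

for n in (4, 8, 16):
    print(n, piecewise_linear_error(1.0, 1.0, n))
```

The quadratic decay of error with segment width is what lets a modest number of linear segments replace the ideal quadratic focusing function at acceptable accuracy.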

13. Approximation concepts for efficient structural synthesis

NASA Technical Reports Server (NTRS)

Schmit, L. A., Jr.; Miura, H.

1976-01-01

It is shown that efficient structural synthesis capabilities can be created by using approximation concepts to mesh finite element structural analysis methods with nonlinear mathematical programming techniques. The history of the application of mathematical programming techniques to structural design optimization problems is reviewed. Several rather general approximation concepts are described along with the technical foundations of the ACCESS 1 computer program, which implements several approximation concepts. A substantial collection of structural design problems involving truss and idealized wing structures is presented. It is concluded that since the basic ideas employed in creating the ACCESS 1 program are rather general, its successful development supports the contention that the introduction of approximation concepts will lead to the emergence of a new generation of practical and efficient, large scale, structural synthesis capabilities in which finite element analysis methods and mathematical programming algorithms will play a central role.

14. Growing degree hours - a simple, accurate, and precise protocol to approximate growing heat summation for grapevines.

PubMed

Gu, S

2016-08-01

Despite its low accuracy and consistency, growing degree days (GDD) has been widely used to approximate growing heat summation (GHS) for regional classification and phenological prediction. GDD is usually calculated from the mean of daily minimum and maximum temperatures (GDDmm) above a growing base temperature (Tgb). To determine approximation errors and accuracy, daily and cumulative GDDmm were compared to GDD based on daily average temperature (GDDavg), growing degree hours (GDH) based on hourly temperatures, and growing degree minutes (GDM) based on minute-by-minute temperatures. Finite error, due to the difference between measured and true temperatures above Tgb, is large in GDDmm but is negligible in GDDavg, GDH, and GDM, depending only upon the number of measured temperatures used for daily approximation. Hidden negative error, due to temperatures below Tgb being averaged in when the approximation interval is larger than the measuring interval, is large in GDDmm and GDDavg but is negligible in GDH and GDM. Both GDH and GDM improve GHS approximation accuracy over GDDmm or GDDavg by summing multiple integration rectangles to reduce both finite and hidden negative errors. GDH is proposed as the standardized GHS approximation protocol, providing adequate accuracy and high precision independent of Tgb while requiring simple data recording and processing. PMID:26589826
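
The hidden negative error described above is easy to reproduce: averaging a day's minimum and maximum before subtracting the base temperature lets sub-base hours cancel above-base heat, whereas hourly summation (GDH) truncates each hour at the base first. A minimal sketch with a synthetic sinusoidal day (illustrative values, not data from the study):

```python
import math

def gdd_minmax(tmin, tmax, t_base):
    """GDDmm: max(0, (tmin + tmax)/2 - t_base) in degree-days."""
    return max(0.0, (tmin + tmax) / 2.0 - t_base)

def gdh(hourly_temps, t_base):
    """GDH: per-hour truncation at the base, expressed in degree-days."""
    return sum(max(0.0, t - t_base) for t in hourly_temps) / 24.0

t_base = 10.0
# Synthetic sinusoidal day swinging 4..24 C around a 14 C mean:
hourly = [14.0 + 10.0 * math.sin(2 * math.pi * h / 24.0) for h in range(24)]

print(gdd_minmax(min(hourly), max(hourly), t_base))  # sub-base hours cancel heat
print(gdh(hourly, t_base))                           # sub-base hours truncated to 0
```

For this day GDDmm yields 4.0 degree-days while GDH yields roughly 5.4, because the nine hours below the 10 C base subtract from the min/max mean but contribute exactly zero to GDH.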

15. Growing degree hours - a simple, accurate, and precise protocol to approximate growing heat summation for grapevines

Gu, S.

2016-08-01

Despite its low accuracy and consistency, growing degree days (GDD) has been widely used to approximate growing heat summation (GHS) for regional classification and phenological prediction. GDD is usually calculated from the mean of daily minimum and maximum temperatures (GDDmm) above a growing base temperature (Tgb). To determine approximation errors and accuracy, daily and cumulative GDDmm were compared to GDD based on daily average temperature (GDDavg), growing degree hours (GDH) based on hourly temperatures, and growing degree minutes (GDM) based on minute-by-minute temperatures. Finite error, due to the difference between measured and true temperatures above Tgb, is large in GDDmm but is negligible in GDDavg, GDH, and GDM, depending only upon the number of measured temperatures used for daily approximation. Hidden negative error, due to temperatures below Tgb being averaged in when the approximation interval is larger than the measuring interval, is large in GDDmm and GDDavg but is negligible in GDH and GDM. Both GDH and GDM improve GHS approximation accuracy over GDDmm or GDDavg by summing multiple integration rectangles to reduce both finite and hidden negative errors. GDH is proposed as the standardized GHS approximation protocol, providing adequate accuracy and high precision independent of Tgb while requiring simple data recording and processing.

16. Some Recent Progress for Approximation Algorithms

Kawarabayashi, Ken-ichi

We survey some recent progress on approximation algorithms. Our main focus is two problems for which there have been recent breakthroughs: the edge-disjoint paths problem and the graph coloring problem. These breakthroughs involve three ingredients that are quite central in approximation algorithms: (1) the combinatorial (graph-theoretical) approach, (2) the LP-based approach, and (3) the semidefinite programming approach. We also sketch how these ingredients are used to obtain the recent developments.

17. Polynomial approximation of functions in Sobolev spaces

NASA Technical Reports Server (NTRS)

Dupont, T.; Scott, R.

1980-01-01

Constructive proofs and several generalizations of approximation results of J. H. Bramble and S. R. Hilbert are presented. Using an averaged Taylor series, we represent a function as a polynomial plus a remainder. The remainder can be manipulated in many ways to give different types of bounds. Approximation of functions in fractional order Sobolev spaces is treated as well as the usual integer order spaces and several nonstandard Sobolev-like spaces.

18. Approximate Solutions Of Equations Of Steady Diffusion

NASA Technical Reports Server (NTRS)

Edmonds, Larry D.

1992-01-01

Rigorous analysis yields reliable criteria for "best-fit" functions. Improved "curve-fitting" method yields approximate solutions to differential equations of steady-state diffusion. Method applies to problems in which rates of diffusion depend linearly or nonlinearly on concentrations of diffusants, approximate solutions analytic or numerical, and boundary conditions of Dirichlet type, of Neumann type, or mixture of both types. Applied to equations for diffusion of charge carriers in semiconductors in which mobilities and lifetimes of charge carriers depend on concentrations.

19. Polynomial approximation of functions in Sobolev spaces

SciTech Connect

Dupont, T.; Scott, R.

1980-04-01

Constructive proofs and several generalizations of approximation results of J. H. Bramble and S. R. Hilbert are presented. Using an averaged Taylor series, we represent a function as a polynomial plus a remainder. The remainder can be manipulated in many ways to give different types of bounds. Approximation of functions in fractional order Sobolev spaces is treated as well as the usual integer order spaces and several nonstandard Sobolev-like spaces.

20. Spectrally-Invariant Approximation Within Atmospheric Radiative Transfer

NASA Technical Reports Server (NTRS)

Marshak, A.; Knyazikhin, Y.; Chiu, J. C.; Wiscombe, W. J.

2011-01-01

Certain algebraic combinations of single scattering albedo and solar radiation reflected from, or transmitted through, vegetation canopies do not vary with wavelength. These "spectrally invariant relationships" are the consequence of wavelength independence of the extinction coefficient and scattering phase function in vegetation. In general, this wavelength independence does not hold in the atmosphere, but in clouddominated atmospheres the total extinction and total scattering phase function vary only weakly with wavelength. This paper identifies the atmospheric conditions under which the spectrally invariant approximation can accurately describe the extinction. and scattering properties of cloudy atmospheres. The validity of the assumptions and the accuracy of the approximation are tested with ID radiative transfer calculations using publicly available radiative transfer models: Discrete Ordinate Radiative Transfer (DISORT) and Santa Barbara DISORT Atmospheric Radiative Transfer (SBDART). It is shown for cloudy atmospheres with cloud optical depth above 3, and for spectral intervals that exclude strong water vapor absorption, that the spectrally invariant relationships found in vegetation canopy radiative transfer are valid to better than 5%. The physics behind this phenomenon, its mathematical basis, and possible applications to remote sensing and climate are discussed.