Science.gov

Sample records for energy interval approximation

  1. Function approximation using adaptive and overlapping intervals

    SciTech Connect

    Patil, R.B.

    1995-05-01

    A problem common to many disciplines is to approximate a function given only the values of the function at various points in input variable space. A method is proposed for approximating a function that maps several input variables to one output variable. The model takes the form of a weighted average of overlapping basis functions defined over intervals. The number of such basis functions and their parameters (widths and centers) are automatically determined from given training data by a learning algorithm. The proposed algorithm can be seen as placing a nonuniform multidimensional grid in the input domain with overlapping cells. The non-uniformity and overlap of the cells are achieved by a learning algorithm that optimizes a given objective function. This approach is motivated by the fuzzy modeling approach and by learning algorithms used for clustering and classification in pattern recognition. The basics of why and how the approach works are given. A few examples of nonlinear regression and classification are modeled. The relationship between the proposed technique, radial basis neural networks, kernel regression, probabilistic neural networks, and fuzzy modeling is explained. Finally, advantages and disadvantages are discussed.
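
    As a rough illustration of this representation (a sketch only, not Patil's adaptive learning scheme, which would also tune the centers and widths), the following fits a one-dimensional function with a fixed grid of overlapping triangular basis functions whose normalized memberships form the weighted average; all names and parameters are illustrative.

    ```python
    # Sketch: weighted averaging of overlapping interval basis functions
    # (1-D, fixed grid; the paper additionally learns centers/widths).
    import numpy as np

    def triangular_basis(x, centers, width):
        """Overlapping triangular memberships, one per interval center."""
        d = np.abs(x[:, None] - centers[None, :])
        return np.maximum(0.0, 1.0 - d / width)

    def fit(x, y, n_basis=8, overlap=2.0):
        centers = np.linspace(x.min(), x.max(), n_basis)
        width = overlap * (centers[1] - centers[0])   # > spacing => overlap
        B = triangular_basis(x, centers, width)
        B /= B.sum(axis=1, keepdims=True)             # weighted *averaging*
        coeffs, *_ = np.linalg.lstsq(B, y, rcond=None)
        return centers, width, coeffs

    def predict(x, centers, width, coeffs):
        B = triangular_basis(x, centers, width)
        B /= B.sum(axis=1, keepdims=True)
        return B @ coeffs

    x = np.linspace(0.0, 1.0, 200)
    y = np.sin(2 * np.pi * x) + 0.1 * np.random.randn(x.size)
    params = fit(x, y)
    print(np.abs(predict(x, *params) - np.sin(2 * np.pi * x)).mean())
    ```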

  2. A comparison of approximate interval estimators for the Bernoulli parameter

    NASA Technical Reports Server (NTRS)

    Leemis, Lawrence; Trivedi, Kishor S.

    1993-01-01

    The goal of this paper is to compare the accuracy of two approximate confidence interval estimators for the Bernoulli parameter p. The approximate confidence intervals are based on the normal and Poisson approximations to the binomial distribution. Charts are given to indicate which approximation is appropriate for certain sample sizes and point estimators.
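
    A minimal sketch of the two interval types being compared, assuming the usual Wald form for the normal approximation and chi-square (Garwood) limits for the Poisson one; the paper's charts and selection criteria are not reproduced here.

    ```python
    # Two approximate confidence intervals for the Bernoulli parameter p,
    # given x successes in n trials.
    import numpy as np
    from scipy import stats

    def wald_interval(x, n, conf=0.95):
        z = stats.norm.ppf(0.5 + conf / 2)
        p = x / n
        half = z * np.sqrt(p * (1 - p) / n)
        return p - half, p + half

    def poisson_interval(x, n, conf=0.95):
        # Treat x as Poisson(n*p) and use the chi-square (Garwood) limits
        # for the Poisson mean, then rescale by n.
        a = 1 - conf
        lo = 0.0 if x == 0 else stats.chi2.ppf(a / 2, 2 * x) / 2
        hi = stats.chi2.ppf(1 - a / 2, 2 * (x + 1)) / 2
        return lo / n, hi / n

    print(wald_interval(3, 50))     # small-sample Wald can dip below 0
    print(poisson_interval(3, 50))  # Poisson form suits rare events
    ```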

  3. Constructing Approximate Confidence Intervals for Parameters with Structural Equation Models

    ERIC Educational Resources Information Center

    Cheung, Mike W. -L.

    2009-01-01

    Confidence intervals (CIs) for parameters are usually constructed based on the estimated standard errors. These are known as Wald CIs. This article argues that likelihood-based CIs (CIs based on likelihood ratio statistics) are often preferred to Wald CIs. It shows how the likelihood-based CIs and the Wald CIs for many statistics and psychometric…

  4. Approximate Confidence Interval for Difference of Fit in Structural Equation Models.

    ERIC Educational Resources Information Center

    Raykov, Tenko

    2001-01-01

    Discusses a method, based on bootstrap methodology, for obtaining an approximate confidence interval for the difference in root mean square error of approximation of two structural equation models. Illustrates the method using a numerical example. (SLD)
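
    Raykov's procedure refits both structural equation models to every bootstrap resample; fitting SEMs is out of scope here, so the sketch below shows only the generic percentile-bootstrap skeleton, with placeholder fit statistics standing in for the two RMSEAs.

    ```python
    # Generic percentile-bootstrap CI for a difference of fit statistics.
    # `fit_stat_a`/`fit_stat_b` are hypothetical stand-ins for the RMSEAs
    # of the two models.
    import numpy as np

    rng = np.random.default_rng(0)

    def bootstrap_diff_ci(data, fit_stat_a, fit_stat_b,
                          n_boot=2000, conf=0.90):
        n = len(data)
        diffs = np.empty(n_boot)
        for b in range(n_boot):
            resample = data[rng.integers(0, n, n)]  # rows with replacement
            diffs[b] = fit_stat_a(resample) - fit_stat_b(resample)
        lo, hi = np.percentile(diffs, [(1 - conf) / 2 * 100,
                                       (1 + conf) / 2 * 100])
        return lo, hi

    # Toy illustration: the two "fit statistics" reduced to mean vs median.
    data = rng.normal(0.05, 0.02, size=200)
    print(bootstrap_diff_ci(data, lambda d: d.mean(),
                            lambda d: np.median(d)))
    ```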

  5. A Novel Method of the Generalized Interval-Valued Fuzzy Rough Approximation Operators

    PubMed Central

    Xue, Tianyu; Xue, Zhan'ao; Cheng, Huiru; Liu, Jie; Zhu, Tailong

    2014-01-01

    Rough set theory is a suitable tool for dealing with the imprecision, uncertainty, incompleteness, and vagueness of knowledge. In this paper, new lower and upper approximation operators for generalized fuzzy rough sets are constructed, and their definitions are expanded to the interval-valued environment. Furthermore, the properties of this type of rough sets are analyzed. These operators are shown to be equivalent to the generalized interval fuzzy rough approximation operators introduced by Dubois, which are determined by any interval-valued fuzzy binary relation expressed in a generalized approximation space. Main properties of these operators are discussed under different interval-valued fuzzy binary relations, and the illustrative examples are given to demonstrate the main features of the proposed operators. PMID:25162065

  6. Approximate representations of random intervals for hybrid uncertainty quantification in engineering modeling

    SciTech Connect

    Joslyn, C.

    2004-01-01

    We review our approach to the representation and propagation of hybrid uncertainties through high-complexity models, based on quantities known as random intervals. These structures have a variety of mathematical descriptions, for example as interval-valued random variables, statistical collections of intervals, or Dempster-Shafer bodies of evidence on the Borel field. But methods which provide simpler, albeit approximate, representations of random intervals are highly desirable, including p-boxes and traces. Each random interval, through its cumulative belief and plausibility measure functions, generates a unique p-box whose constituent CDFs are all of those consistent with the random interval. In turn, each p-box generates an equivalence class of random intervals consistent with it. Then, each p-box necessarily generates a unique trace which stands as the fuzzy set representation of the p-box or random interval. In turn, each trace generates an equivalence class of p-boxes. The heart of our approach is to try to understand the tradeoffs between error and simplicity introduced when p-boxes or traces are used to stand in for various random interval operations. For example, Joslyn has argued that for elicitation and representation tasks, traces can be the most appropriate structure, and has proposed a method for the generation of canonical random intervals from elicited traces. But alternatively, models built as algebraic equations of uncertainty-valued variables (in our case, random-interval-valued) propagate uncertainty through convolution operations on basic algebraic expressions, and while convolution operations are defined on all three structures, we have observed that the results of only some of these operations are preserved as one moves through these three levels of specificity. We report on the status and progress of this modeling approach concerning the relations between these mathematical structures within this overall framework.

  7. Using the Delta Method for Approximate Interval Estimation of Parameter Functions in SEM

    ERIC Educational Resources Information Center

    Raykov, Tenko; Marcoulides, George A.

    2004-01-01

    In applications of structural equation modeling, it is often desirable to obtain measures of uncertainty for special functions of model parameters. This article provides a didactic discussion of how a method widely used in applied statistics can be employed for approximate standard error and confidence interval evaluation of such functions. The…
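
    The device discussed is the classical delta method: Var f(theta-hat) ≈ g' Sigma g, with g the gradient of f at the estimate. A minimal numerical sketch, with a hypothetical parameter function (a ratio) and covariance matrix:

    ```python
    # Delta-method SE and CI for a smooth function f(theta) of estimated
    # parameters, given their covariance matrix Sigma. The SEM setting
    # only supplies theta-hat and Sigma; the device itself is generic.
    import numpy as np
    from scipy import stats

    def delta_method_ci(f, theta_hat, Sigma, conf=0.95, eps=1e-6):
        theta_hat = np.asarray(theta_hat, dtype=float)
        # Numerical gradient of f at theta_hat (forward differences).
        grad = np.array([
            (f(theta_hat + eps * e) - f(theta_hat)) / eps
            for e in np.eye(theta_hat.size)
        ])
        se = np.sqrt(grad @ Sigma @ grad)        # Var f ~ g' Sigma g
        z = stats.norm.ppf(0.5 + conf / 2)
        val = f(theta_hat)
        return val - z * se, val + z * se

    # Example: CI for the ratio of two parameter estimates.
    theta = [1.2, 0.8]
    Sigma = np.array([[0.04, 0.01], [0.01, 0.09]])
    print(delta_method_ci(lambda t: t[0] / t[1], theta, Sigma))
    ```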

  8. Approximate Confidence Intervals for Estimates of Redundancy between Sets of Variables.

    ERIC Educational Resources Information Center

    Lambert, Zarrel V.; And Others

    1989-01-01

    Bootstrap methodology is presented that yields approximations of the sampling variation of redundancy estimates while assuming little a priori knowledge about the distributions of these statistics. Results of numerical demonstrations suggest that bootstrap confidence intervals may offer substantial assistance in interpreting the results of…

  9. An approximation of interval type-2 fuzzy controllers using fuzzy ratio switching type-1 fuzzy controllers.

    PubMed

    Tao, C W; Taur, Jinshiuh; Chuang, Chen-Chia; Chang, Chia-Wen; Chang, Yeong-Hwa

    2011-06-01

    In this paper, the interval type-2 fuzzy controllers (FC(IT2)s) are approximated using the fuzzy ratio switching type-1 FCs to avoid the complex type-reduction process required for the interval type-2 FCs. The fuzzy ratio switching type-1 FCs (FC(FRST1)s) are designed to be a fuzzy combination of the possible-leftmost and possible-rightmost type-1 FCs. The fuzzy ratio switching type-1 fuzzy control technique is applied with the sliding control technique to realize the hybrid fuzzy ratio switching type-1 fuzzy sliding controllers (HFSC(FRST1)s) for the double-pendulum-and-cart system. The simulation results and comparisons with other approaches are provided to demonstrate the effectiveness of the proposed HFSC(FRST1)s. PMID:21189244

  10. Non-Gaussian distributions of melodic intervals in music: The Lévy-stable approximation

    NASA Astrophysics Data System (ADS)

    Niklasson, Gunnar A.; Niklasson, Maria H.

    2015-11-01

    The analysis of structural patterns in music is of interest in order to increase our fundamental understanding of music, as well as for devising algorithms for computer-generated music, so-called algorithmic composition. Musical melodies can be analyzed in terms of a “music walk” between the pitches of successive tones in a notescript, in analogy with the “random walk” model commonly used in physics. We find that the distribution of melodic intervals between tones can be approximated with a Lévy-stable distribution. Since music also exhibits self-affine scaling, we propose that the “music walk” should be modelled as a Lévy motion. We find that the Lévy motion model captures basic structural patterns in classical as well as in folk music.
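
    A hedged sketch of the fitting step: SciPy's four-parameter stable fit applied to successive pitch differences. The melody below is a synthetic heavy-tailed toy stand-in for real note data, and stable-law fitting in SciPy is numerically heavy, so the sample is kept small (the call can still take a while).

    ```python
    # Approximate a melodic-interval distribution with a Lévy-stable law.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)

    # Toy stand-in for successive MIDI pitches of a melody ("music walk").
    pitches = np.cumsum(rng.standard_t(df=3, size=200))  # heavy-tailed steps
    intervals = np.diff(pitches)                         # melodic intervals

    # Fit the four-parameter stable distribution; alpha = 2 recovers a
    # Gaussian, alpha < 2 gives the heavy tails the paper reports.
    alpha, beta, loc, scale = stats.levy_stable.fit(intervals)
    print(f"alpha={alpha:.2f}, beta={beta:.2f}")
    ```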

  11. Analysis of accuracy of approximate, simultaneous, nonlinear confidence intervals on hydraulic heads in analytical and numerical test cases

    USGS Publications Warehouse

    Hill, M.C.

    1989-01-01

    Inaccuracies in parameter values, parameterization, stresses, and boundary conditions of analytical solutions and numerical models of groundwater flow produce errors in simulated hydraulic heads. These errors can be quantified in terms of approximate, simultaneous, nonlinear confidence intervals presented in the literature. Approximate confidence intervals can be applied in both error and sensitivity analysis and can be used prior to calibration or when calibration was accomplished by trial and error. The method is expanded for use in numerical problems, and the accuracy of the approximate intervals is evaluated using Monte Carlo runs. Four test cases are reported. -from Author

  12. Accuracy in Parameter Estimation for the Root Mean Square Error of Approximation: Sample Size Planning for Narrow Confidence Intervals

    ERIC Educational Resources Information Center

    Kelley, Ken; Lai, Keke

    2011-01-01

    The root mean square error of approximation (RMSEA) is one of the most widely reported measures of misfit/fit in applications of structural equation modeling. When the RMSEA is of interest, so too should be the accompanying confidence interval. A narrow confidence interval reveals that the plausible parameter values are confined to a relatively…

  13. Accuracy in Parameter Estimation for the Root Mean Square Error of Approximation: Sample Size Planning for Narrow Confidence Intervals.

    PubMed

    Kelley, Ken; Lai, Keke

    2011-02-01

    The root mean square error of approximation (RMSEA) is one of the most widely reported measures of misfit/fit in applications of structural equation modeling. When the RMSEA is of interest, so too should be the accompanying confidence interval. A narrow confidence interval reveals that the plausible parameter values are confined to a relatively small range at the specified level of confidence. The accuracy in parameter estimation approach to sample size planning is developed for the RMSEA so that the confidence interval for the population RMSEA will have a width whose expectation is sufficiently narrow. Analytic developments are shown to work well with a Monte Carlo simulation study. Freely available computer software is developed so that the methods discussed can be implemented. The methods are demonstrated for a repeated measures design where the way in which social relationships and initial depression influence coping strategies and later depression is examined.
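
    For context, the RMSEA confidence interval in question is conventionally obtained by inverting the noncentral chi-square distribution (the Steiger-Lind approach used in SEM software); a sketch under that assumption, with illustrative numbers:

    ```python
    # RMSEA confidence interval by noncentral chi-square inversion.
    # T is the model chi-square with df degrees of freedom, N the sample size.
    import numpy as np
    from scipy import stats, optimize

    def rmsea_ci(T, df, N, conf=0.90):
        a = (1 - conf) / 2

        def ncp_for(prob):
            # Noncentrality lambda with P(Chi2_df(lambda) <= T) = prob,
            # or 0 if even lambda = 0 leaves too little mass below T.
            f = lambda lam: stats.ncx2.cdf(T, df, lam) - prob
            lo, hi = 1e-10, 10.0 * T + 100.0
            if f(lo) < 0.0:
                return 0.0
            return optimize.brentq(f, lo, hi)

        lam_lo, lam_hi = ncp_for(1 - a), ncp_for(a)
        rmsea = lambda lam: np.sqrt(lam / (df * (N - 1)))
        return rmsea(lam_lo), rmsea(lam_hi)

    # Model chi-square 85.3 on 40 df from N = 300 cases (made-up numbers).
    print(rmsea_ci(T=85.3, df=40, N=300))
    ```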

  14. Energy Equation Approximation in Fluid Mechanics

    NASA Technical Reports Server (NTRS)

    Goldstein, Arthur W.

    1959-01-01

    There is some confusion in the literature of fluid mechanics in regard to the correct form of the energy equation for the study of the flow of nearly incompressible fluids. Several forms of the energy equation and their use are therefore discussed in this note.

  15. Fuzzy interval Finite Element/Statistical Energy Analysis for mid-frequency analysis of built-up systems with mixed fuzzy and interval parameters

    NASA Astrophysics Data System (ADS)

    Yin, Hui; Yu, Dejie; Yin, Shengwen; Xia, Baizhan

    2016-10-01

    This paper introduces mixed fuzzy and interval parametric uncertainties into the FE components of the hybrid Finite Element/Statistical Energy Analysis (FE/SEA) model for mid-frequency analysis of built-up systems, so that an uncertain ensemble combining non-parametric uncertainty with mixed fuzzy and interval parametric uncertainties arises. A fuzzy interval Finite Element/Statistical Energy Analysis (FIFE/SEA) framework is proposed to obtain the uncertain responses of built-up systems, which are described as intervals with fuzzy bounds, termed fuzzy-bounded intervals (FBIs) in this paper. Based on the level-cut technique, a first-order fuzzy interval perturbation FE/SEA (FFIPFE/SEA) and a second-order fuzzy interval perturbation FE/SEA method (SFIPFE/SEA) are developed to handle the mixed parametric uncertainties efficiently. FFIPFE/SEA approximates the response functions by a first-order Taylor series, while SFIPFE/SEA improves the accuracy by retaining second-order Taylor terms, with all mixed second-order terms neglected. To further improve the accuracy, a Chebyshev fuzzy interval method (CFIM) is proposed, in which Chebyshev polynomials are used to approximate the response functions. The FBIs are eventually reconstructed by assembling the extrema solutions at all cut levels. Numerical results on two built-up systems verify the effectiveness of the proposed methods.

  16. Approximate confidence intervals for moment-based estimators of the between-study variance in random effects meta-analysis.

    PubMed

    Jackson, Dan; Bowden, Jack; Baker, Rose

    2015-12-01

    Moment-based estimators of the between-study variance are very popular when performing random effects meta-analyses. This type of estimation has many advantages including computational and conceptual simplicity. Furthermore, by using these estimators in large samples, valid meta-analyses can be performed without the assumption that the treatment effects follow a normal distribution. Recently proposed moment-based confidence intervals for the between-study variance are exact under the random effects model but are quite elaborate. Here, we present a much simpler method for calculating approximate confidence intervals of this type. This method uses variance-stabilising transformations as its basis and can be used for a very wide variety of moment-based estimators in both the random effects meta-analysis and meta-regression models.
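
    As one concrete moment-based estimator of the kind the paper targets, a sketch of the classic DerSimonian-Laird estimator of the between-study variance tau^2; the paper's variance-stabilising interval construction itself is not reproduced here.

    ```python
    # DerSimonian-Laird moment estimator of tau^2 for random effects
    # meta-analysis, from study effects and their sampling variances.
    import numpy as np

    def dersimonian_laird_tau2(effects, variances):
        y = np.asarray(effects, float)
        v = np.asarray(variances, float)
        w = 1.0 / v                              # fixed-effect weights
        mu_fe = np.sum(w * y) / np.sum(w)
        Q = np.sum(w * (y - mu_fe) ** 2)         # Cochran's Q
        k = y.size
        c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
        return max(0.0, (Q - (k - 1)) / c)

    # Toy meta-analysis: five study effects with known sampling variances.
    print(dersimonian_laird_tau2([0.3, 0.1, 0.5, 0.4, 0.0],
                                 [0.02, 0.03, 0.02, 0.05, 0.04]))
    ```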

  17. Proportional damping approximation using the energy gain and simultaneous perturbation stochastic approximation

    NASA Astrophysics Data System (ADS)

    Sultan, Cornel

    2010-10-01

    The design of vector second-order linear systems for accurate proportional damping approximation is addressed. For this purpose an error system is defined using the difference between the generalized coordinates of the non-proportionally damped system and its proportionally damped approximation in modal space. The accuracy of the approximation is characterized using the energy gain of the error system and the design problem is formulated as selecting parameters of the non-proportionally damped system to ensure that this gain is sufficiently small. An efficient algorithm that combines linear matrix inequalities and simultaneous perturbation stochastic approximation is developed to solve the problem and examples of its application to tensegrity structures design are presented.

  18. New approximation for the effective energy of nonlinear conducting composites

    NASA Astrophysics Data System (ADS)

    Gibiansky, Leonid; Torquato, Salvatore

    1998-07-01

    Approximations for the effective energy and, thus, effective conductivity of nonlinear, isotropic conducting dispersions are developed. This is accomplished by using the Ponte Castaneda variational principles [Philos. Trans. R. Soc. London Ser. A 340, 1321 (1992)] and the Torquato approximation [J. Appl. Phys. 58, 3790 (1985)] of the effective conductivity of corresponding linear composites. The results are obtained for dispersions with superconducting or insulating inclusions, and, more generally, for phases with a power-law energy. It is shown that the new approximations lie within the best available rigorous upper and lower bounds on the effective energy.

  19. Energy flow: image correspondence approximation for motion analysis

    NASA Astrophysics Data System (ADS)

    Wang, Liangliang; Li, Ruifeng; Fang, Yajun

    2016-04-01

    We propose a correspondence approximation approach between temporally adjacent frames for motion analysis. First, an energy map is established to represent image spatial features on multiple scales using Gaussian convolution. On this basis, the energy flow at each layer is estimated using Gauss-Seidel iteration according to the energy invariance constraint. More specifically, at the core of the energy invariance constraint is the "energy conservation law," assuming that the spatial energy distribution of an image does not change significantly with time. Finally, the energy flow field at different layers is reconstructed by considering different smoothness degrees. Due to the multiresolution origin and energy-based implementation, our algorithm is able to quickly address correspondence searching issues in spite of background noise or illumination variation. We apply our correspondence approximation method to motion analysis, and experimental results demonstrate its applicability.

  20. Restricted second random phase approximations and Tamm-Dancoff approximations for electronic excitation energy calculations

    NASA Astrophysics Data System (ADS)

    Peng, Degao; Yang, Yang; Zhang, Peng; Yang, Weitao

    2014-12-01

    In this article, we develop systematically second random phase approximations (RPA) and Tamm-Dancoff approximations (TDA) of particle-hole and particle-particle channels for calculating molecular excitation energies. The second particle-hole RPA/TDA can capture double excitations missed by the particle-hole RPA/TDA and time-dependent density-functional theory (TDDFT), while the second particle-particle RPA/TDA recovers non-highest-occupied-molecular-orbital excitations missed by the particle-particle RPA/TDA. With proper orbital restrictions, these restricted second RPAs and TDAs have a formal scaling of only O(N^4). The restricted versions of second RPAs and TDAs are tested with various small molecules to show some positive results. Data suggest that the restricted second particle-hole TDA (r2ph-TDA) has the best overall performance with a correlation coefficient similar to TDDFT, but with a larger negative bias. The negative bias of the r2ph-TDA may be induced by the unaccounted ground state correlation energy to be investigated further. Overall, the r2ph-TDA is recommended to study systems with both single and some low-lying double excitations with a moderate accuracy. Some expressions on excited state property evaluations, such as ⟨Ŝ²⟩, are also developed and tested.

  1. Restricted second random phase approximations and Tamm-Dancoff approximations for electronic excitation energy calculations

    SciTech Connect

    Peng, Degao; Yang, Yang; Zhang, Peng; Yang, Weitao

    2014-12-07

    In this article, we develop systematically second random phase approximations (RPA) and Tamm-Dancoff approximations (TDA) of particle-hole and particle-particle channels for calculating molecular excitation energies. The second particle-hole RPA/TDA can capture double excitations missed by the particle-hole RPA/TDA and time-dependent density-functional theory (TDDFT), while the second particle-particle RPA/TDA recovers non-highest-occupied-molecular-orbital excitations missed by the particle-particle RPA/TDA. With proper orbital restrictions, these restricted second RPAs and TDAs have a formal scaling of only O(N^4). The restricted versions of second RPAs and TDAs are tested with various small molecules to show some positive results. Data suggest that the restricted second particle-hole TDA (r2ph-TDA) has the best overall performance with a correlation coefficient similar to TDDFT, but with a larger negative bias. The negative bias of the r2ph-TDA may be induced by the unaccounted ground state correlation energy to be investigated further. Overall, the r2ph-TDA is recommended to study systems with both single and some low-lying double excitations with a moderate accuracy. Some expressions on excited state property evaluations, such as ⟨Ŝ²⟩ are also developed and tested.

  2. Approximate Confidence Intervals for Standardized Effect Sizes in the Two-Independent and Two-Dependent Samples Design

    ERIC Educational Resources Information Center

    Viechtbauer, Wolfgang

    2007-01-01

    Standardized effect sizes and confidence intervals thereof are extremely useful devices for comparing results across different studies using scales with incommensurable units. However, exact confidence intervals for standardized effect sizes can usually be obtained only via iterative estimation procedures. The present article summarizes several…

  3. Local Density Approximation Exchange-correlation Free-energy Functional

    NASA Astrophysics Data System (ADS)

    Karasiev, Valentin; Sjostrom, Travis; Dufty, James; Trickey, S. B.

    2014-03-01

    Restricted path integral Monte-Carlo (RPIMC) simulation data for the homogeneous electron gas at finite temperatures are used to fit the exchange-correlation free energy as a function of the density and temperature. Together with a new finite-T spin-polarization interpolation, this provides the local spin density approximation (LSDA) for the exchange-correlation free-energy functional required by finite-T density functional theory. We discuss and compare different methods of fitting to the RPIMC data. The new function reproduces the RPIMC data in the fitting range of Wigner-Seitz radius and temperature, satisfies correct high-density, low- and high-T asymptotic limits and is applicable beyond the range of fitting data. Work supported by U.S. Dept. of Energy, grant DE-SC0002139 and by the DOE Office of Fusion Sciences (FES).

  4. Approximate scaling properties of RNA free energy landscapes

    NASA Technical Reports Server (NTRS)

    Baskaran, S.; Stadler, P. F.; Schuster, P.

    1996-01-01

    RNA free energy landscapes are analysed by means of "time-series" that are obtained from random walks restricted to excursion sets. The power spectra, the scaling of the jump size distribution, and the scaling of the curve length measured with different yardstick lengths are used to describe the structure of these "time series". Although they are stationary by construction, we find that their local behavior is consistent with both AR(1) and self-affine processes. Random walks confined to excursion sets (i.e., with the restriction that the fitness value exceeds a certain threshold at each step) exhibit essentially the same statistics as free random walks. We find that an AR(1) time series is in general approximately self-affine on timescales up to approximately the correlation length. We present an empirical relation between the correlation parameter rho of the AR(1) model and the exponents characterizing self-affinity.

  5. Diagrammatic self-energy approximations and the total particle number

    NASA Astrophysics Data System (ADS)

    Schindlmayr, Arno; García-González, P.; Godby, R. W.

    2001-12-01

    There is increasing interest in many-body perturbation theory as a practical tool for the calculation of ground-state properties. As a consequence, unambiguous sum rules such as the conservation of particle number under the influence of the Coulomb interaction have acquired an importance that did not exist for calculations of excited-state properties. In this paper we obtain a rigorous, simple relation whose fulfilment guarantees particle-number conservation in a given diagrammatic self-energy approximation. Hedin's G0W0 approximation does not satisfy this relation and hence violates the particle-number sum rule. Very precise calculations for the homogeneous electron gas and a model inhomogeneous electron system allow the extent of the nonconservation to be estimated.

  6. Energy loss and (de)coherence effects beyond eikonal approximation

    NASA Astrophysics Data System (ADS)

    Apolinário, Liliana; Armesto, Néstor; Milhano, Guilherme; Salgado, Carlos A.

    2014-11-01

    The parton branching process is known to be modified in the presence of a medium. Colour decoherence processes are known to determine the process of energy loss when the density of the medium is large enough to break the correlations between partons emitted from the same parent. In order to improve existing calculations that consider eikonal trajectories for both the emitter and the hardest emitted parton, we provide in this work the calculation of all finite energy corrections for the gluon radiation off a quark in a QCD medium that exist in the small angle approximation and for static scattering centres. Using the path integral formalism, all particles are allowed to undergo Brownian motion in the transverse plane and the offspring is allowed to carry an arbitrary fraction of the initial energy. The result is a general expression that contains both coherence and decoherence regimes that are controlled by the density of the medium and by the amount of broadening that each parton acquires independently.

  7. Constrained Parameterization of Reduced Density Approximation of Kinetic Energy Functionals

    NASA Astrophysics Data System (ADS)

    Chakraborty, Debajit; Trickey, Samuel; Karasiev, Valentin

    2014-03-01

    Evaluation of forces in ab initio MD is greatly accelerated by orbital-free DFT, especially at finite temperature. The recent achievement of a fully non-empirical constraint-based generalized gradient (GGA) functional for the Kohn-Sham KE Ts[n] brings to light the inherent limitations of GGAs. This motivates inclusion of higher-order derivatives in the form of reduced derivative approximation (RDA) functionals. That, in turn, requires new functional forms and design criteria. RDA functionals are constrained further to produce a positive-definite, non-singular Pauli potential. We focus on designing a non-empirical constraint-based meta-GGA functional with certain combinations of higher-order derivatives which avoid nuclear-site singularities to a specified order of gradient expansion. Here we report progress on this agenda. Work supported by U.S. Dept. of Energy, grant DE-SC0002139.

  8. Using temperature-programmed desorption and the condensation approximation to determine surface site-energy distributions: examining the approximation's bases.

    SciTech Connect

    Brown, L. F.; Travis, B. J.

    2004-01-01

    Investigators (e.g., Seebauer 1994, Bogillo and Shkilev 1999) have used the condensation approximation (CA) successfully for determining broad nonuniform surface site-energy distributions (SEDs) from temperature-programmed desorption (TPD) spectra and for identifying constant pre-exponential factors from peak analysis. The CA assumes that at any temperature T, desorption occurs only at sites with a single desorption activation energy (E_cdn). E_cdn is of course a function of T. Further, the approximation assumes that during TPD all sites with desorption energy E_cdn empty at T.

  9. Surface Segregation Energies of BCC Binaries from Ab Initio and Quantum Approximate Calculations

    NASA Technical Reports Server (NTRS)

    Good, Brian S.

    2003-01-01

    We compare dilute-limit segregation energies for selected BCC transition metal binaries computed using ab initio and quantum approximate energy methods. Ab initio calculations are carried out using the CASTEP plane-wave pseudopotential computer code, while quantum approximate results are computed using the Bozzolo-Ferrante-Smith (BFS) method with the most recent parameterization. Quantum approximate segregation energies are computed with and without atomistic relaxation. The ab initio calculations are performed without relaxation for the most part, but predicted relaxations from quantum approximate calculations are used in selected cases to compute approximate relaxed ab initio segregation energies. Results are discussed within the context of segregation models driven by strain and bond-breaking effects. We compare our results with other quantum approximate and ab initio theoretical work, and available experimental results.

  10. Nodal approximations of varying order by energy group for solving the diffusion equation

    SciTech Connect

    Broda, J.T.

    1992-02-01

    The neutron flux across the nuclear reactor core is of interest to reactor designers and others. The diffusion equation, an integro-differential equation in space and energy, is commonly used to determine the flux level. However, the solution of a simplified version of this equation when automated is very time consuming. Since the flux level changes with time, in general, this calculation must be made repeatedly. Therefore solution techniques that speed the calculation while maintaining accuracy are desirable. One factor that contributes to the solution time is the spatial flux shape approximation used. It is common practice to use the same order flux shape approximation in each energy group even though this method may not be the most efficient. The one-dimensional, two-energy group diffusion equation was solved, for the node average flux and core k-effective, using two sets of spatial shape approximations for each of three reactor types. A fourth-order approximation in both energy groups forms the first set of approximations used. The second set used combines a second-order approximation with a fourth-order approximation in energy group two. Comparison of the results from the two approximation sets shows that the use of a different order spatial flux shape approximation results in considerable loss in accuracy for the pressurized water reactor modeled. However, the loss in accuracy is small for the heavy water and graphite reactors modeled. The use of different order approximations in each energy group produces mixed results. Further investigation into the accuracy and computing time is required before any quantitative advantage of the use of the second-order approximation in energy group one and the fourth-order approximation in energy group two can be determined.

  11. Dynamic Analyses of Result Quality in Energy-Aware Approximate Programs

    NASA Astrophysics Data System (ADS)

    Ringenburg, Michael F.

    Energy efficiency is a key concern in the design of modern computer systems. One promising approach to energy-efficient computation, approximate computing, trades off output precision for energy efficiency. However, this tradeoff can have unexpected effects on computation quality. This thesis presents dynamic analysis tools to study, debug, and monitor the quality and energy efficiency of approximate computations. We propose three styles of tools: prototyping tools that allow developers to experiment with approximation in their applications, online tools that instrument code to determine the key sources of error, and online tools that monitor the quality of deployed applications in real time. Our prototyping tool is based on an extension to the functional language OCaml. We add approximation constructs to the language, an approximation simulator to the runtime, and profiling and auto-tuning tools for studying and experimenting with energy-quality tradeoffs. We also present two online debugging tools and three online monitoring tools. The first online tool identifies correlations between output quality and the total number of executions of, and errors in, individual approximate operations. The second tracks the number of approximate operations that flow into a particular value. Our online tools comprise three low-cost approaches to dynamic quality monitoring. They are designed to monitor quality in deployed applications without spending more energy than is saved by approximation. Online monitors can be used to perform real time adjustments to energy usage in order to meet specific quality goals. We present prototype implementations of all of these tools and describe their usage with several applications. Our prototyping, profiling, and autotuning tools allow us to experiment with approximation strategies and identify new strategies, our online tools succeed in providing new insights into the effects of approximation on output quality, and our monitors succeed in…

  12. Analytic energy-level densities of separable harmonic oscillators including approximate hindered rotor corrections

    NASA Astrophysics Data System (ADS)

    Döntgen, M.

    2016-09-01

    Energy-level densities are key for obtaining various chemical properties. In chemical kinetics, energy-level densities are used to predict thermochemistry and microscopic reaction rates. Here, an analytic energy-level density formulation is derived using inverse Laplace transformation of harmonic oscillator partition functions. Anharmonic contributions to the energy-level density are considered approximately using a literature model for the transition from harmonic to free motions. The present analytic energy-level density formulation for rigid rotor-harmonic oscillator systems is validated against the well-studied CO + ȮH system. The approximate hindered rotor energy-level density corrections are validated against the well-studied H2O2 system. The presented analytic energy-level density formulation gives a basis for developing novel numerical simulation schemes for chemical processes.

  13. A Hierarchy of Transport Approximations for High Energy Heavy (HZE) Ions

    NASA Technical Reports Server (NTRS)

    Wilson, John W.; Lamkin, Stanley L.; Hamidullah, Farhat; Ganapol, Barry D.; Townsend, Lawrence W.

    1989-01-01

    The transport of high energy heavy (HZE) ions through bulk materials is studied neglecting energy dependence of the nuclear cross sections. A three term perturbation expansion appears to be adequate for most practical applications for which penetration depths are less than 30 g per sq cm of material. The differential energy flux is found for monoenergetic beams and for realistic ion beam spectral distributions. An approximate formalism is given to estimate higher-order terms.

  14. Förster Resonance Energy Transfer imaging in vivo with approximated Radiative Transfer Equation

    PubMed Central

    Soloviev, Vadim Y.; McGinty, James; Stuckey, Daniel W.; Laine, Romain; Wylezinska-Arridge, Marzena; Wells, Dominic J.; Sardini, Alessandro; Hajnal, Joseph V.; French, Paul M.W.; Arridge, Simon R.

    2012-01-01

    We describe a new light transport model that we have applied to 3-D image reconstruction of in vivo fluorescence lifetime tomography data applied to read out Förster Resonance Energy Transfer in mice. The model is an approximation to the Radiative Transfer Equation and combines light diffusion and ray optics. This approximation is well adapted to wide-field time-gated intensity-based data acquisition. Reconstructed image data are presented and compared with results obtained by using the Telegraph Equation approximation. The new approach provides improved recovery of absorption and scattering parameters while returning similar values for the fluorescence parameters. PMID:22193187

  15. Interval Data Analysis with the Energy Charting and Metrics Tool (ECAM)

    SciTech Connect

    Taasevigen, Danny J.; Katipamula, Srinivas; Koran, William

    2011-07-07

    Analyzing whole building interval data is an inexpensive but effective way to identify and improve building operations, and ultimately save money. Utilizing the Energy Charting and Metrics Tool (ECAM) add-in for Microsoft Excel, building operators and managers can begin implementing changes to their Building Automation System (BAS) after trending the interval data. The two data components needed for full analyses are whole building electricity consumption (kW or kWh) and outdoor air temperature (OAT). Using these two pieces of information, a series of plots and charts can be created in ECAM to monitor the building's performance over time, gain knowledge of how the building is operating, and make adjustments to the BAS to improve efficiency and start saving money.
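
    A minimal stand-in for this kind of charting outside Excel, assuming a hypothetical CSV with timestamp, whole-building demand (kW), and OAT columns:

    ```python
    # Scatter whole-building electric demand against outdoor air
    # temperature, split by occupancy hours. File and column names are
    # hypothetical.
    import pandas as pd
    import matplotlib.pyplot as plt

    df = pd.read_csv("building_interval_data.csv",
                     parse_dates=["timestamp"])  # columns: timestamp, kW, OAT

    occupied = df["timestamp"].dt.hour.between(8, 17)
    fig, ax = plt.subplots()
    ax.scatter(df.loc[occupied, "OAT"], df.loc[occupied, "kW"],
               s=4, label="occupied hours")
    ax.scatter(df.loc[~occupied, "OAT"], df.loc[~occupied, "kW"],
               s=4, label="unoccupied hours")
    ax.set_xlabel("Outdoor air temperature (°F)")
    ax.set_ylabel("Whole-building demand (kW)")
    ax.legend()
    plt.show()
    ```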

  16. Ermod: fast and versatile computation software for solvation free energy with approximate theory of solutions.

    PubMed

    Sakuraba, Shun; Matubayasi, Nobuyuki

    2014-08-01

    ERmod is a software package to efficiently and approximately compute the solvation free energy using the method of energy representation. Molecular simulation is to be conducted at two condensed-phase systems of the solution of interest and the reference solvent with test-particle insertion of the solute. The subprogram ermod in ERmod then provides a set of energy distribution functions from the simulation trajectories, and another subprogram slvfe determines the solvation free energy from the distribution functions through an approximate functional. This article describes the design and implementation of ERmod, and illustrates its performance in solvent water for two organic solutes and two protein solutes. Actually, the free-energy computation with ERmod is not restricted to the solvation in homogeneous medium such as fluid and polymer and can treat the binding into weakly ordered system with nano-inhomogeneity such as micelle and lipid membrane. ERmod is available on web at http://sourceforge.net/projects/ermod.

  17. Numerical integration for ab initio many-electron self energy calculations within the GW approximation

    NASA Astrophysics Data System (ADS)

    Liu, Fang; Lin, Lin; Vigil-Fowler, Derek; Lischner, Johannes; Kemper, Alexander F.; Sharifzadeh, Sahar; da Jornada, Felipe H.; Deslippe, Jack; Yang, Chao; Neaton, Jeffrey B.; Louie, Steven G.

    2015-04-01

    We present a numerical integration scheme for evaluating the convolution of a Green's function with a screened Coulomb potential on the real axis in the GW approximation of the self energy. Our scheme takes the zero broadening limit in Green's function first, replaces the numerator of the integrand with a piecewise polynomial approximation, and performs principal value integration on subintervals analytically. We give the error bound of our numerical integration scheme and show by numerical examples that it is more reliable and accurate than the standard quadrature rules such as the composite trapezoidal rule. We also discuss the benefit of using different self energy expressions to perform the numerical convolution at different frequencies.
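
    For intuition, a toy version of the core device: the textbook singularity-subtraction evaluation of a principal-value integral PV ∫ f(x)/(x − x0) dx. The paper's scheme is more refined (piecewise-polynomial numerator, analytic integration per subinterval), so this is only a sketch of the idea.

    ```python
    # PV of integral over [a, b] of f(x)/(x - x0), a < x0 < b, by
    # subtracting the singularity and integrating the 1/(x - x0) part
    # analytically: PV term = f(x0) * log((b - x0)/(x0 - a)).
    import numpy as np

    def pv_integral(f, a, b, x0, n=20001):
        x = np.linspace(a, b, n)
        g = np.empty_like(x)
        mask = np.abs(x - x0) > 1e-12
        g[mask] = (f(x[mask]) - f(x0)) / (x[mask] - x0)  # regularised part
        # Limit (f(x) - f(x0))/(x - x0) -> f'(x0) at the singular node.
        h = 1e-6
        g[~mask] = (f(x0 + h) - f(x0 - h)) / (2 * h)
        smooth = float(np.sum((g[:-1] + g[1:]) * np.diff(x)) / 2.0)
        log_term = f(x0) * np.log((b - x0) / (x0 - a))
        return smooth + log_term

    # PV over [-1, 2] of exp(x)/(x - 0.5).
    print(pv_integral(np.exp, -1.0, 2.0, 0.5))
    ```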

  18. Approximate stress-energy tensor of the massless spin-1/2 field in Schwarzschild spacetime

    SciTech Connect

    Matyjasek, Jerzy

    2005-01-15

    The approximate stress-energy tensor of the conformally invariant massless spin-1/2 field in the Hartle-Hawking state in the Schwarzschild spacetime is constructed. It is shown that by solving the conservation equation in conformal space and utilizing the regularity conditions in a physical metric one obtains the stress-energy tensor that is in good agreement with the numerical calculations. The backreaction of the quantized field upon the spacetime metric is briefly discussed.

  19. Random phase approximation correlation energy using a compact representation for linear response functions: application to solids

    NASA Astrophysics Data System (ADS)

    Kaoui, Fawzi; Rocca, Dario

    2016-01-01

    A new approach was recently presented to compute correlation energies within the random phase approximation using Lanczos chains and an optimal basis set (Rocca 2014 J. Chem. Phys. 140 18A501). This novel method avoids the explicit calculation of conduction states and represents linear response functions on a compact auxiliary basis set obtained from the diagonalization of an approximate dielectric matrix that contains only the kinetic energy contribution. Here, we extend this formalism, originally implemented for molecular systems, to treat periodic solids. In particular, the approximate dielectric matrix used to build the auxiliary basis set is generalized to avoid unphysical negative gaps, that make the model inefficient. The numerical convergence of the method is discussed and the accuracy is demonstrated considering a set including three covalently bonded (C, Si, and SiC) and three weakly bonded (Ne, Ar, and Kr) solids.

  20. Infinite order sudden approximation for rotational energy transfer in gaseous mixtures

    NASA Technical Reports Server (NTRS)

    Goldflam, R.; Kouri, D. J.; Green, S.

    1977-01-01

    Rotational energy transfer in gaseous mixtures is analyzed within the framework of the infinite order sudden (IOS) approximation, and a new derivation of the IOS from the coupled states Lippmann-Schwinger equation is presented. This approach shows the relation between the IOS and coupled state T matrices. The general IOS effective cross section can be factored into a finite sum of 'spectroscopic coefficients' and 'dynamical coefficients'. The evaluation of these coefficients is considered. Pressure broadening for the systems HD-He, HCl-He, CO-He, HCl-Ar, and CO2-Ar is calculated, and results based on the IOS approximation are compared with coupled state results. The IOS approximation is found to be very accurate whenever the rotor spacings are small compared to the kinetic energy, provided closed channels do not play too great a role.

  1. Two-loop Bhabha scattering at high energy beyond leading power approximation

    NASA Astrophysics Data System (ADS)

    Penin, Alexander A.; Zerf, Nikolai

    2016-09-01

    We evaluate the two-loop O(m_e^2/s) contribution to the wide-angle high-energy electron-positron scattering in the double-logarithmic approximation. The origin and the general structure of the power-suppressed double logarithmic corrections are discussed in detail.

  2. Dielectric Matrix Formulation of Correlation Energies in the Random Phase Approximation: Inclusion of Exchange Effects.

    PubMed

    Mussard, Bastien; Rocca, Dario; Jansen, Georg; Ángyán, János G

    2016-05-10

    Starting from the general expression for the ground state correlation energy in the adiabatic-connection fluctuation-dissipation theorem (ACFDT) framework, it is shown that the dielectric matrix formulation, which is usually applied to calculate the direct random phase approximation (dRPA) correlation energy, can be used for alternative RPA expressions including exchange effects. Within this framework, the ACFDT analog of the second order screened exchange (SOSEX) approximation leads to a logarithmic formula for the correlation energy similar to the direct RPA expression. Alternatively, the contribution of the exchange can be included in the kernel used to evaluate the response functions. In this case, the use of an approximate kernel is crucial to simplify the formalism and to obtain a correlation energy in logarithmic form. Technical details of the implementation of these methods are discussed, and it is shown that one can take advantage of density fitting or Cholesky decomposition techniques to improve the computational efficiency; a discussion on the numerical quadrature made on the frequency variable is also provided. A series of test calculations on atomic correlation energies and molecular reaction energies shows that exchange effects are instrumental for improvement over direct RPA results. PMID:26986444

  3. Corrections to the eikonal approximation for nuclear scattering at medium energies

    NASA Astrophysics Data System (ADS)

    Buuck, Micah; Miller, Gerald A.

    2014-08-01

    The upcoming Facility for Rare Isotope Beams (FRIB) at the National Superconducting Cyclotron Laboratory (NSCL) at Michigan State University has reemphasized the importance of accurate modeling of low energy nucleus-nucleus scattering. Such calculations have been simplified by using the eikonal approximation. As a high energy approximation, however, its accuracy suffers for the medium energy beams that are of current experimental interest. A prescription developed by Wallace [Phys. Rev. Lett. 27, 622 (1971), 10.1103/PhysRevLett.27.622 and Ann. Phys. (NY) 78, 190 (1973), 10.1016/0003-4916(73)90008-0] that obtains the scattering propagator as an expansion around the eikonal propagator (Glauber approach) has the potential to extend the range of validity of the approximation to lower energies. Here we examine the properties of this expansion, and calculate the first-, second-, and third-order corrections for the scattering of a spinless particle off a 40Ca nucleus, and for nuclear breakup reactions involving 11Be. We find that including these corrections extends the lower bound of the range of validity down to energies as low as about 45 MeV. At that energy the corrections provide as much as a 15% correction to certain processes.

  4. Exchange-correlation energy from pairing matrix fluctuation and the particle-particle random phase approximation.

    PubMed

    van Aggelen, Helen; Yang, Yang; Yang, Weitao

    2014-05-14

    Despite their unmatched success for many applications, commonly used local, semi-local, and hybrid density functionals still face challenges when it comes to describing long-range interactions, static correlation, and electron delocalization. Density functionals of both the occupied and virtual orbitals are able to address these problems. The particle-hole (ph-) Random Phase Approximation (RPA), a functional of occupied and virtual orbitals, has recently seen a revival within the density functional theory community. Following up on an idea introduced in our recent communication [H. van Aggelen, Y. Yang, and W. Yang, Phys. Rev. A 88, 030501 (2013)], we formulate more general adiabatic connections for the correlation energy in terms of pairing matrix fluctuations described by the particle-particle (pp-) propagator. With numerical examples of the pp-RPA, the lowest-order approximation to the pp-propagator, we illustrate the potential of density functional approximations based on pairing matrix fluctuations. The pp-RPA is size-extensive, self-interaction free, fully anti-symmetric, describes the strong static correlation limit in H2, and eliminates delocalization errors in H2(+) and other single-bond systems. It gives surprisingly good non-bonded interaction energies--competitive with the ph-RPA--with the correct R^(-6) asymptotic decay as a function of the separation R, which we argue is mainly attributable to its correct second-order energy term. While the pp-RPA tends to underestimate absolute correlation energies, it gives good relative energies: much better atomization energies than the ph-RPA, as it has no tendency to underbind, and reaction energies of similar quality. The adiabatic connection in terms of pairing matrix fluctuation paves the way for promising new density functional approximations.

  5. Optimising sprint interval exercise to maximise energy expenditure and enjoyment in overweight boys.

    PubMed

    Crisp, Nicole A; Fournier, Paul A; Licari, Melissa K; Braham, Rebecca; Guelfi, Kym J

    2012-12-01

    The aim of this study was to identify the sprint frequency that when supplemented to continuous exercise at the intensity that maximises fat oxidation (Fat(max)), optimises energy expenditure, acute postexercise energy intake and enjoyment. Eleven overweight boys completed 30 min of either continuous cycling at Fat(max) (MOD), or sprint interval exercise that consisted of continuous cycling at Fat(max) interspersed with 4-s maximal sprints every 2 min (SI(120)), every 1 min (SI(60)), or every 30 s (SI(30)). Energy expenditure was assessed during exercise, after which participants completed a modified Physical Activity Enjoyment Scale (PACES) followed by a buffet-type breakfast to measure acute postexercise energy intake. Energy expenditure increased with increasing sprint frequency (p < 0.001), but the difference between SI(60) and SI(30) did not reach significance (p = 0.076), likely as a result of decreased sprint quality as indicated by a significant decline in peak power output from SI(60) to SI(30) (p = 0.034). Postexercise energy intake was similar for MOD, SI(120), and SI(30) (p > 0.05), but was significantly less for SI(60) compared with MOD (p = 0.025). PACES was similar for MOD, SI(120), and SI(60) (p > 0.05), but was less for SI(30) compared with MOD (p = 0.038), SI(120) (p = 0.009), and SI(60) (p = 0.052). In conclusion, SI(60) appears optimal for overweight boys given that it maximises energy expenditure (i.e., there was no additional increase in expenditure with a further increase in sprint frequency) without prompting increased energy intake. This, coupled with the fact that enjoyment was not compromised, may have important implications for increased adherence and long-term energy balance.

  6. Hamiltonian chaos acts like a finite energy reservoir: accuracy of the Fokker-Planck approximation.

    PubMed

    Riegert, Anja; Baba, Nilüfer; Gelfert, Katrin; Just, Wolfram; Kantz, Holger

    2005-02-11

    The Hamiltonian dynamics of slow variables coupled to fast degrees of freedom is modeled by an effective stochastic differential equation. Formal perturbation expansions, involving a Markov approximation, yield a Fokker-Planck equation in the slow subspace which respects conservation of energy. A detailed numerical and analytical analysis of suitable model systems demonstrates the feasibility of obtaining the system specific drift and diffusion terms and the accuracy of the stochastic approximation on all time scales. Non-Markovian and non-Gaussian features of the fast variables are negligible.

  7. Nucleation theory - Is replacement free energy needed? [error analysis of capillary approximation]

    NASA Technical Reports Server (NTRS)

    Doremus, R. H.

    1982-01-01

    It has been suggested that the classical theory of nucleation of liquid from its vapor as developed by Volmer and Weber (1926) needs modification with a factor referred to as the replacement free energy and that the capillary approximation underlying the classical theory is in error. Here, the classical nucleation equation is derived from fluctuation theory, Gibbs' result for the reversible work to form a critical nucleus, and the rate of collision of gas molecules with a surface. The capillary approximation is not used in the derivation. The chemical potential of small drops is then considered, and it is shown that the capillary approximation can be derived from thermodynamic equations. The results show that no corrections to Volmer's equation are needed.

  8. Quantitative molecular orbital energies within a G0W0 approximation

    NASA Astrophysics Data System (ADS)

    Sharifzadeh, S.; Tamblyn, I.; Doak, P.; Darancet, P. T.; Neaton, J. B.

    2012-09-01

    Using many-body perturbation theory within a G0W0 approximation, with a plane wave basis set and using a starting point based on density functional theory within the generalized gradient approximation, we explore routes for computing the ionization potential (IP), electron affinity (EA), and fundamental gap of three gas-phase molecules — benzene, thiophene, and (1,4) diamino-benzene — and compare with experiments. We examine the dependence of the IP and fundamental gap on the number of unoccupied states used to represent the dielectric function and the self energy, as well as the dielectric function plane-wave cutoff. We find that with an effective completion strategy for approximating the unoccupied subspace, and a well converged dielectric function kinetic energy cutoff, the computed IPs and EAs are in excellent quantitative agreement with available experiment (within 0.2 eV), indicating that a one-shot G0W0 approach can be very accurate for calculating addition/removal energies of small organic molecules.

  9. Physical interpretation of astrophysical factor S(E) for stellar energies from the WKB approximation.

    NASA Astrophysics Data System (ADS)

    Beaumevieille, H.; Bouchemha, A.; Boudouma, Y.; Boughrara, A.; Ouichaoui, S.; Tsan, U. C.

    1999-02-01

    For "non-resonant" reactions, a physical interpretation of the astrophysical factor S(E) in terms of the Coulomb barrier penetration factor and an intrinsic nuclear factor is proposed. Using the WKB (Wentzel, Kramers, Brillouin) approximation to evaluate the penetrabilities at stellar energies, the authors point out a drastic difference between the absolute values if S(0) according to wether s or p waves dominate the reaction. The variation with energy of S(E) to first order can also sign the nature of the wave. An application for the 6Li(d,α)4He and 7Li(p,α)4He reactions is presented.

  10. A new heuristic method for approximating the number of local minima in partial RNA energy landscapes.

    PubMed

    Albrecht, Andreas A; Day, Luke; Abdelhadi Ep Souki, Ouala; Steinhöfel, Kathleen

    2016-02-01

    The analysis of energy landscapes plays an important role in mathematical modelling, simulation and optimisation. Among the main features of interest are the number and distribution of local minima within the energy landscape. Granier and Kallel proposed in 2002 a new sampling procedure for estimating the number of local minima. In the present paper, we focus on improved heuristic implementations of the general framework devised by Granier and Kallel with regard to run-time behaviour and accuracy of predictions. The new heuristic method is demonstrated for the case of partial energy landscapes induced by RNA secondary structures. While the computation of minimum free energy RNA secondary structures has been studied for a long time, the analysis of folding landscapes has gained momentum over the past years in the context of co-transcriptional folding and deeper insights into cell processes. The new approach has been applied to ten RNA instances of length between 99 nt and 504 nt and their respective partial energy landscapes defined by secondary structures within an energy offset ΔE above the minimum free energy conformation. The number of local minima within the partial energy landscapes ranges from 1440 to 3441. For the best approximations, our heuristic method deviates on average by less than 3.0% from the true number of local minima.
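
    The Granier-Kallel estimator itself is not reproduced here; as a plainly substituted stand-in for the same sampling idea, the sketch below runs greedy descents from random starts on a toy landscape and applies the Chao1 species-richness estimator to the basin hit counts.

    ```python
    # Estimate the number of local minima from repeated descents:
    # distinct minima observed plus a Chao1 correction for unseen basins.
    import numpy as np
    from collections import Counter

    rng = np.random.default_rng(2)
    E = rng.normal(size=500)                   # toy 1-D energy landscape

    def descend(i):
        """Greedy descent to the nearest local-minimum index."""
        while True:
            nbrs = [j for j in (i - 1, i + 1) if 0 <= j < E.size]
            best = min(nbrs, key=lambda j: E[j])
            if E[best] >= E[i]:
                return i
            i = best

    hits = Counter(descend(rng.integers(E.size)) for _ in range(300))
    f = Counter(hits.values())                 # abundance frequencies
    d, f1, f2 = len(hits), f.get(1, 0), f.get(2, 0)
    chao1 = d + f1 * f1 / (2 * f2) if f2 else d + f1 * (f1 - 1) / 2
    true = sum(1 for i in range(E.size)
               if all(E[i] <= E[j]
                      for j in (i - 1, i + 1) if 0 <= j < E.size))
    print(f"observed {d}, Chao1 estimate {chao1:.0f}, true {true}")
    ```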

  11. A Different View of Solar Spectral Irradiance Variations: Modeling Total Energy over Six-Month Intervals

    NASA Astrophysics Data System (ADS)

    Woods, Thomas N.; Snow, Martin; Harder, Jerald; Chapman, Gary; Cookson, Angela

    2015-10-01

    A different approach to studying solar spectral irradiance (SSI) variations, without the need for long-term (multi-year) instrument degradation corrections, is examining the total energy of the irradiance variation during 6-month periods. This duration is selected because a solar active region typically appears suddenly and then takes 5 to 7 months to decay and disperse back into the quiet-Sun network. The solar outburst energy, which is defined as the irradiance integrated over the 6-month period and thus includes the energy from all phases of active region evolution, could be considered the primary cause for the irradiance variations. Because solar cycle variation is the consequence of multiple active region outbursts, understanding the energy spectral variation may provide a reasonable estimate of the variations for the 11-year solar activity cycle. The moderate-term (6-month) variations from the Solar Radiation and Climate Experiment (SORCE) instruments can be decomposed into positive (in-phase with solar cycle) and negative (out-of-phase) contributions by modeling the variations using the San Fernando Observatory (SFO) facular excess and sunspot deficit proxies, respectively. These excess and deficit variations are fit over 6-month intervals every 2 months over the mission, and these fitted variations are then integrated over time for the 6-month energy. The dominant component indicates which wavelengths are in-phase and which are out-of-phase with solar activity. The results from this study indicate out-of-phase variations for the 1400 - 1600 nm range, with all other wavelengths having in-phase variations.

  12. Impact of nonlocal correlations over different energy scales: A dynamical vertex approximation study

    NASA Astrophysics Data System (ADS)

    Rohringer, G.; Toschi, A.

    2016-09-01

    In this paper, we investigate how nonlocal correlations affect, selectively, the physics of correlated electrons over different energy scales, from the Fermi level to the band edges. This goal is achieved by applying a diagrammatic extension of dynamical mean field theory (DMFT), the dynamical vertex approximation (DΓA), to study several spectral and thermodynamic properties of the unfrustrated Hubbard model in two and three dimensions. Specifically, we focus first on the low-energy regime by computing the electronic scattering rate and the quasiparticle mass renormalization for decreasing temperatures at a fixed interaction strength. This way, we obtain a precise characterization of the several steps through which the Fermi-liquid physics is progressively destroyed by nonlocal correlations. Our study is then extended to a broader energy range, by analyzing the temperature behavior of the kinetic and potential energy, as well as of the corresponding energy distribution functions. Our findings allow us to identify a smooth but definite evolution of the nature of nonlocal correlations by increasing interaction: They either increase or decrease the kinetic energy w.r.t. DMFT depending on the interaction strength being weak or strong, respectively. This reflects the corresponding evolution of the ground state from a nesting-driven (Slater) to a superexchange-driven (Heisenberg) antiferromagnet (AF), whose fingerprints are, thus, recognizable in the spatial correlations of the paramagnetic phase. Finally, a critical analysis of our numerical results of the potential energy at the largest interaction allows us to identify possible procedures to improve the ladder-based algorithms adopted in the dynamical vertex approximation.

  13. A novel analytical approximation technique for highly nonlinear oscillators based on the energy balance method

    NASA Astrophysics Data System (ADS)

    Hosen, Md. Alal; Chowdhury, M. S. H.; Ali, Mohammad Yeakub; Ismail, Ahmad Faris

    In the present paper, a novel analytical approximation technique based on the energy balance method (EBM) is proposed to obtain approximate periodic solutions for generalized highly nonlinear oscillators. The expressions for the natural frequency-amplitude relationship are obtained in a novel analytical way. The accuracy of the proposed method is investigated on three benchmark oscillatory problems, namely, the simple relativistic oscillator, the stretched elastic wire oscillator (with a mass attached to its midpoint) and the Duffing-relativistic oscillator. For an initial oscillation amplitude A0 = 100, the maximal relative errors of natural frequency found in the three oscillators are 2.1637%, 0.0001% and 1.201%, respectively, which are much lower than the errors found using the existing methods. Remarkably, the approximate natural frequency remains highly accurate over the whole range of large oscillation amplitudes when compared with the exact values. The very simple solution procedure and the high accuracy found in the three benchmark problems reveal the novelty, reliability and wider applicability of the proposed analytical approximation technique.
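    As a generic illustration of the energy balance method (not necessarily the paper's specific variant), consider the Duffing oscillator u'' + u + εu³ = 0. The trial solution u = A cos(ωt), with the Hamiltonian residual collocated at ωt = π/4, gives ω = sqrt(1 + 3εA²/4), which can be checked against the exact frequency obtained from the energy integral:

```python
import numpy as np
from scipy.integrate import quad

def ebm_frequency(A, eps):
    """Energy-balance estimate for u'' + u + eps*u^3 = 0: trial u = A*cos(wt),
    Hamiltonian residual collocated at wt = pi/4 gives w^2 = 1 + 0.75*eps*A^2."""
    return np.sqrt(1.0 + 0.75 * eps * A**2)

def exact_frequency(A, eps):
    """Exact frequency from the energy integral; the substitution
    u = A*sin(theta) removes the endpoint singularity."""
    f = lambda th: 1.0 / np.sqrt(1.0 + 0.5 * eps * A**2 * (1.0 + np.sin(th)**2))
    quarter, _ = quad(f, 0.0, np.pi / 2.0)   # quarter period / A-independent factor
    return np.pi / (2.0 * quarter)

for A in (1.0, 10.0, 100.0):
    w_ebm, w_ex = ebm_frequency(A, 1.0), exact_frequency(A, 1.0)
    print(f"A={A:6.1f}  EBM {w_ebm:9.3f}  exact {w_ex:9.3f}  "
          f"rel. error {100.0 * (w_ebm - w_ex) / w_ex:+.2f}%")
```

    For large amplitudes the relative error of this simple collocation settles near 2%, the same order as the errors quoted in the abstract.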

  14. Low-energy extensions of the eikonal approximation to heavy-ion scattering

    SciTech Connect

    Aguiar, C.E.; Aguiar, C.E.; Zardi, F.; Vitturi, A.

    1997-09-01

    We discuss different schemes devised to extend the eikonal approximation to the regime of low bombarding energies (below 50 MeV per nucleon) in heavy-ion collisions. From one side we consider the first- and second-order corrections derived from Wallace's expansion. As an alternative approach we examine the procedure of accounting for the distortion of the eikonal straight-line trajectory by shifting the impact parameter to the corresponding classical turning point. The two methods are tested for different combinations of colliding systems and bombarding energies, by comparing the angular distributions they provide with the exact solution of the scattering problem. We find that the best results are obtained with the shifted trajectories, the Wallace expansion showing a slow convergence at low energies, in particular for heavy systems characterized by a strong Coulomb field.

  15. Effects of acute sprint interval cycling and energy replacement on postprandial lipemia.

    PubMed

    Freese, Eric C; Levine, Ari S; Chapman, Donald P; Hausman, Dorothy B; Cureton, Kirk J

    2011-12-01

    High postprandial blood triglyceride (TG) levels increase cardiovascular disease risk. Exercise interventions may be effective in reducing postprandial blood TG. The purpose of this study was to determine the effects of sprint interval cycling (SIC), with and without replacement of the energy deficit, on postprandial lipemia. In a repeated-measures crossover design, six men and six women participated in three trials, each taking place over 2 days. On the evening of the first day of each trial, the participants either did SIC without replacing the energy deficit (Ex-Def), did SIC and replaced the energy deficit (Ex-Bal), or did not exercise (control). SIC was performed on a cycle ergometer and involved four 30-s all-out sprints with 4-min active recovery. In the morning of day 2, responses to a high-fat meal were measured. Venous blood samples were collected in the fasted state and at 0, 30, 60, 120, and 180 min postprandial. There was a trend toward a reduction with treatment in fasting TG (P = 0.068), but no significant treatment effect for fasting insulin, glucose, nonesterified fatty acids, or beta-hydroxybutyrate (P > 0.05). The postprandial area under the curve (mmol·l⁻¹ × 3 h) TG response was significantly lower in Ex-Def (21%, P = 0.006) and Ex-Bal (10%, P = 0.044) than in control, and significantly lower in Ex-Def (12%, P = 0.032) than in Ex-Bal. There was no treatment effect (P > 0.05) observed for area under the curve responses of insulin, glucose, nonesterified fatty acids, or beta-hydroxybutyrate. SIC reduces postprandial lipemia, but the energy deficit alone does not fully explain the decrease observed.
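    For reference, postprandial area-under-the-curve measures from sparse sampling times like those above are conventionally computed with the trapezoidal rule; a small sketch with hypothetical TG values:

```python
import numpy as np

# Hypothetical postprandial TG samples (mmol/l) at the study's time points.
t = np.array([0, 30, 60, 120, 180]) / 60.0   # hours
tg = np.array([1.2, 1.6, 2.1, 2.4, 1.9])     # mmol/l

total_auc = np.trapz(tg, t)                  # mmol.l^-1.h over the 3-h window
incremental_auc = np.trapz(tg - tg[0], t)    # area above the fasting baseline
print(f"total AUC {total_auc:.2f}, incremental AUC {incremental_auc:.2f}")
```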

  16. HOMO band dispersion of crystalline rubrene: Effects of self-energy corrections within the GW approximation

    NASA Astrophysics Data System (ADS)

    Yanagisawa, Susumu; Morikawa, Yoshitada; Schindlmayr, Arno

    2013-09-01

    We investigate the band dispersion and relevant electronic properties of rubrene single crystals within the GW approximation. Due to the self-energy correction, the dispersion of the highest occupied molecular orbital (HOMO) band increases by 0.10 eV compared to the dispersion of the Kohn-Sham eigenvalues within the generalized gradient approximation, and the effective hole mass consequently decreases. The resulting value of 0.90 times the electron rest mass along the Γ-Y direction in the Brillouin zone is closer to experimental measurements than that obtained from density-functional theory. The enhanced bandwidth is explained in terms of the intermolecular hybridization of the HOMO(Y) wave function along the stacking direction of the molecules. Overall, our results support the bandlike interpretation of charge-carrier transport in rubrene.

  17. Ideal dipole approximation fails to predict electronic coupling and energy transfer between semiconducting single-wall carbon nanotubes.

    PubMed

    Wong, Cathy Y; Curutchet, Carles; Tretiak, Sergei; Scholes, Gregory D

    2009-02-28

    The electronic coupling values and approximate energy transfer rates between semiconductor single-wall carbon nanotubes are calculated using two different approximations, the point dipole approximation and the distributed transition monopole approximation, and the results are compared. It is shown that the point dipole approximation fails dramatically at tube separations typically found in nanotube bundles (approximately 12-16 Å) and that the disagreement persists at large tube separations (>100 Å, over ten nanotube diameters). When used in Förster resonance energy transfer theory, the coupling between two point transition dipoles is found to overestimate energy transfer rates. It is concluded that the point dipole approximation is inappropriate for use with elongated systems such as carbon nanotubes and that methods which can account for the shape of the particle are more suitable. PMID:19256589
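    A toy calculation reproduces the qualitative failure described above. Representing each tube's transition density by a line of transition monopoles (here a particle-in-a-box 1→2 density, purely illustrative) and comparing the summed Coulomb coupling against the point-dipole formula shows the point dipole approximation overshooting once the separation becomes comparable to the tube length:

```python
import numpy as np

def transition_charges(L, n=200):
    """Toy transition density for a 1-D 'tube': particle-in-a-box 1->2
    transition, rho(z) ~ sin(pi z/L) sin(2 pi z/L), discretized into
    point transition monopoles (charge = density * segment length)."""
    z = (np.arange(n) + 0.5) * L / n
    q = np.sin(np.pi * z / L) * np.sin(2 * np.pi * z / L) * (L / n)
    return z, q

def coupling_monopole(R, L):
    """Distributed-monopole Coulomb coupling between two parallel tubes
    separated by R (units with 1/(4 pi eps0) = 1)."""
    z, q = transition_charges(L)
    dz = z[:, None] - z[None, :]
    return np.sum(q[:, None] * q[None, :] / np.hypot(R, dz))

def coupling_pda(R, L):
    """Point-dipole approximation: parallel side-by-side dipoles, kappa = 1."""
    z, q = transition_charges(L)
    mu = np.sum(q * z)          # transition dipole of one tube
    return mu**2 / R**3

L_tube = 100.0                  # tube length (arbitrary units)
for R in (15.0, 50.0, 150.0, 500.0):
    v_mono, v_pda = coupling_monopole(R, L_tube), coupling_pda(R, L_tube)
    print(f"R={R:6.1f}  monopole {v_mono:.3e}  PDA {v_pda:.3e}  "
          f"ratio {v_pda / v_mono:.2f}")
```

    Since the Förster rate scales as |V|², an overestimated coupling directly inflates the predicted transfer rate, consistent with the abstract's conclusion.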

  18. An evaluation of energy-independent heavy ion transport coefficient approximations

    NASA Technical Reports Server (NTRS)

    Townsend, L. W.; Wilson, J. W.

    1988-01-01

    Utilizing a one-dimensional transport theory for heavy ion propagation, evaluations of typical energy-independent transport coefficient approximations are made by comparing theoretical depth-dose predictions to published experimental values for incident 670 MeV/nucleon Ne-20 beams in water. Results are presented for cases where the input nuclear absorption cross sections, or input fragmentation parameters, or both, are fixed. The lack of fragment charge and mass conservation resulting from the use of Silberberg-Tsao fragmentation parameters continues to be the main source of disagreement between theory and experiment.

  19. Proposal for determining the energy content of gravitational waves by using approximate symmetries of differential equations

    SciTech Connect

    Hussain, Ibrar; Qadir, Asghar; Mahomed, F. M.

    2009-06-15

    Since gravitational wave spacetimes are time-varying vacuum solutions of Einstein's field equations, there is no unambiguous means to define their energy content. However, Weber and Wheeler had demonstrated that they do impart energy to test particles. There have been various proposals to define the energy content, but they have not met with great success. Here we propose a definition using 'slightly broken' Noether symmetries. We check whether this definition is physically acceptable. The procedure adopted is to appeal to 'approximate symmetries' as defined in Lie analysis and use them in the limit of the exact symmetry holding. A problem is noted with the use of the proposal for plane-fronted gravitational waves. To attain a better understanding of the implications of this proposal we also use an artificially constructed time-varying nonvacuum metric and evaluate its Weyl and stress-energy tensors so as to obtain the gravitational and matter components separately and compare them with the energy content obtained by our proposal. The procedure is also used for cylindrical gravitational wave solutions. The usefulness of the definition is demonstrated by the fact that it leads to a result on whether gravitational waves suffer self-damping.

  20. Low-energy parameters of neutron-neutron interaction in the effective-range approximation

    SciTech Connect

    Babenko, V. A.; Petrov, N. M.

    2013-06-15

    The effect of the mass difference between the charged and neutral pions on the low-energy parameters of nucleon-nucleon interaction in the ¹S₀ state is studied in the effective-range approximation. On the basis of experimental values of the singlet parameters of neutron-proton scattering and the experimental value of the virtual-state energy for the neutron-neutron system in the ¹S₀ state, the following values were obtained for the neutron-neutron scattering length and effective range: a_nn = -16.59(117) fm and r_nn = 2.83(11) fm. The calculated values agree well with present-day experimental results.
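    The quoted parameters fix the position of the nn virtual state. A short sketch (constants approximate) locates the S-matrix pole of the effective-range expansion at k = iκ and converts it to an energy:

```python
import numpy as np

hbar2_over_m = 41.47   # MeV·fm^2, approximately hbar^2 / m_n

a_nn = -16.59   # fm, nn scattering length quoted above
r_nn = 2.83     # fm, nn effective range quoted above

# Effective-range expansion: k*cot(delta) = -1/a + (r/2) k^2. An S-matrix
# pole at k = i*kappa requires k*cot(delta) = -kappa, i.e.
#   (r/2) kappa^2 - kappa + 1/a = 0.
roots = np.roots([r_nn / 2.0, -1.0, 1.0 / a_nn])
kappa = float(np.real(roots[np.argmin(np.abs(roots))]))  # near-threshold root

# kappa < 0 marks a virtual (antibound) state; with reduced mass m_n/2 the
# pole energy is hbar^2 kappa^2 / (2 mu) = hbar2_over_m * kappa^2.
E_virtual = hbar2_over_m * kappa**2
print(f"kappa = {kappa:.4f} fm^-1  ->  |E| ~ {1e3 * E_virtual:.0f} keV")
```

    With the quoted a_nn and r_nn this places the virtual state roughly 0.13 MeV above threshold, the familiar near-threshold antibound state of the two-neutron system.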

  1. Lattice energies of molecular solids from the random phase approximation with singles corrections.

    PubMed

    Klimeš, Jiří

    2016-09-01

    We use the random phase approximation (RPA) method with the singles correlation energy contributions to calculate lattice energies of ten molecular solids. While RPA gives too weak binding, underestimating the reference data by 13.7% on average, much improved results are obtained when the singles are included at the GW singles excitations (GWSE) level, with average absolute difference to the reference data of only 3.7%. Consistently with previous results, we find a very good agreement with the reference data for hydrogen bonded systems, while the binding is too weak for systems where dispersion forces dominate. In fact, the overall accuracy of the RPA+GWSE method is similar to an estimated accuracy of the reference data. PMID:27609003

  2. The frozen orbital approximation for calculating ionization energies with application to propane

    NASA Astrophysics Data System (ADS)

    Müller, Wolfgang; Nager, Christoph; Rosmus, Pavel

    1980-09-01

    In the frozen orbital approximation (FOA), the influence of reorganization on correlation contributions to ionization energies is neglected. It is particularly useful in calculations for large molecules because of the advantage that only one integral transformation is required for the calculation of all ionic states. In connection with the concept of independent orbital correlation contributions, the dimensions of the CI matrices can be drastically reduced. The method is applied to the calculation of the valence ionization energies of propane, and compared to more rigorous ab initio results and a recent calculation in which inner valence shell contributions to electron correlation are neglected. The ordering of the first three ionizations in the photoelectron spectrum of propane, which has not been definitively assigned, is shown to be 2B1(2b1), 2A1(6a1), and 2B2(4b2), in agreement with Koopmans' theorem.

  3. Lattice energies of molecular solids from the random phase approximation with singles corrections

    NASA Astrophysics Data System (ADS)

    Klimeš, Jiří

    2016-09-01

    We use the random phase approximation (RPA) method with the singles correlation energy contributions to calculate lattice energies of ten molecular solids. While RPA gives too weak binding, underestimating the reference data by 13.7% on average, much improved results are obtained when the singles are included at the GW singles excitations (GWSE) level, with average absolute difference to the reference data of only 3.7%. Consistently with previous results, we find a very good agreement with the reference data for hydrogen bonded systems, while the binding is too weak for systems where dispersion forces dominate. In fact, the overall accuracy of the RPA+GWSE method is similar to an estimated accuracy of the reference data.

  4. Resonant Interaction, Approximate Symmetry, and Electromagnetic Interaction (EMI) in Low Energy Nuclear Reactions (LENR)

    NASA Astrophysics Data System (ADS)

    Chubb, Scott

    2007-03-01

    Only recently (talk by P.A. Mosier-Boss et al., in this session) has it become possible to trigger high energy particle emission and Excess Heat, on demand, in LENR involving PdD. Also, most nuclear physicists are bothered by the fact that the dominant reaction appears to be related to the least common deuteron (d) fusion reaction, d+d → α+γ. A clear consensus about the underlying effect has also been elusive. One reason for this involves confusion about the approximate (SU2) symmetry: the fact that all d-d fusion reactions conserve isospin has been widely assumed to mean the dynamics is driven by the strong force interaction (SFI), NOT EMI. Thus, most nuclear physicists assume: 1. EMI is static; 2. Dominant reactions have smallest changes in incident kinetic energy (T); and (because of 2), d+d → α+γ is suppressed. But this assumes a stronger form of SU2 symmetry than is present; d+d → α+γ reactions are suppressed not because of large changes in T but because the interaction potential involves EMI and is dynamic (not static), the SFI is static, and the two incident deuterons must have approximate Bose Exchange symmetry and vanishing spin. A generalization of this idea involves a resonant form of reaction, similar to the de-excitation of an atom. These and related (broken gauge) symmetry EMI effects on LENR are discussed.

  5. Region Graph Partition Function Expansion and Approximate Free Energy Landscapes: Theory and Some Numerical Results

    NASA Astrophysics Data System (ADS)

    Zhou, Haijun; Wang, Chuang

    2012-08-01

    Graphical models for finite-dimensional spin glasses and real-world combinatorial optimization and satisfaction problems usually have an abundant number of short loops. The cluster variation method and its extension, the region graph method, are theoretical approaches for treating the complicated short-loop-induced local correlations. For graphical models represented by non-redundant or redundant region graphs, approximate free energy landscapes are constructed in this paper through the mathematical framework of region graph partition function expansion. Several free energy functionals are obtained, each of which uses a set of probability distribution functions or functionals as order parameters. These probability distribution functions/functionals are required to satisfy the region graph belief-propagation equation or the region graph survey-propagation equation to ensure vanishing correction contributions of region subgraphs with dangling edges. As a simple application of the general theory, we perform region graph belief-propagation simulations on the square-lattice ferromagnetic Ising model and the Edwards-Anderson model. Considerable improvements over the conventional Bethe-Peierls approximation are achieved. Collective domains of different sizes in the disordered and frustrated square lattice are identified by the message-passing procedure. Such collective domains and the frustrations among them are responsible for the low-temperature glass-like dynamical behaviors of the system.

  6. Development of approximate method to analyze the characteristics of latent heat thermal energy storage system

    SciTech Connect

    Saitoh, T.S.; Hoshi, Akira

    1999-07-01

    The Third Conference of the Parties to the U.N. Framework Convention on Climate Change (COP3), held last December in Kyoto, urged the industrialized nations to reduce carbon dioxide (CO₂) emissions by 5.2 percent (on average) below the 1990 level during the period between 2008 and 2012 (Kyoto protocol). This implies that even for the most advanced countries, like the US, Japan, and the EU, implementation of drastic policies and the overcoming of many market barriers will be necessary. One idea that leads to a path of low carbon intensity is to adopt an energy storage concept. One of the reasons that the efficiency of conventional energy systems has been relatively low is the lack of an energy storage subsystem. Most past energy systems, for example air-conditioning systems, do not have an energy storage part, and so usually operate with low energy efficiency. First, the effect on CO₂ emissions was examined if LHTES subsystems were incorporated in all residential and building air-conditioning systems. Another field of application of the LHTES is, of course, transportation. Future vehicles will be electric or hybrid vehicles. However, these vehicles will need considerable energy for air-conditioning. The LHTES system can provide enough energy for this purpose by storing nighttime electricity or heat rejected from the radiator or motor. Melting and solidification of a phase change material (PCM) in a capsule are of practical importance in latent heat thermal energy storage (LHTES) systems, which are considered very promising for reducing the peak demand of electricity in the summer season and also reducing carbon dioxide (CO₂) emissions. Two melting modes are involved in melting in capsules. One is close-contact melting between the solid bulk and the capsule wall, and the other is natural convection melting in the liquid (melt) region. Close-contact melting processes for a single enclosure have been solved using several

  7. Einstein-Maxwell Dirichlet walls, negative kinetic energies, and the adiabatic approximation for extreme black holes

    NASA Astrophysics Data System (ADS)

    Andrade, Tomás; Kelly, William R.; Marolf, Donald

    2015-10-01

    The gravitational Dirichlet problem—in which the induced metric is fixed on boundaries at finite distance from the bulk—is related to simple notions of UV cutoffs in gauge/gravity duality and appears in discussions relating the low-energy behavior of gravity to fluid dynamics. We study the Einstein-Maxwell version of this problem, in which the induced Maxwell potential on the wall is also fixed. For flat walls in otherwise asymptotically flat spacetimes, we identify a moduli space of Majumdar-Papapetrou-like static solutions parametrized by the location of an extreme black hole relative to the wall. Such solutions may be described as balancing gravitational repulsion from a negative-mass image source against electrostatic attraction to an oppositely signed image charge. Standard techniques for handling divergences yield a moduli space metric with an eigenvalue that becomes negative near the wall, indicating a region of negative kinetic energy and suggesting that the Hamiltonian may be unbounded below. One may also surround the black hole with an additional (roughly spherical) Dirichlet wall to impose a regulator whose physics is more clear. Negative kinetic energies remain, though new terms do appear in the moduli space metric. The regulator dependence indicates that the adiabatic approximation may be ill-defined for classical extreme black holes with Dirichlet walls.

  8. Generalized gradient approximation exchange energy functional with correct asymptotic behavior of the corresponding potential.

    PubMed

    Carmona-Espíndola, Javier; Gázquez, José L; Vela, Alberto; Trickey, S B

    2015-02-01

    A new non-empirical exchange energy functional of the generalized gradient approximation (GGA) type, which gives an exchange potential with the correct asymptotic behavior, is developed and explored. In combination with the Perdew-Burke-Ernzerhof (PBE) correlation energy functional, the new CAP-PBE (CAP stands for correct asymptotic potential) exchange-correlation functional gives heats of formation, ionization potentials, electron affinities, proton affinities, binding energies of weakly interacting systems, barrier heights for hydrogen and non-hydrogen transfer reactions, bond distances, and harmonic frequencies on standard test sets that are fully competitive with those obtained from other GGA-type functionals that do not have the correct asymptotic exchange potential behavior. Distinct from them, the new functional provides important improvements in quantities dependent upon response functions, e.g., static and dynamic polarizabilities and hyperpolarizabilities. CAP combined with the Lee-Yang-Parr correlation functional gives roughly equivalent results. Consideration of the computed dynamical polarizabilities in the context of the broad spectrum of other properties considered tips the balance to the non-empirical CAP-PBE combination. Intriguingly, these improvements arise primarily from improvements in the highest occupied and lowest unoccupied molecular orbitals, and not from shifts in the associated eigenvalues. Those eigenvalues do not change dramatically with respect to eigenvalues from other GGA-type functionals that do not provide the correct asymptotic behavior of the potential. Unexpected behavior of the potential at intermediate distances from the nucleus explains this unexpected result and indicates a clear route for improvement.

  9. Directed energy transfer in films of CdSe quantum dots: beyond the point dipole approximation.

    PubMed

    Zheng, Kaibo; Žídek, Karel; Abdellah, Mohamed; Zhu, Nan; Chábera, Pavel; Lenngren, Nils; Chi, Qijin; Pullerits, Tõnu

    2014-04-30

    Understanding of Förster resonance energy transfer (FRET) in thin films composed of quantum dots (QDs) is of fundamental and technological significance in the optimal design of QD based optoelectronic devices. The separation between QDs in densely packed films is usually smaller than the size of the QDs, so that the simple point-dipole approximation, widely used in the conventional approach, can no longer offer a quantitative description of the FRET dynamics in such systems. Here, we report investigations of the FRET dynamics in densely packed films composed of multisized CdSe QDs using ultrafast transient absorption spectroscopy and theoretical modeling. The pairwise interdot transfer time was determined to be in the range of 1.5 to 2 ns by spectral analyses which enable separation of the FRET contribution from intrinsic exciton decay. A rational model is suggested by taking into account the distribution of the electronic transition densities in the dots and using the film morphology revealed by AFM images. The FRET dynamics predicted by the model are in good quantitative agreement with experimental observations without adjustable parameters. Finally, we use our theoretical model to calculate dynamics of directed energy transfer in ordered multilayer QD films, which we also observe experimentally. The Monte Carlo simulations reveal that three ideal QD monolayers can provide exciton funneling efficiency above 80% from the most distant layer. Thereby, utilization of directed energy transfer can significantly improve the light harvesting efficiency of QD devices.

  10. An improved phase shift approach to the energy correction of the infinite order sudden approximation

    NASA Astrophysics Data System (ADS)

    Chang, B.; Eno, L.; Rabitz, H.

    1980-07-01

    A new method is presented for obtaining energy corrections to the infinite order sudden (IOS) approximation by incorporating the effect of the internal molecular Hamiltonian into the IOS wave function. This is done by utilizing the JWKB approximation to transform the Schrödinger equation into a differential equation for the phase. It is found that the internal Hamiltonian generates an effective potential from which a new improved phase shift is obtained. This phase shift is then used in place of the IOS phase shift to generate new transition probabilities. As an illustration the resulting improved phase shift (IPS) method is applied to the Secrest-Johnson model for the collinear collision of an atom and diatom. In the vicinity of the sudden limit, the IPS method gives results for transition probabilities, Pn→n+Δn, in significantly better agreement with the 'exact' close coupling calculations than the IOS method, particularly for large Δn. However, when the IOS results are not even qualitatively correct, the IPS method is unable to satisfactorily provide improvements.

  11. Nuclear energy surfaces at high-spin in the A ~ 180 mass region

    SciTech Connect

    Chasman, R.R.; Egido, J.L.; Robledo, L.M.

    1995-08-01

    We are studying nuclear energy surfaces at high spin, with an emphasis on very deformed shapes, using two complementary methods: (1) the Strutinsky method for making surveys of mass regions and (2) Hartree-Fock calculations using a Gogny interaction to study specific nuclei that appear to be particularly interesting from the Strutinsky method calculations. The great advantage of the Strutinsky method is that one can study the energy surfaces of many nuclides (~300) with a single set of calculations. Although the Hartree-Fock calculations are quite time-consuming relative to the Strutinsky calculations, they determine the shape at a minimum without being limited to a few deformation modes. We completed a study of ¹⁸²Os using both approaches. In our cranked Strutinsky calculations, which incorporate a necking mode deformation in addition to quadrupole and hexadecapole deformations, we found three well-separated, deep, strongly deformed minima. The first is characterized by nuclear shapes with axis ratios of 1.5:1; the second by axis ratios of 2.2:1 and the third by axis ratios of 2.9:1. We also studied this nuclide with the density-dependent Gogny interaction at I = 60 using the Hartree-Fock method and found minima characterized by shapes with axis ratios of 1.5:1 and 2.2:1. A comparison of the shapes at these minima, generated in the two calculations, shows that the necking mode of deformation is extremely useful for generating nuclear shapes at large deformation that minimize the energy. The Hartree-Fock calculations are being extended to larger deformations in order to further explore the energy surface in the region of the 2.9:1 minimum.

  12. Discrete Dipole Approximation for Low-Energy Photoelectron Emission from NaCl Nanoparticles

    SciTech Connect

    Berg, Matthew J.; Wilson, Kevin R.; Sorensen, Chris; Chakrabarti, Amit; Ahmed, Musahid

    2011-09-22

    This work presents a model for the photoemission of electrons from sodium chloride nanoparticles 50-500 nm in size, illuminated by vacuum ultraviolet light with energy ranging from 9.4-10.9 eV. The discrete dipole approximation is used to calculate the electromagnetic field inside the particles, from which the two-dimensional angular distribution of emitted electrons is simulated. The emission is found to favor the particle's geometrically illuminated side, and this asymmetry is compared to previous measurements performed at the Lawrence Berkeley National Laboratory. By modeling the nanoparticles as spheres, the Berkeley group is able to semi-quantitatively account for the observed asymmetry. Here however, the particles are modeled as cubes, which is closer to their actual shape, and the interaction of an emitted electron with the particle surface is also considered. The end result shows that the emission asymmetry for these low-energy electrons is more sensitive to the particle-surface interaction than to the specific particle shape, i.e., a sphere or cube.

  13. Multi-term approximation to the Boltzmann transport equation for electron energy distribution functions in nitrogen

    NASA Astrophysics Data System (ADS)

    Feng, Yue

    Plasma is currently a hot topic and it has many significant applications due to its composition of both positively and negatively charged particles. The energy distribution function is important in plasma science since it characterizes the ability of the plasma to affect chemical reactions, affect physical outcomes, and drive various applications. The Boltzmann Transport Equation is an important kinetic equation that provides an accurate basis for characterizing the distribution function---both in energy and space. This dissertation research proposes a multi-term approximation to solve the Boltzmann Transport Equation by treating the relaxation process using an expansion of the electron distribution function in Legendre polynomials. The elastic and 29 inelastic cross sections for electron collisions with nitrogen molecules (N₂) and singly ionized nitrogen molecules (N₂⁺) have been used in this application of the Boltzmann Transport Equation. Different numerical methods have been considered to compare the results. The numerical methods discussed in this thesis are the implicit time-independent method, the time-dependent Euler method, the time-dependent Runge-Kutta method, and finally the implicit time-dependent relaxation method by generating the 4-way grid with a matrix solver. The results show that the implicit time-dependent relaxation method is the most accurate and stable method for obtaining reliable results. The results were observed to match the published experimental data rather well.

  14. Free energy of contact formation in proteins: Efficient computation in the elastic network approximation

    NASA Astrophysics Data System (ADS)

    Hamacher, Kay

    2011-07-01

    Biomolecular simulations have become a major tool in understanding biomolecules and their complexes. However, one can typically only investigate a few mutants or scenarios due to the severe computational demands of such simulations, leading to a great interest in method development to overcome this restriction. One way to achieve this is to reduce the complexity of the systems by an approximation of the forces acting upon the constituents of the molecule. The harmonic approximation used in elastic network models simplifies the physical complexity to the most reduced dynamics of these molecular systems. The reduced polymer modeled this way is typically comprised of mass points representing coarse-grained versions of, e.g., amino acids. In this work, we show how the computation of free energy contributions of contacts between two residues within the molecule can be reduced to a simple lookup operation in a precomputable matrix. Being able to compute such contributions is of great importance: protein design or molecular evolution changes introduce perturbations to these pair interactions, so we need to understand their impact. Perturbation to the interactions occurs due to randomized and fixated changes (in molecular evolution) or designed modifications of the protein structures (in bioengineering). These perturbations are modifications in the topology and the strength of the interactions modeled by the elastic network models. We apply the new algorithm to (1) the bovine trypsin inhibitor, a well-known enzyme in biomedicine, and show the connection to folding properties and the hydrophobic collapse hypothesis and (2) the serine proteinase inhibitor CI-2 and show the correlation to Φ values to characterize folding importance. Furthermore, we discuss the computational complexity and show empirical results for the average case, sampled over a library of 77 structurally diverse proteins. We found a relative speedup of up to 10 000-fold for large proteins with respect to
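    The elastic-network ingredient is straightforward to sketch. The toy Gaussian-network-model calculation below (not the paper's contact free-energy formula) builds a Kirchhoff matrix from coarse-grained coordinates and precomputes the covariance matrix whose entries can then be read off by simple lookup, mirroring the precomputable-matrix idea:

```python
import numpy as np

def gnm_covariance(coords, cutoff=7.0):
    """Gaussian network model: Kirchhoff matrix from C-alpha coordinates and
    the residue-residue covariance matrix via its pseudo-inverse
    (in units of kT/gamma)."""
    n = len(coords)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    contact = (d < cutoff) & ~np.eye(n, dtype=bool)
    kirchhoff = np.diag(contact.sum(axis=1)) - contact.astype(float)
    return np.linalg.pinv(kirchhoff)  # one precomputable n x n matrix

# Toy 'protein': a smooth helical curve standing in for a C-alpha trace.
t = np.linspace(0, 4 * np.pi, 60)
coords = np.column_stack([4 * np.cos(t), 4 * np.sin(t), 1.5 * t])

cov = gnm_covariance(coords)
msf = np.diag(cov)  # mean-square fluctuation profile of the chain
print("most flexible residue:", int(np.argmax(msf)))
# The harmonic-model correlation between residues i and j is read off
# cov[i, j] -- the kind of single-lookup quantity a contact analysis exploits.
```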

  15. High-intensity interval exercise induces 24-h energy expenditure similar to traditional endurance exercise despite reduced time commitment.

    PubMed

    Skelly, Lauren E; Andrews, Patricia C; Gillen, Jenna B; Martin, Brian J; Percival, Michael E; Gibala, Martin J

    2014-07-01

    Subjects performed high-intensity interval training (HIIT) and continuous moderate-intensity training (END) to evaluate 24-h oxygen consumption. Oxygen consumption during HIIT was lower versus END; however, total oxygen consumption over 24 h was similar. These data demonstrate that HIIT and END induce similar 24-h energy expenditure, which may explain the comparable changes in body composition reported despite lower total training volume and time commitment.

  16. Calculation of intermediate-energy electron-impact ionization of molecular hydrogen and nitrogen using the paraxial approximation

    SciTech Connect

    Serov, Vladislav V.

    2011-12-15

    We have implemented the paraxial approximation followed by the time-dependent Hartree-Fock method with a frozen core for the single impact ionization of atoms and two-atomic molecules. It reduces the original scattering problem to the solution of a five-dimensional time-dependent Schrödinger equation. Using this method, we calculated the multiply differential cross section of the impact single ionization of the helium atom, the hydrogen molecule, and the nitrogen molecule from the impact of intermediate-energy electrons. Our results for He and H₂ are quite close to the experimental data. Surprisingly, for N₂ the agreement is good for the paraxial approximation combined with the first Born approximation but worse for the pure paraxial approximation, apparently because of the insufficiency of the frozen-core approximation.

  17. Vibrational coherence and energy transfer in two-dimensional spectra with the optimized mean-trajectory approximation

    SciTech Connect

    Alemi, Mallory; Loring, Roger F.

    2015-06-07

    The optimized mean-trajectory (OMT) approximation is a semiclassical method for computing vibrational response functions from action-quantized classical trajectories connected by discrete transitions that represent radiation-matter interactions. Here, we extend the OMT to include additional vibrational coherence and energy transfer processes. This generalized approximation is applied to a pair of anharmonic chromophores coupled to a bath. The resulting 2D spectra are shown to reflect coherence transfer between normal modes.

  18. Vibrational coherence and energy transfer in two-dimensional spectra with the optimized mean-trajectory approximation

    PubMed Central

    Alemi, Mallory; Loring, Roger F.

    2015-01-01

    The optimized mean-trajectory (OMT) approximation is a semiclassical method for computing vibrational response functions from action-quantized classical trajectories connected by discrete transitions that represent radiation-matter interactions. Here, we extend the OMT to include additional vibrational coherence and energy transfer processes. This generalized approximation is applied to a pair of anharmonic chromophores coupled to a bath. The resulting 2D spectra are shown to reflect coherence transfer between normal modes. PMID:26049437

  19. Effect of initial phase on error in electron energy obtained using paraxial approximation for a focused laser pulse in vacuum

    SciTech Connect

    Singh, Kunwar Pal; Arya, Rashmi; Malik, Anil K.

    2015-09-14

    We have investigated the effect of the initial phase on the error in electron energy obtained using the paraxial approximation to study electron acceleration by a focused laser pulse in vacuum, using a three-dimensional test-particle simulation code. The error is obtained by comparing the energy of the electron for the paraxial approximation and the seventh-order correction description of the fields of a Gaussian laser. The paraxial approximation predicts the wrong laser divergence and the wrong electron escape time from the pulse, which leads to the prediction of higher energy. The error shows strong phase dependence for electrons lying along the axis of the laser for a linearly polarized laser pulse. The relative error may be significant for some specific values of initial phase even at moderate values of laser spot sizes. The error does not show initial phase dependence for a circularly polarized laser pulse.

  20. Rapid approximate calculation of water binding free energies in the whole hydration domain of (bio)macromolecules.

    PubMed

    Reif, Maria M; Zacharias, Martin

    2016-07-01

    The evaluation of water binding free energies around solute molecules is important for the thermodynamic characterization of hydration or association processes. Here, a rapid approximate method to estimate water binding free energies around (bio)macromolecules from a single molecular dynamics simulation is presented. The basic idea is that endpoint free-energy calculation methods are applied and the endpoint quantities are monitored on a three-dimensional grid around the solute. Thus, a gridded map of water binding free energies around the solute is obtained, that is, from a single short simulation, a map of favorable and unfavorable water binding sites can be constructed. Among the employed free-energy calculation methods, approaches involving endpoint information pertaining to actual thermodynamic integration calculations or endpoint information as exploited in the linear interaction energy method were examined. The accuracy of the approximate approaches was evaluated on the hydration of a cage-like molecule representing either a nonpolar, polar, or charged water binding site and on α- and β-cyclodextrin molecules. Among the tested approaches, the linear interaction energy method is considered the most viable approach. Applying the linear interaction energy method on the grid around the solute, a semi-quantitative thermodynamic characterization of hydration around the whole solute is obtained. Disadvantages are the approximate nature of the method and a limited flexibility of the solute. © 2016 Wiley Periodicals, Inc.
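    A rough sketch of the gridded linear-interaction-energy idea, with synthetic trajectory data and assumed LIE coefficients (α = β = 0.5 is purely illustrative): accumulate each water's interaction energies into the voxel it occupies, then average per voxel to obtain an approximate map of water binding free energies:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic trajectory data: per frame, water-oxygen positions and the
# water-solute van der Waals / electrostatic interaction energies
# (hypothetical numbers standing in for real MD output).
n_frames, n_waters = 200, 50
pos = rng.uniform(0.0, 20.0, (n_frames, n_waters, 3))    # Angstrom
e_vdw = rng.normal(-0.5, 0.3, (n_frames, n_waters))       # kcal/mol
e_elec = rng.normal(-2.0, 1.0, (n_frames, n_waters))      # kcal/mol

alpha, beta = 0.5, 0.5    # illustrative LIE coefficients (assumed)
spacing = 1.0             # grid spacing in Angstrom
shape = (20, 20, 20)

sum_g = np.zeros(shape)   # accumulated alpha*Evdw + beta*Eelec per voxel
cnt_g = np.zeros(shape)   # occupancy per voxel

idx = np.minimum((pos / spacing).astype(int), np.array(shape) - 1)
lie = alpha * e_vdw + beta * e_elec
for f in range(n_frames):
    for w in range(n_waters):
        i, j, k = idx[f, w]
        sum_g[i, j, k] += lie[f, w]
        cnt_g[i, j, k] += 1

# Per-voxel LIE estimate of the water binding free energy map.
dG = np.where(cnt_g > 0, sum_g / np.maximum(cnt_g, 1), np.nan)
best = np.unravel_index(np.nanargmin(dG), shape)
print("most favorable voxel:", best, f"dG ~ {dG[best]:.2f} kcal/mol")
```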

  1. Analysis of an Energy Localization Approximation applied to three-dimensional Kinetic Monte Carlo simulations of heteroepitaxial growth

    NASA Astrophysics Data System (ADS)

    Golenbiewski, Kyle L.; Schulze, Tim P.

    2016-10-01

    Heteroepitaxial growth involves depositing one material onto another with a different lattice spacing. This misfit leads to long-range elastic stresses that affect the behavior of the film. Previously, an Energy Localization Approximation was applied to Kinetic Monte Carlo simulations of two-dimensional growth in which the elastic field is updated using a sequence of nested domains. We extend the analysis of this earlier work to a three-dimensional setting and show that while it scales with the increase in dimensionality, a more intuitive Energy Truncation Approximation does not.

  2. Quick benefits of interval training versus continuous training on bone: a dual-energy X-ray absorptiometry comparative study.

    PubMed

    Boudenot, Arnaud; Maurel, Delphine B; Pallu, Stéphane; Ingrand, Isabelle; Boisseau, Nathalie; Jaffré, Christelle; Portier, Hugues

    2015-12-01

    To delay age-related bone loss, physical activity is recommended during growth. However, it is unknown whether interval training is more efficient than continuous training in increasing bone mass both quickly and to a greater extent. The aim of this study was to compare the effects of a 10-week interval training regime with a 14-week continuous training regime on bone mineral density (BMD). Forty-four male Wistar rats (8 weeks old) were separated into four groups: control for 10 weeks (C10), control for 14 weeks (C14), moderate interval training for 10 weeks (IT) and moderate continuous training for 14 weeks (CT). Rats were exercised 1 h/day, 5 days/week. Body composition and BMD of the whole body and femur, respectively, were assessed by dual-energy X-ray absorptiometry at baseline and after training to determine raw gain and weight-normalized BMD gain. Both trained groups had lower weight and fat mass gain when compared to controls. Both trained groups gained more BMD compared to controls when normalized to body weight. Using a 30% shorter training period, the IT group showed more than 20% higher whole body and femur BMD gains compared to the CT group. Our data suggest that moderate IT was able to produce faster bone adaptations than moderate CT.

  3. Interval Training.

    ERIC Educational Resources Information Center

    President's Council on Physical Fitness and Sports, Washington, DC.

    Regardless of the type of physical activity used, interval training is simply repeated periods of physical stress interspersed with recovery periods during which activity of a reduced intensity is performed. During the recovery periods, the individual usually keeps moving and does not completely recover before the next exercise interval (e.g.,…

  4. A new approach to detect congestive heart failure using Teager energy nonlinear scatter plot of R-R interval series.

    PubMed

    Kamath, Chandrakar

    2012-09-01

    A novel approach to distinguish congestive heart failure (CHF) subjects from healthy subjects is proposed. Heart rate variability (HRV) is impaired in CHF subjects. In this work hypothesizing that capturing moment to moment nonlinear dynamics of HRV will reveal cardiac patterning, we construct the nonlinear scatter plot for Teager energy of R-R interval series. The key feature of Teager energy is that it models the energy of the source that generated the signal rather than the energy of the signal itself. Hence, any deviations in the genesis of HRV, by complex interactions of hemodynamic, electrophysiological, and humoral variables, as well as by the autonomic and central nervous regulations, get manifested in the Teager energy function. Comparison of the Teager energy scatter plot with the second-order difference plot (SODP) for normal and CHF subjects reveals significant differences qualitatively and quantitatively. We introduce the concept of curvilinearity for central tendency measures of the plots and define a radial distance index that reveals the efficacy of the Teager energy scatter plot over SODP in separating CHF subjects from healthy subjects. The k-nearest neighbor classifier with RDI as feature showed almost 100% classification rate.
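    A minimal sketch of the construction on a synthetic R-R series: apply the discrete Teager energy operator Ψ[x](n) = x(n)² − x(n−1)x(n+1) and form the scatter points (Ψ(n), Ψ(n+1)). The dispersion measure below is a generic stand-in, not the paper's radial distance index:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic R-R interval series (seconds): respiratory-like modulation plus
# noise, standing in for a real Holter-derived tachogram.
n = 500
rr = (0.85 + 0.05 * np.sin(2 * np.pi * 0.1 * np.arange(n))
      + 0.02 * rng.standard_normal(n))

def teager(x):
    """Discrete Teager energy operator: psi(n) = x(n)^2 - x(n-1)*x(n+1)."""
    return x[1:-1] ** 2 - x[:-2] * x[2:]

psi = teager(rr)

# Scatter-plot coordinates (psi(n), psi(n+1)), the Teager analogue of a
# second-order difference plot; its spread quantifies beat-to-beat dynamics.
pts = np.column_stack([psi[:-1], psi[1:]])
centroid = pts.mean(axis=0)
radial = np.linalg.norm(pts - centroid, axis=1)
print(f"mean radial distance (a generic dispersion index): {radial.mean():.4f}")
```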

  5. Casimir bag energy in the stochastic approximation to the pure QCD vacuum

    SciTech Connect

    Fosco, C. D.; Oxman, L. E.

    2007-01-15

    We study the Casimir contribution to the bag energy coming from gluon field fluctuations, within the context of the stochastic vacuum model of pure QCD. After formulating the problem in terms of the generating functional of field strength cumulants, we argue that the resulting predictions about the Casimir energy are compatible with the phenomenologically required bag energy term.

  6. Development of generalized potential-energy surfaces using many-body expansions, neural networks, and moiety energy approximations

    NASA Astrophysics Data System (ADS)

    Malshe, M.; Narulkar, R.; Raff, L. M.; Hagan, M.; Bukkapatnam, S.; Agrawal, P. M.; Komanduri, R.

    2009-05-01

    A general method for the development of potential-energy hypersurfaces is presented. The method combines a many-body expansion to represent the potential-energy surface with two-layer neural networks (NN) for each M-body term in the summations. The total number of NNs required is significantly reduced by employing a moiety energy approximation. An algorithm is presented that efficiently adjusts all the coupled NN parameters to the database for the surface. Application of the method to four different systems of increasing complexity shows that the fitting accuracy of the method is good to excellent. For some cases, it exceeds that available by other methods currently in the literature. The method is illustrated by fitting large databases of ab initio energies for Siₙ (n = 3, 4, …, 7) clusters obtained from density functional theory calculations and for vinyl bromide (C₂H₃Br) and all products for dissociation into six open reaction channels (12 if the reverse reactions are counted as separate open channels) that include C-H and C-Br bond scissions, three-center HBr dissociation, and three-center H₂ dissociation. The vinyl bromide database comprises the ab initio energies of 71 969 configurations computed at the MP4(SDQ) level with a 6-31G(d,p) basis set for the carbon and hydrogen atoms and Huzinaga's (4333/433/4) basis set augmented with split outer s and p orbitals (43321/4321/4) and a polarization f orbital with an exponent of 0.5 for the bromine atom. It is found that an expansion truncated after the three-body terms is sufficient to fit the Si₅ system with a mean absolute testing set error of 5.693×10⁻⁴ eV. Expansions truncated after the four-body terms for Siₙ (n = 3, 4, 5) and Siₙ (n = 3, 4, …, 7) provide fits whose mean absolute testing set errors are 0.0056 and 0.0212 eV, respectively. For vinyl bromide, a many-body expansion truncated after the four-body terms provides fitting accuracy with mean absolute testing set errors that range between 0.0782 and 0.0808 eV. These

  7. Interval neural networks

    SciTech Connect

    Patil, R.B.

    1995-05-01

    Traditional neural networks like multi-layered perceptrons (MLP) use example patterns, i.e., pairs of real-valued observation vectors (x, y), to approximate a function f(x) = y. To determine the parameters of the approximation, a special version of the gradient descent method called back-propagation is widely used. In many situations, observations of the input and output variables are not precise; instead, we usually have intervals of possible values. The imprecision could be due to the limited accuracy of the measuring instrument or could reflect genuine uncertainty in the observed variables. In such situations, input and output data consist of mixed data types: intervals and precise numbers. Function approximation in interval domains is considered in this paper. We discuss a modification of the classical back-propagation learning algorithm to interval domains. Results are presented with simple examples demonstrating a few properties of nonlinear interval mapping, such as noise resistance and finding a set of solutions to the function approximation problem.
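    One building block of such networks is exact propagation of interval inputs through a dense layer, which only requires splitting the weight matrix by sign; monotone activations then map bounds to bounds. A minimal forward-pass sketch (the paper's modified back-propagation learning rule is not shown):

```python
import numpy as np

def interval_dense(lo, up, W, b):
    """Propagate input intervals [lo, up] through y = W x + b exactly,
    using the sign decomposition of W (standard interval arithmetic)."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    y_lo = W_pos @ lo + W_neg @ up + b
    y_up = W_pos @ up + W_neg @ lo + b
    return y_lo, y_up

def interval_tanh(lo, up):
    """Monotone activations map interval bounds to bounds directly."""
    return np.tanh(lo), np.tanh(up)

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 2)), np.zeros(4)
W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)

# Imprecise observation: each input is an interval, not a point.
x_lo, x_up = np.array([0.9, -0.1]), np.array([1.1, 0.1])

h_lo, h_up = interval_tanh(*interval_dense(x_lo, x_up, W1, b1))
y_lo, y_up = interval_dense(h_lo, h_up, W2, b2)
print(f"output interval: [{y_lo[0]:.3f}, {y_up[0]:.3f}]")
```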

  8. Exact and approximate expressions of energy generation rates and their impact on the explosion properties of pair instability supernovae

    NASA Astrophysics Data System (ADS)

    Takahashi, Koh; Yoshida, Takashi; Umeda, Hideyuki; Sumiyoshi, Kohsuke; Yamada, Shoichi

    2016-02-01

    Energetics of nuclear reactions is fundamentally important to understand the mechanism of pair instability supernovae (PISNe). Based on the hydrodynamic equations and thermodynamic relations, we derive exact expressions for energy conservation suitable to be solved in simulation. We also show that some formulae commonly used in the literature are obtained as approximations of the exact expressions. We simulate the evolution of very massive stars of ~100-320 M⊙ with zero and 1/10 Z⊙ metallicity, and calculate further explosions as PISNe, applying each of the exact and approximate formulae. The calculations demonstrate that the explosion properties of PISNe, such as the mass range, the 56Ni yield, and the explosion energy, are significantly affected by applying the different energy generation rates. We discuss how these results affect the estimate of the PISN detection rate, which depends on the theoretical predictions of such explosion properties.

  9. Self-Interaction Corrected Electronic Structure and Energy Gap of CuAlO2 beyond Local Density Approximation

    NASA Astrophysics Data System (ADS)

    Nakanishi, Akitaka

    2011-05-01

    We implemented a self-interaction correction (SIC) into a first-principles calculation code to go beyond the local density approximation and applied it to CuAlO2. Our simulation shows that the valence band width calculated within the SIC is narrower than that calculated without the SIC because the SIC makes the d-band potential deeper. The energy gap calculated within the SIC expands and is close to the experimental data.

  10. Communication: Two-component ring-coupled-cluster computation of the correlation energy in the random-phase approximation

    SciTech Connect

    Krause, Katharina; Klopper, Wim

    2013-11-21

    Within the framework of density-functional theory, the correlation energy is computed in the random-phase approximation (RPA) using spinors obtained from a two-component relativistic Kohn–Sham calculation accounting for spin–orbit interactions. Ring-coupled-cluster equations are solved to obtain the two-component RPA correlation energy. Results are presented for the hydrides of the halogens Br, I, and At as well as of the coinage metals Cu, Ag, and Au, based on two-component relativistic exact-decoupling Kohn–Sham calculations.

  11. Communication: Two-component ring-coupled-cluster computation of the correlation energy in the random-phase approximation

    NASA Astrophysics Data System (ADS)

    Krause, Katharina; Klopper, Wim

    2013-11-01

    Within the framework of density-functional theory, the correlation energy is computed in the random-phase approximation (RPA) using spinors obtained from a two-component relativistic Kohn-Sham calculation accounting for spin-orbit interactions. Ring-coupled-cluster equations are solved to obtain the two-component RPA correlation energy. Results are presented for the hydrides of the halogens Br, I, and At as well as of the coinage metals Cu, Ag, and Au, based on two-component relativistic exact-decoupling Kohn-Sham calculations.

  12. Communication: two-component ring-coupled-cluster computation of the correlation energy in the random-phase approximation.

    PubMed

    Krause, Katharina; Klopper, Wim

    2013-11-21

    Within the framework of density-functional theory, the correlation energy is computed in the random-phase approximation (RPA) using spinors obtained from a two-component relativistic Kohn-Sham calculation accounting for spin-orbit interactions. Ring-coupled-cluster equations are solved to obtain the two-component RPA correlation energy. Results are presented for the hydrides of the halogens Br, I, and At as well as of the coinage metals Cu, Ag, and Au, based on two-component relativistic exact-decoupling Kohn-Sham calculations. PMID:24320308
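    The ring-coupled-cluster route to the RPA correlation energy can be sketched with random toy matrices: iterate the Riccati-type amplitude equation B + AT + TA + TBT = 0 and take E_c = ½ Tr(BT), cross-checking against the equivalent plasmon formula E_c = ½(Σ_ν ω_ν − Tr A). The matrices below are synthetic stand-ins, not the spinor-based quantities of the paper:

```python
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(5)

# Toy symmetric A, B blocks standing in for the RPA orbital-pair matrices:
# A = (positive pair energies on the diagonal) + coupling, B = coupling.
n = 6
V = 0.1 * rng.standard_normal((n, n))
V = 0.5 * (V + V.T)
A = np.diag(1.0 + rng.random(n)) + V
B = V.copy()

# Direct ring-CCD: solve B + A T + T A + T B T = 0 by a perturbative
# (Jacobi-style) iteration on the amplitudes T.
d = np.diag(A)
T = np.zeros((n, n))
for _ in range(200):
    R = B + A @ T + T @ A + T @ B @ T
    T = T - R / (d[:, None] + d[None, :])
Ec_rccd = 0.5 * np.trace(B @ T)

# Cross-check: RPA plasmon formula Ec = 1/2 (sum of omega_nu - tr A).
S = np.real(sqrtm(A - B))
M = S @ (A + B) @ S
omega = np.sqrt(np.linalg.eigvalsh(0.5 * (M + M.T)))
Ec_plasmon = 0.5 * (omega.sum() - np.trace(A))

print(f"ring-CCD Ec = {Ec_rccd:.6f}   plasmon Ec = {Ec_plasmon:.6f}")
```

    For a stable toy problem the two numbers agree, reflecting the known equivalence between direct ring-CCD and the RPA correlation energy.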

  13. High-intensity interval training, solutions to the programming puzzle. Part II: anaerobic energy, neuromuscular load and practical applications.

    PubMed

    Buchheit, Martin; Laursen, Paul B

    2013-10-01

    High-intensity interval training (HIT) is a well-known, time-efficient training method for improving cardiorespiratory and metabolic function and, in turn, physical performance in athletes. HIT involves repeated short (<45 s) to long (2-4 min) bouts of rather high-intensity exercise interspersed with recovery periods (refer to the previously published first part of this review). While athletes have used 'classical' HIT formats for nearly a century (e.g. repetitions of 30 s of exercise interspersed with 30 s of rest, or 2-4-min interval repetitions run at high but still submaximal intensities), there is today a surge of research interest focused on examining the effects of short sprints and all-out efforts, both in the field and in the laboratory. Prescription of HIT consists of the manipulation of at least nine variables (e.g. work interval intensity and duration, relief interval intensity and duration, exercise modality, number of repetitions, number of series, between-series recovery duration and intensity); any of which has a likely effect on the acute physiological response. Manipulating HIT appropriately is important, not only with respect to the expected middle- to long-term physiological and performance adaptations, but also to maximize daily and/or weekly training periodization. Cardiopulmonary responses are typically the first variables to consider when programming HIT (refer to Part I). However, anaerobic glycolytic energy contribution and neuromuscular load should also be considered to maximize the training outcome. Contrasting HIT formats that elicit similar (and maximal) cardiorespiratory responses have been associated with distinctly different anaerobic energy contributions. The high locomotor speed/power requirements of HIT (i.e. ≥95% of the minimal velocity/power that elicits maximal oxygen uptake [v/pV̇O₂max] to 100% of maximal sprinting speed or power) and the accumulation of high training volumes at high exercise intensity (runners can

  14. Four-body corrected first Born approximation for single charge exchange at high impact energies

    NASA Astrophysics Data System (ADS)

    Mančev, Ivan

    1995-06-01

    Single electron capture is investigated by means of the four-body boundary corrected first Born approximation (CB1-4B). The "post" form of the transition amplitude for a general heteronuclear case (Zp; e1) + (ZT; e2) → (Zp; e1, e2) + ZT is derived in the form of readily obtainable two-dimensional real integrals. We investigate the sensitivity of the total cross sections to the choice of the ground state wave function for helium-like atoms. Also, the influence of the non-captured electron on the final results is studied. As an illustration, the CB1-4B method is used to compute the total cross sections for the reactions H(1s) + H(1s) → H-(1s2) + H+, He+(1s) + H(1s) → He(1s2) + H+ and He+(1s) + He+(1s) → He(1s2) + α. The theoretical cross sections are found to be in good agreement with the available experimental data.

  15. Lateral distribution of high energy muons in EAS of sizes Ne ≈ 10⁵ and Ne ≈ 10⁶

    NASA Technical Reports Server (NTRS)

    Bazhutov, Y. N.; Ermakov, G. G.; Fomin, G. G.; Isaev, V. I.; Jarochkina, Z. V.; Kalmykov, N. N.; Khrenov, B. A.; Khristiansen, G. B.; Kulikov, G. V.; Motova, M. V.

    1985-01-01

    Muon energy spectra and the muon lateral distribution in EAS were investigated with an underground magnetic spectrometer working as part of the extensive air showers (EAS) array. For every registered muon the data on the EAS are analyzed and the following EAS parameters are obtained: size Nₑ, distance r from the shower axis to the muon, and age parameter s. The number of muons with energy over some threshold E associated with EAS of fixed parameters is measured, I_reg. To obtain traditional characteristics, muon flux densities as a function of the distance r and muon energy E, the muon lateral distribution and energy spectra are discussed for a hadron-nucleus interaction model and the composition of primary cosmic rays.

  16. Free Energy Contribution Analysis Using Response Kernel Approximation: Insights into the Acylation Reaction of a Beta-Lactamase.

    PubMed

    Asada, Toshio; Ando, Kanta; Bandyopadhyay, Pradipta; Koseki, Shiro

    2016-09-01

    A widely applicable free energy contribution analysis (FECA) method based on the quantum mechanical/molecular mechanical (QM/MM) approximation using response kernel approaches has been proposed to investigate the influences of environmental residues and/or atoms in the QM region on the free energy profile. This method can evaluate atomic contributions to the free energy along the reaction path including polarization effects on the QM region within a dramatically reduced computational time. The rate-limiting step in the deactivation of the β-lactam antibiotic cefalotin (CLS) by β-lactamase was studied using this method. The experimentally observed activation barrier was successfully reproduced by free energy perturbation calculations along the optimized reaction path that involved activation by the carboxylate moiety in CLS. It was found that the free energy profile in the QM region was slightly higher than the isolated energy and that two residues, Lys67 and Lys315, as well as water molecules deeply influenced the QM atoms associated with the bond alternation reaction in the acyl-enzyme intermediate. These facts suggested that the surrounding residues are favorable for the reactant complex and prevent the intermediate from being too stabilized to proceed to the following deacylation reaction. We have demonstrated that the free energy contribution analysis should be a useful method to investigate enzyme catalysis and to facilitate intelligent molecular design. PMID:27501066

  17. Fast computation of molecular random phase approximation correlation energies using resolution of the identity and imaginary frequency integration

    NASA Astrophysics Data System (ADS)

    Eshuis, Henk; Yarkony, Julian; Furche, Filipp

    2010-06-01

    The random phase approximation (RPA) is an increasingly popular post-Kohn-Sham correlation method, but its high computational cost has limited molecular applications to systems with few atoms. Here we present an efficient implementation of RPA correlation energies based on a combination of resolution of the identity (RI) and imaginary frequency integration techniques. We show that the RI approximation to four-index electron repulsion integrals leads to a variational upper bound to the exact RPA correlation energy if the Coulomb metric is used. Auxiliary basis sets optimized for second-order Møller-Plesset (MP2) calculations are well suited to RPA, as is demonstrated for the HEAT [A. Tajti et al., J. Chem. Phys. 121, 11599 (2004)] and MOLEKEL [F. Weigend et al., Chem. Phys. Lett. 294, 143 (1998)] benchmark sets. Using imaginary frequency integration rather than diagonalization to compute the matrix square root necessary for RPA, evaluation of the RPA correlation energy requires O(N⁴ log N) operations and O(N³) storage only; the price for this dramatic improvement over existing algorithms is a numerical quadrature. We propose a numerical integration scheme that is exact in the two-orbital case and converges exponentially with the number of grid points. For most systems, 30-40 grid points yield μH accuracy in triple zeta basis sets, but much larger grids are necessary for small-gap systems. The lowest-order approximation to the present method is a post-Kohn-Sham frequency-domain version of opposite-spin Laplace-transform RI-MP2 [J. Jung et al., Phys. Rev. B 70, 205107 (2004)]. Timings for polyacenes with up to 30 atoms show speed-ups of two orders of magnitude over previous implementations. The present approach makes it possible to routinely compute RPA correlation energies of systems well beyond 100 atoms, as is demonstrated for the octapeptide angiotensin II.
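
    To make the imaginary-frequency integration idea concrete, here is a minimal numerical sketch (not the authors' RI implementation): for a single-transition (two-orbital) model, the direct-RPA correlation energy from the frequency integral can be checked against the closed-form plasmon expression. The parameters delta and k are hypothetical.

```python
import numpy as np

# Toy model: one occupied-virtual transition with bare excitation
# energy delta and Coulomb coupling k (hypothetical parameters).
delta, k = 1.0, 0.3

# Exact direct-RPA correlation energy from the plasmon formula:
# E_c = (Omega - delta - k)/2 with Omega^2 = delta*(delta + 2k).
e_exact = 0.5 * (np.sqrt(delta * (delta + 2.0 * k)) - delta - k)

# ACFD form: E_c = (1/2pi) Int_0^inf [ln(1 - chi0(iw)*k) + chi0(iw)*k] dw,
# with chi0(iw) = -2*delta/(delta^2 + w^2) for a single transition.
def integrand(w):
    chi0 = -2.0 * delta / (delta**2 + w**2)
    return np.log(1.0 - chi0 * k) + chi0 * k

# Gauss-Legendre nodes on [0, 1], mapped to [0, inf) via w = t/(1 - t).
t, wt = np.polynomial.legendre.leggauss(40)
t = 0.5 * (t + 1.0)           # shift from [-1, 1] to [0, 1]
wt = 0.5 * wt
w = t / (1.0 - t)
jac = 1.0 / (1.0 - t)**2      # dw/dt of the mapping

e_quad = np.sum(wt * jac * integrand(w)) / (2.0 * np.pi)
print(e_exact, e_quad)        # the two values agree closely
```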

  18. Brownian motors in the low-energy approximation: Classification and properties

    SciTech Connect

    Rozenbaum, V. M.

    2010-04-15

    We classify Brownian motors based on the expansion of their velocity in terms of the reciprocal friction coefficient. The two main classes of motors (with dichotomic fluctuations in homogeneous force and periodic potential energy) are characterized by different analytical dependences of their mean velocity on the spatial and temporal asymmetry coefficients and by different adiabatic limits. The competition between the spatial and temporal asymmetries gives rise to stopping points. The transition through these points can be achieved by varying the asymmetry coefficients, temperature, and other motor parameters, which can be used, for example, for nanoparticle segregation. The proposed classification separates out a new type of motor based on synchronous fluctuations in a symmetric potential and an applied homogeneous force. As an example of this type of motor, we consider a near-surface motor whose two-dimensional motion (parallel and perpendicular to the substrate plane) results from fluctuations in an external force inclined to the surface.
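
    As a toy illustration of the general setting (not the paper's reciprocal-friction expansion), the following overdamped Langevin sketch simulates a flashing asymmetric sawtooth ratchet and estimates its mean velocity; all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Asymmetric sawtooth potential of period L and height U0: the maximum
# sits at x = a*L, so a != 0.5 breaks spatial symmetry (illustrative values).
L, U0, a = 1.0, 5.0, 0.2

def force(x, on):
    """Minus the slope of the sawtooth; zero while the potential is off."""
    if not on:
        return np.zeros_like(x)
    xm = np.mod(x, L)
    return np.where(xm < a * L, -U0 / (a * L), U0 / ((1.0 - a) * L))

# Overdamped Euler-Maruyama: dx = F(x) dt + sqrt(2 T) dW (friction = kB = 1).
T, dt, n_steps, n_part = 0.5, 1e-4, 100_000, 500
flip_every = int(0.05 / dt)   # dichotomous on/off switching period

x = np.zeros(n_part)
on = True
for step in range(n_steps):
    if step % flip_every == 0:
        on = not on
    x += force(x, on) * dt + np.sqrt(2.0 * T * dt) * rng.standard_normal(n_part)

# With a < 0.5 the drift should be directed toward the steep side.
print("mean velocity:", x.mean() / (n_steps * dt))
```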

  19. Vibration-translation energy transfer in anharmonic diatomic molecules. I - A comparative evaluation of the semiclassical approximation

    NASA Technical Reports Server (NTRS)

    Mckenzie, R. L.

    1975-01-01

    The semiclassical approximation (quantum oscillator, classical path) is applied to anharmonic diatomic oscillators in excited initial states. Multistate numerical solutions giving the vibrational transition probabilities for collinear collisions with an inert atom are compared with equivalent, exact quantum-mechanical calculations. Several symmetrization methods are shown to correlate accurately the predictions of both theories for all initial states, transitions, and molecular types tested, but only if coupling of the oscillator motion and the classical trajectory of the incident particle is considered. In anharmonic heteronuclear molecules, the customary semiclassical method of computing the classical trajectory independently leads to transition probabilities with anomalous low-energy resonances. Proper accounting of the effects of oscillator compression and recoil on the incident particle trajectory removes the anomalies and restores the applicability of the semiclassical approximation.

  1. Multiple scattering of low energy H+ ions in matter: Approximation of mean energy on the Sigmund and Winterbon model

    NASA Astrophysics Data System (ADS)

    Mekhtiche, A.; Khalal-Kouache, K.

    2016-09-01

    In this paper, angular distributions of slow H⁺ ions transmitted through different targets (Al, Ag and Au) are calculated using the model of Sigmund and Winterbon (SW) in the multiple scattering theory. Valdés and Arista (VA) developed a method extending the SW model by including the effect of energy loss in the calculation of angular distributions of transmitted ions. Another method has been proposed for such calculations: one can apply the SW model using an average value for the energy of the ions inside the target. In this contribution, a new expression is proposed for the mean energy which gives better agreement with the VA model at low energy than the previous one. Different potentials have been considered to describe the projectile-target atom interaction in this study, and the new expression is found to be independent of the interaction potential.
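
    The paper's new mean-energy expression is not reproduced here; the sketch below only illustrates the generic idea of replacing E(x) by a depth-averaged energy, assuming a velocity-proportional stopping power, which is an assumption of this example rather than of the paper.

```python
import numpy as np

# Toy continuous-slowing-down model for slow ions: electronic stopping
# proportional to velocity, S(E) = -dE/dx = kappa*sqrt(E) (illustrative).
# Then sqrt(E(x)) = sqrt(E0) - kappa*x/2, and the depth-averaged mean
# energy over a foil of thickness d follows by direct integration.
E0, kappa, d = 10.0, 0.5, 2.0    # hypothetical units

x = np.linspace(0.0, d, 10_001)
E_x = (np.sqrt(E0) - 0.5 * kappa * x)**2

E_mean_numeric = np.trapz(E_x, x) / d
E_exit = E_x[-1]
# Simple alternative often used: arithmetic mean of entrance and exit energies.
E_mean_simple = 0.5 * (E0 + E_exit)

print(E_mean_numeric, E_mean_simple)
```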

  2. Potential-Energy Surfaces, the Born-Oppenheimer Approximations, and the Franck-Condon Principle: Back to the Roots.

    PubMed

    Mustroph, Heinz

    2016-09-01

    The concept of a potential-energy surface (PES) is central to our understanding of spectroscopy, photochemistry, and chemical kinetics. However, the terminology used in connection with the basic approximations is variously, and somewhat confusingly, represented with such phrases as "adiabatic", "Born-Oppenheimer", or "Born-Oppenheimer adiabatic" approximation. Concerning the closely relevant and important Franck-Condon principle (FCP), the IUPAC definition differentiates between a classical and quantum mechanical formulation. Consequently, in many publications we find terms such as "Franck-Condon (excited) state", or a vertical transition to the "Franck-Condon point" with the "Franck-Condon geometry" that relaxes to the excited-state equilibrium geometry. The Born-Oppenheimer approximation and the "classical" model of the Franck-Condon principle are typical examples of misused terms and lax interpretations of the original theories. In this essay, we revisit the original publications of pioneers of the PES concept and the FCP to help stimulate a lively discussion and clearer thinking around these important concepts.

  3. Exchange energy gradients with respect to atomic positions and cell parameters within the Hartree-Fock Gamma-point approximation.

    PubMed

    Weber, Valéry; Daul, Claude; Challacombe, Matt

    2006-06-01

    Recently, linear scaling construction of the periodic exact Hartree-Fock exchange matrix within the Gamma-point approximation has been introduced [J. Chem. Phys. 122, 124105 (2005)]. In this article, a formalism for evaluation of analytical Hartree-Fock exchange energy gradients with respect to atomic positions and cell parameters at the Gamma-point approximation is presented. While the evaluation of exchange gradients with respect to atomic positions is similar to those in the gas phase limit, the gradients with respect to cell parameters involve the accumulation of atomic gradients multiplied by appropriate factors and a modified electron repulsion integral (ERI). This latter integral arises from use of the minimum image convention in the definition of the Gamma-point Hartree-Fock approximation. We demonstrate how this new ERI can be computed with the help of a modified vertical recurrence relation in the frame of the Obara-Saika and Head-Gordon-Pople algorithm. As an illustration, the analytical gradients have been used in conjunction with the QUICCA algorithm [K. Nemeth and M. Challacombe, J. Chem. Phys. 121, 2877 (2004)] to optimize periodic systems at the Hartree-Fock level of theory. PMID:16774396

  4. Exchange energy gradients with respect to atomic positions and cell parameters within the Hartree-Fock Γ-point approximation

    NASA Astrophysics Data System (ADS)

    Weber, Valéry; Daul, Claude; Challacombe, Matt

    2006-06-01

    Recently, linear scaling construction of the periodic exact Hartree-Fock exchange matrix within the Γ-point approximation has been introduced [J. Chem. Phys. 122, 124105 (2005)]. In this article, a formalism for evaluation of analytical Hartree-Fock exchange energy gradients with respect to atomic positions and cell parameters at the Γ-point approximation is presented. While the evaluation of exchange gradients with respect to atomic positions is similar to those in the gas phase limit, the gradients with respect to cell parameters involve the accumulation of atomic gradients multiplied by appropriate factors and a modified electron repulsion integral (ERI). This latter integral arises from use of the minimum image convention in the definition of the Γ-point Hartree-Fock approximation. We demonstrate how this new ERI can be computed with the help of a modified vertical recurrence relation in the frame of the Obara-Saika and Head-Gordon-Pople algorithm. As an illustration, the analytical gradients have been used in conjunction with the QUICCA algorithm [K. Németh and M. Challacombe, J. Chem. Phys. 121, 2877 (2004)] to optimize periodic systems at the Hartree-Fock level of theory.

  5. Approximation of properties of hyperelastic materials with use of energy-based models and biaxial tension data

    NASA Astrophysics Data System (ADS)

    Jamróz, Weronika

    2016-06-01

    The paper shows how energy-based models approximate the mechanical properties of hyperelastic materials. The main goal of the research was to create a method of finding the set of material constants that enter the strain energy function constituting the heart of an energy-based model. The optimal set of material constants determines the best adjustment of the theoretical stress-strain relation to the experimental one. This kind of adjustment enables better prediction of the behaviour of a chosen material. In order to obtain a more precise solution, the approximation was made using data obtained in a modern experiment described in detail in [1]. To save computation time, the main algorithm is based on genetic algorithms.
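
    A minimal sketch of the constant-fitting idea, assuming a two-parameter Mooney-Rivlin strain-energy function and equibiaxial-tension data; the genetic-style search is SciPy's differential evolution standing in for the author's algorithm, and the "experimental" points are synthetic.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Equibiaxial Cauchy stress for an incompressible Mooney-Rivlin solid
# W = C1*(I1 - 3) + C2*(I2 - 3):  sigma = 2*(lam^2 - lam^-4)*(C1 + C2*lam^2).
def sigma_model(lam, c1, c2):
    return 2.0 * (lam**2 - lam**-4) * (c1 + c2 * lam**2)

# Synthetic "experimental" data from known constants plus noise
# (stand-in for digitized biaxial-tension measurements).
rng = np.random.default_rng(1)
lam_data = np.linspace(1.05, 2.0, 20)
sigma_data = sigma_model(lam_data, 0.16, 0.05) * (1 + 0.02 * rng.standard_normal(20))

def misfit(params):
    c1, c2 = params
    return np.sum((sigma_model(lam_data, c1, c2) - sigma_data)**2)

# Differential evolution plays the role of the genetic search for the
# material constants within physically plausible bounds.
result = differential_evolution(misfit, bounds=[(0.0, 1.0), (0.0, 1.0)], seed=2)
print("fitted C1, C2:", result.x)
```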

  6. Calculation of Electrochemical Energy Levels in Water Using the Random Phase Approximation and a Double Hybrid Functional.

    PubMed

    Cheng, Jun; VandeVondele, Joost

    2016-02-26

    Understanding charge transfer at electrochemical interfaces requires consistent treatment of electronic energy levels in solids and in water at the same level of the electronic structure theory. Using density-functional-theory-based molecular dynamics and thermodynamic integration, the free energy levels of six redox couples in water are calculated at the level of the random phase approximation and a double hybrid density functional. The redox levels, together with the water band positions, are aligned against a computational standard hydrogen electrode, allowing for critical analysis of errors compared to the experiment. It is encouraging that both methods offer a good description of the electronic structures of the solutes and water, showing promise for a full treatment of electrochemical interfaces.
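
    A schematic of the thermodynamic-integration step used in such calculations: the free-energy difference is a quadrature of the ensemble-averaged vertical energy gap over the coupling parameter. The gap values below are placeholders, not data from the paper.

```python
import numpy as np

# Thermodynamic integration: dA = Int_0^1 <dH/dlambda>_lambda dlambda.
# For redox free energies the integrand is the ensemble-averaged vertical
# energy gap <Delta E>_lambda sampled from MD at each coupling value.
lam = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
gap = np.array([2.31, 2.10, 1.88, 1.72, 1.55])   # placeholder averages (eV)

delta_A = np.trapz(gap, lam)   # trapezoidal quadrature over lambda
print(f"Free energy difference: {delta_A:.3f} eV")
```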

  7. Approximate constants of motion for classically chaotic vibrational dynamics - Vague tori, semiclassical quantization, and classical intramolecular energy flow

    NASA Technical Reports Server (NTRS)

    Shirts, R. B.; Reinhardt, W. P.

    1982-01-01

    Substantial short time regularity, even in the chaotic regions of phase space, is found for what is seen as a large class of systems. This regularity manifests itself through the behavior of approximate constants of motion calculated by Pade summation of the Birkhoff-Gustavson normal form expansion; it is attributed to remnants of destroyed invariant tori in phase space. The remnant torus-like manifold structures are used to justify Einstein-Brillouin-Keller semiclassical quantization procedures for obtaining quantum energy levels, even in the absence of complete tori. They also provide a theoretical basis for the calculation of rate constants for intramolecular mode-mode energy transfer. These results are illustrated by means of a thorough analysis of the Henon-Heiles oscillator problem. Possible generality of the analysis is demonstrated by brief consideration of classical dynamics for the Barbanis Hamiltonian, Zeeman effect in hydrogen and recent results of Wolf and Hase (1980) for the H-C-C fragment.

  8. What Dominates the Error in the CaO Diatomic Bond Energy Predicted by Various Approximate Exchange-Correlation Functionals?

    PubMed

    Yu, Haoyu; Truhlar, Donald G

    2014-06-10

    In order to understand what governs the accuracy of approximate exchange-correlation functionals for intrinsically multiconfigurational systems containing metal atoms, the properties of the ground electronic state of CaO have been studied in detail. We first applied the T1, TAE(T), B1, and M diagnostics to CaO and confirmed that CaO is an intrinsically multiconfigurational system. Then, we compared the bond dissociation energies (BDEs) of CaO as calculated by 49 exchange-correlation functionals, three exchange-only functionals, and the HF method. To analyze the error in the BDEs for the various functionals, we decomposed each calculated BDE into four components, in particular the ionization potential, the electron affinity, the atomic excitation energy of the metal cation to prepare the valence state, and the interaction energy between prepared states. We found that the dominant error occurs in the calculated atomic excitation energy of the cation. Third, we compared dipole moments of CaO as calculated by the 53 methods, and we analyzed the dipole moments in terms of partial atomic charges to understand the contribution of ionic bonding and how it is affected by errors in the calculated ionization potential of the metal atom. We then analyzed the dipole moment in terms of the charge distribution among orbitals, and we found that the orbital charge distribution does not correlate well with the difference between the calculated ionization potential and electron affinity. Fourth, we examined the potential curves and internuclear distance dependence of the orbital energies of the lowest-energy CaO singlet and triplet states to analyze the near-degeneracy aspect of the correlation energy. The most important conclusion is that the error tends to be dominated by the error in the relative energies of s and d orbitals in Ca(+), and the most popular density functionals predict this excitation energy poorly. Thus, even if they were to predict the BDE reasonably well, it would

  9. Analytical model for ion stopping power and range in the therapeutic energy interval for beams of hydrogen and heavier ions.

    PubMed

    Donahue, William; Newhauser, Wayne D; Ziegler, James F

    2016-09-01

    Many different approaches exist to calculate stopping power and range of protons and heavy charged particles. These methods may be broadly categorized as physically complete theories (widely applicable and complex) or semi-empirical approaches (narrowly applicable and simple). However, little attention has been paid in the literature to approaches that are both widely applicable and simple. We developed simple analytical models of stopping power and range for ions of hydrogen, carbon, iron, and uranium that spanned intervals of ion energy from 351 keV u⁻¹ to 450 MeV u⁻¹ or wider. The analytical models typically reproduced the best-available evaluated stopping powers within 1% and ranges within 0.1 mm. The computational speed of the analytical stopping power model was 28% faster than a full-theoretical approach. The calculation of range using the analytic range model was 945 times faster than a widely-used numerical integration technique. The results of this study revealed that the new, simple analytical models are accurate, fast, and broadly applicable. The new models require just 6 parameters to calculate stopping power and range for a given ion and absorber. The proposed model may be useful as an alternative to traditional approaches, especially in applications that demand fast computation speed, small memory footprint, and simplicity. PMID: 27530803

  10. Analytical model for ion stopping power and range in the therapeutic energy interval for beams of hydrogen and heavier ions

    NASA Astrophysics Data System (ADS)

    Donahue, William; Newhauser, Wayne D.; Ziegler, James F.

    2016-09-01

    Many different approaches exist to calculate stopping power and range of protons and heavy charged particles. These methods may be broadly categorized as physically complete theories (widely applicable and complex) or semi-empirical approaches (narrowly applicable and simple). However, little attention has been paid in the literature to approaches that are both widely applicable and simple. We developed simple analytical models of stopping power and range for ions of hydrogen, carbon, iron, and uranium that spanned intervals of ion energy from 351 keV u⁻¹ to 450 MeV u⁻¹ or wider. The analytical models typically reproduced the best-available evaluated stopping powers within 1% and ranges within 0.1 mm. The computational speed of the analytical stopping power model was 28% faster than a full-theoretical approach. The calculation of range using the analytic range model was 945 times faster than a widely-used numerical integration technique. The results of this study revealed that the new, simple analytical models are accurate, fast, and broadly applicable. The new models require just 6 parameters to calculate stopping power and range for a given ion and absorber. The proposed model may be useful as an alternative to traditional approaches, especially in applications that demand fast computation speed, small memory footprint, and simplicity.
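
    The paper's 6-parameter model itself is not given in the abstract, so the sketch below uses an invented 6-parameter stopping-power form purely to illustrate the workflow: evaluate S(E) analytically and obtain the range from the CSDA integral R(E0) = ∫ dE/S(E).

```python
import numpy as np
from scipy.integrate import quad

# Illustrative 6-parameter analytic stopping-power form (NOT the paper's
# model): a power-law fall-off bridged to a low-energy rise.
# p = (a, b, c, d, e, f) are hypothetical fit parameters.
def stopping_power(E, p):
    a, b, c, d, e, f = p
    return a * E**b / (1.0 + c * E**d) + e * np.sqrt(E) / (1.0 + f * E)

# CSDA range: R(E0) = Int_0^E0 dE / S(E), cut off just above zero energy.
def csda_range(E0, p):
    val, _ = quad(lambda E: 1.0 / stopping_power(E, p), 1e-3, E0)
    return val

p = (260.0, -0.8, 0.002, 1.1, 40.0, 0.5)   # made-up parameters
for E0 in (10.0, 100.0, 250.0):            # e.g. MeV per nucleon
    print(E0, csda_range(E0, p))
```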

  11. Approximations for photoelectron scattering

    NASA Astrophysics Data System (ADS)

    Fritzsche, V.

    1989-04-01

    The errors of several approximations in the theoretical approach of photoelectron scattering are systematically studied, in tungsten, for electron energies ranging from 10 to 1000 eV. The large inaccuracies of the plane-wave approximation (PWA) are substantially reduced by means of effective scattering amplitudes in the modified small-scattering-centre approximation (MSSCA). The reduced angular momentum expansion (RAME) is so accurate that it allows reliable calculations of multiple-scattering contributions for all the energies considered.

  12. Validity of the relativistic impulse approximation for elastic proton-nucleus scattering at energies lower than 200 MeV

    SciTech Connect

    Li, Z. P.; Hillhouse, G. C.; Meng, J.

    2008-07-15

    We present the first study to examine the validity of the relativistic impulse approximation (RIA) for describing elastic proton-nucleus scattering at incident laboratory kinetic energies lower than 200 MeV. For simplicity we choose a ²⁰⁸Pb target, which is a spin-saturated spherical nucleus for which reliable nuclear structure models exist. Microscopic scalar and vector optical potentials are generated by folding invariant scalar and vector nucleon-nucleon (NN) scattering amplitudes, based on our recently developed relativistic meson-exchange model, with Lorentz scalar and vector densities resulting from the accurately calibrated PK1 relativistic mean field model of nuclear structure. It is seen that phenomenological Pauli blocking (PB) effects and density-dependent corrections to σN and ωN meson-nucleon coupling constants modify the RIA microscopic scalar and vector optical potentials so as to provide a consistent and quantitative description of all elastic scattering observables, namely, total reaction cross sections, differential cross sections, analyzing powers and spin rotation functions. In particular, the effect of PB becomes more significant at energies lower than 200 MeV, whereas phenomenological density-dependent corrections to the NN interaction also play an increasingly important role at energies lower than 100 MeV.

  13. Spin-unrestricted random-phase approximation with range separation: Benchmark on atomization energies and reaction barrier heights

    SciTech Connect

    Mussard, Bastien; Reinhardt, Peter; Toulouse, Julien; Ángyán, János G.

    2015-04-21

    We consider several spin-unrestricted random-phase approximation (RPA) variants for calculating correlation energies, with and without range separation, and test them on datasets of atomization energies and reaction barrier heights. We show that range separation greatly improves the accuracy of all RPA variants for these properties. Moreover, we show that a RPA variant with exchange, hereafter referred to as RPAx-SO2, first proposed by Szabo and Ostlund [J. Chem. Phys. 67, 4351 (1977)] in a spin-restricted closed-shell formalism, and extended here to a spin-unrestricted formalism, provides on average the most accurate range-separated RPA variant for atomization energies and reaction barrier heights. Since this range-separated RPAx-SO2 method had already been shown to be among the most accurate range-separated RPA variants for weak intermolecular interactions [J. Toulouse et al., J. Chem. Phys. 135, 084119 (2011)], this work confirms range-separated RPAx-SO2 as a promising method for general chemical applications.

  14. Detectability of auditory signals presented without defined observation intervals

    NASA Technical Reports Server (NTRS)

    Watson, C. S.; Nichols, T. L.

    1976-01-01

    Ability to detect tones in noise was measured without defined observation intervals. Latency density functions were estimated for the first response following a signal and, separately, for the first response following randomly distributed instances of background noise. Detection performance was measured by the maximum separation between the cumulative latency density functions for signal-plus-noise and for noise alone. Values of the index of detectability, estimated by this procedure, were approximately those obtained with a 2-dB weaker signal and defined observation intervals. Simulation of defined- and non-defined-interval tasks with an energy detector showed that this device performs very similarly to the human listener in both cases.
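
    The detection measure described above can be mimicked as the maximum separation between two empirical cumulative latency distributions, as in this sketch with simulated (entirely hypothetical) latency data.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated first-response latencies (s) after signal onsets vs. after
# randomly chosen instants of background noise (hypothetical distributions).
lat_signal = rng.gamma(shape=2.0, scale=0.15, size=500)   # faster responses
lat_noise = rng.gamma(shape=2.0, scale=0.30, size=500)

# Maximum separation between the two empirical cumulative latency
# distributions (a Kolmogorov-Smirnov-like detection index).
grid = np.sort(np.concatenate([lat_signal, lat_noise]))
cdf_s = np.searchsorted(np.sort(lat_signal), grid, side="right") / lat_signal.size
cdf_n = np.searchsorted(np.sort(lat_noise), grid, side="right") / lat_noise.size
print("detection index:", np.max(cdf_s - cdf_n))
```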

  15. Approximating Optimal Behavioural Strategies Down to Rules-of-Thumb: Energy Reserve Changes in Pairs of Social Foragers

    PubMed Central

    Rands, Sean A.

    2011-01-01

    Functional explanations of behaviour often propose optimal strategies for organisms to follow. These ‘best’ strategies could be difficult to perform given biological constraints such as neural architecture and physiological constraints. Instead, simple heuristics or ‘rules-of-thumb’ that approximate these optimal strategies may instead be performed. From a modelling perspective, rules-of-thumb are also useful tools for considering how group behaviour is shaped by the behaviours of individuals. Using simple rules-of-thumb reduces the complexity of these models, but care needs to be taken to use rules that are biologically relevant. Here, we investigate the similarity between the outputs of a two-player dynamic foraging game (which generated optimal but complex solutions) and a computational simulation of the behaviours of the two members of a foraging pair, who instead followed a rule-of-thumb approximation of the game's output. The original game generated complex results, and we demonstrate here that the simulations following the much-simplified rules-of-thumb also generate complex results, suggesting that the rule-of-thumb was sufficient to make some of the model outcomes unpredictable. There was some agreement between both modelling techniques, but some differences arose – particularly when pair members were not identical in how they gained and lost energy. We argue that exploring how rules-of-thumb perform in comparison to their optimal counterparts is an important exercise for biologically validating the output of agent-based models of group behaviour. PMID:21765938

  16. Accurate and efficient calculation of excitation energies with the active-space particle-particle random phase approximation

    NASA Astrophysics Data System (ADS)

    Zhang, Du; Yang, Weitao

    2016-10-01

    An efficient method for calculating excitation energies based on the particle-particle random phase approximation (ppRPA) is presented. Neglecting the contributions from the high-lying virtual states and the low-lying core states leads to the significantly smaller active-space ppRPA matrix while keeping the error to within 0.05 eV from the corresponding full ppRPA excitation energies. The resulting computational cost is significantly reduced and becomes less than the construction of the non-local Fock exchange potential matrix in the self-consistent-field (SCF) procedure. With only a modest number of active orbitals, the original ppRPA singlet-triplet (ST) gaps as well as the low-lying single and double excitation energies can be accurately reproduced at much reduced computational costs, up to 100 times faster than the iterative Davidson diagonalization of the original full ppRPA matrix. For high-lying Rydberg excitations where the Davidson algorithm fails, the computational savings of active-space ppRPA with respect to the direct diagonalization is even more dramatic. The virtues of the underlying full ppRPA combined with the significantly lower computational cost of the active-space approach will significantly expand the applicability of the ppRPA method to calculate excitation energies at a cost of O(K⁴), with a prefactor much smaller than a single SCF Hartree-Fock (HF)/hybrid functional calculation, thus opening up new possibilities for the quantum mechanical study of excited state electronic structure of large systems.

  17. Accurate and efficient calculation of excitation energies with the active-space particle-particle random phase approximation

    DOE PAGESBeta

    Zhang, Du; Yang, Weitao

    2016-10-13

    An efficient method for calculating excitation energies based on the particle-particle random phase approximation (ppRPA) is presented. Neglecting the contributions from the high-lying virtual states and the low-lying core states leads to the significantly smaller active-space ppRPA matrix while keeping the error to within 0.05 eV from the corresponding full ppRPA excitation energies. The resulting computational cost is significantly reduced and becomes less than the construction of the non-local Fock exchange potential matrix in the self-consistent-field (SCF) procedure. With only a modest number of active orbitals, the original ppRPA singlet-triplet (ST) gaps as well as the low-lying single and double excitation energies can be accurately reproduced at much reduced computational costs, up to 100 times faster than the iterative Davidson diagonalization of the original full ppRPA matrix. For high-lying Rydberg excitations where the Davidson algorithm fails, the computational savings of active-space ppRPA with respect to the direct diagonalization is even more dramatic. The virtues of the underlying full ppRPA combined with the significantly lower computational cost of the active-space approach will significantly expand the applicability of the ppRPA method to calculate excitation energies at a cost of O(K⁴), with a prefactor much smaller than a single SCF Hartree-Fock (HF)/hybrid functional calculation, thus opening up new possibilities for the quantum mechanical study of excited state electronic structure of large systems.

  18. Minimax confidence intervals in geomagnetism

    NASA Technical Reports Server (NTRS)

    Stark, Philip B.

    1992-01-01

    The present paper uses the theory of Donoho (1989) to find lower bounds on the lengths of optimally short fixed-length confidence intervals (minimax confidence intervals) for Gauss coefficients of the field of degree 1-12 using the heat flow constraint. The bounds on optimal minimax intervals are about 40 percent shorter than Backus' intervals: no procedure for producing fixed-length confidence intervals, linear or nonlinear, can give intervals shorter than about 60 percent of the length of Backus' in this problem. While both methods rigorously account for the fact that core field models are infinite-dimensional, the application of the techniques to the geomagnetic problem involves approximations and counterfactual assumptions about the data errors, and so these results are likely to be extremely optimistic estimates of the actual uncertainty in Gauss coefficients.

  1. Enforcing the linear behavior of the total energy with hybrid functionals: Implications for charge transfer, interaction energies, and the random-phase approximation

    NASA Astrophysics Data System (ADS)

    Atalla, Viktor; Zhang, Igor Ying; Hofmann, Oliver T.; Ren, Xinguo; Rinke, Patrick; Scheffler, Matthias

    2016-07-01

    We obtain the exchange parameter of hybrid functionals by imposing the fundamental condition of a piecewise linear total energy with respect to electron number. For the Perdew-Burke-Ernzerhof (PBE) hybrid family of exchange-correlation functionals (i.e., for an approximate generalized Kohn-Sham theory) this implies that (i) the highest occupied molecular orbital corresponds to the ionization potential (I), (ii) the energy of the lowest unoccupied molecular orbital corresponds to the electron affinity (A), and (iii) the energies of the frontier orbitals are constant as a function of their occupation. In agreement with a previous study [N. Sai et al., Phys. Rev. Lett. 106, 226403 (2011), 10.1103/PhysRevLett.106.226403], we find that these conditions are met for high values of the exact exchange admixture α and illustrate their importance for the tetrathiafulvalene-tetracyanoquinodimethane complex for which standard density functional theory functionals predict artificial electron transfer. We further assess the performance for atomization energies and weak interaction energies. We find that atomization energies are significantly underestimated compared to PBE or PBE0, whereas the description of weak interaction energies improves significantly if a 1/R⁶ van der Waals correction scheme is employed.
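
    A caricature of enforcing the piecewise-linearity condition: choose the exchange fraction α so that the HOMO eigenvalue matches the negative ionization potential. In practice both curves come from self-consistent calculations at each α; the linear trends below are placeholders invented for illustration.

```python
from scipy.optimize import brentq

# "Optimal tuning" sketch: find alpha with eps_HOMO(alpha) = -IP(alpha).
# Both quantities would normally be computed by hybrid-DFT calculations at
# each alpha; here they are replaced by hypothetical smooth interpolations.
def eps_homo(alpha):
    return -6.0 - 3.5 * alpha      # eV, placeholder trend

def ip_delta_scf(alpha):
    return 8.2 + 0.3 * alpha       # eV, placeholder trend

def linearity_condition(alpha):
    return eps_homo(alpha) + ip_delta_scf(alpha)

# Root of the condition inside the physical range 0 <= alpha <= 1.
alpha_opt = brentq(linearity_condition, 0.0, 1.0)
print("tuned exact-exchange fraction:", alpha_opt)
```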

  2. Electron-Phonon Coupling and Energy Flow in a Simple Metal beyond the Two-Temperature Approximation

    NASA Astrophysics Data System (ADS)

    Waldecker, Lutz; Bertoni, Roman; Ernstorfer, Ralph; Vorberger, Jan

    2016-04-01

    The electron-phonon coupling and the corresponding energy exchange are investigated experimentally and by ab initio theory in nonequilibrium states of the free-electron metal aluminium. The temporal evolution of the atomic mean-squared displacement in laser-excited thin freestanding films is monitored by femtosecond electron diffraction. The electron-phonon coupling strength is obtained for a range of electronic and lattice temperatures from density functional theory molecular dynamics simulations. The electron-phonon coupling parameter extracted from the experimental data in the framework of a two-temperature model (TTM) deviates significantly from the ab initio values. We introduce a nonthermal lattice model (NLM) for describing nonthermal phonon distributions as a sum of thermal distributions of the three phonon branches. The contributions of individual phonon branches to the electron-phonon coupling are considered independently and found to be dominated by longitudinal acoustic phonons. Using all material parameters from first-principles calculations except the phonon-phonon coupling strength, the prediction of the energy transfer from electrons to phonons by the NLM is in excellent agreement with time-resolved diffraction data. Our results suggest that the TTM is insufficient for describing the microscopic energy flow even for simple metals like aluminium and that the determination of the electron-phonon coupling constant from time-resolved experiments by means of the TTM leads to incorrect values. In contrast, the NLM describing transient phonon populations by three parameters appears to be a sufficient model for quantitatively describing electron-lattice equilibration in aluminium. We discuss the general applicability of the NLM and provide a criterion for the suitability of the two-temperature approximation for other metals.
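
    For reference, the two-temperature model that the paper argues is insufficient can be integrated in a few lines; the coupling constant, heat capacities, and pulse parameters below are order-of-magnitude placeholders for a simple metal, not fitted values.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Two-temperature model (TTM):
#   Ce(Te) dTe/dt = -G (Te - Tl) + S(t),   Cl dTl/dt = G (Te - Tl)
gamma = 135.0        # J m^-3 K^-2, electronic heat-capacity coefficient
C_l = 2.4e6          # J m^-3 K^-1, lattice heat capacity
G = 3.0e17           # W m^-3 K^-1, electron-phonon coupling

def source(t):       # ~100 fs Gaussian laser pulse (illustrative)
    return 2e21 * np.exp(-0.5 * ((t - 0.2e-12) / 50e-15)**2)

def rhs(t, y):
    Te, Tl = y
    dTe = (-G * (Te - Tl) + source(t)) / (gamma * Te)   # Ce = gamma * Te
    dTl = G * (Te - Tl) / C_l
    return [dTe, dTl]

sol = solve_ivp(rhs, (0.0, 5e-12), [300.0, 300.0], max_step=5e-15)
print("final Te, Tl:", sol.y[0, -1], sol.y[1, -1])
```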

  3. The use of two-stream approximations for the parameterization of solar radiative energy fluxes through vegetation

    SciTech Connect

    Joseph, J.H.; Iaquinta, J.; Pinty, B.

    1996-10-01

    Two-stream approximations have been used widely and for a long time in the field of radiative transfer through vegetation in various contexts, and in the last 10 years also to model the hemispheric reflectance of vegetated surfaces in numerical models of the earth-atmosphere system. For a plane-parallel and turbid vegetation medium, the existence of rotational invariance allows the application of a conventional two-stream approximation to the phase function, based on an expansion in Legendre polynomials. Three conditions have to be fulfilled to make this reduction possible in the case of vegetation. The scattering function of single leaves must be bi-Lambertian, the azimuthal distribution of leaf normals must be uniform, and the azimuthally averaged Leaf Area Normal Distribution (LAND) must be either uniform or planophile. The first and second assumptions have been shown to be acceptable by other researchers and, in fact, are usually assumed explicitly or implicitly when dealing with radiative transfer through canopies. The third one, on the shape of the azimuthally averaged LAND, although investigated before, is subjected to a detailed sensitivity test in this study, using a set of synthetic LANDs as well as experimental data for 17 plant canopies. It is shown that the radiative energy flux equations are relatively insensitive to the exact form of the LAND. The experimental Ross functions and hemispheric reflectances lie between those for the synthetic cases of planophile and erectophile LANDs. However, only the uniform and planophile LANDs lead to canopy hemispheric reflectances which are markedly different from one another. The analytical two-stream solutions for either the planophile or the uniform LAND case may be used to model the radiative fluxes through plant canopies in the solar spectral range. The choice between the two for any particular case must be made on the basis of experimental data. 30 refs., 5 figs.

  4. Fourth-grade children's dietary recall accuracy for energy intake at school meals differs by social desirability and body mass index percentile in a study concerning retention interval.

    PubMed

    Guinn, Caroline H; Baxter, Suzanne D; Royer, Julie A; Hardin, James W; Mackelprang, Alyssa J; Smith, Albert F

    2010-05-01

    Data from a study concerning retention interval and school-meal observation on children's dietary recalls were used to investigate relationships of social desirability score (SDS) and body mass index percentile (BMI%) to recall accuracy for energy for observed (n = 327) children, and to reported energy for observed and unobserved (n = 152) children. Report rates (reported/observed) correlated negatively with SDS and BMI%. Correspondence rates (correctly reported/observed) correlated negatively with SDS. Inflation ratios (overreported/observed) correlated negatively with BMI%. The relationship between reported energy and each of SDS and BMI% did not depend on observation status. Studies utilizing children's dietary recalls should assess SDS and BMI%. PMID:20460407

  5. Saddlepoint distribution function approximations in biostatistical inference.

    PubMed

    Kolassa, J E

    2003-01-01

    Applications of saddlepoint approximations to distribution functions are reviewed. Calculations are provided for marginal distributions and conditional distributions. These approximations are applied to problems of testing and generating confidence intervals, particularly in canonical exponential families.
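
    As a worked example of a saddlepoint distribution-function approximation (this standard textbook case is ours, not one from the review), the Lugannani-Rice formula for the upper tail of the mean of n unit exponentials can be checked against the exact gamma tail.

```python
import numpy as np
from scipy.stats import norm, gamma

# Mean of n Exp(1) variables: CGF of the mean is K(s) = -n*log(1 - s/n).
n = 5

def saddlepoint_sf(x):
    """Lugannani-Rice upper-tail approximation, valid here for x > 1."""
    s_hat = n * (1.0 - 1.0 / x)       # solves K'(s) = x
    K = n * np.log(x)                 # K(s_hat)
    K2 = x * x / n                    # K''(s_hat)
    w = np.sign(s_hat) * np.sqrt(2.0 * (s_hat * x - K))
    u = s_hat * np.sqrt(K2)
    return norm.sf(w) + norm.pdf(w) * (1.0 / u - 1.0 / w)

# Exact: the mean of n Exp(1) is Gamma(n, scale=1/n).
for x in (1.5, 2.0, 3.0):
    print(x, saddlepoint_sf(x), gamma.sf(x, a=n, scale=1.0 / n))
```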

  6. Energy-averaged electron-ion momentum transport cross section in the Born Approximation and Debye-Hückel potential: Comparison with the cut-off theory

    NASA Astrophysics Data System (ADS)

    Zaghloul, Mofreh R.; Bourham, Mohamed A.; Doster, J. Michael

    2000-02-01

    An exact analytical expression for the energy-averaged electron-ion momentum transport cross section in the Born approximation and Debye-Hückel exponentially screened potential has been derived and compared with the formulae given by other authors. A quantitative comparison between cut-off theory and quantum mechanical perturbation theory has been presented. Based on results from the Born approximation and Spitzer's formula, a new approximate formula for the quantum Coulomb logarithm has been derived and shown to be more accurate than previous expressions.
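
    The closed-form Born/Debye-Hückel result itself is not reproduced here; the sketch below shows only the generic energy-averaging step over a Maxwellian weight, with a placeholder screened-Coulomb cross-section shape.

```python
import numpy as np
from scipy.integrate import quad

# Energy averaging of a momentum-transport cross section over a Maxwellian:
#   <sigma> = Int sigma(E) E exp(-E/T) dE / Int E exp(-E/T) dE.
# The cross-section form below is a generic screened-Coulomb placeholder,
# not the closed-form Born/Debye-Hueckel result derived in the paper.
T = 10.0   # temperature in energy units (illustrative)

def sigma_tr(E, E_screen=0.05):
    # ~ E^-2 Coulomb fall-off moderated by screening at low energy
    return np.log(1.0 + E / E_screen) / (E + E_screen)**2

num, _ = quad(lambda E: sigma_tr(E) * E * np.exp(-E / T), 0.0, np.inf)
den, _ = quad(lambda E: E * np.exp(-E / T), 0.0, np.inf)
print("energy-averaged cross section:", num / den)
```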

  7. Estimating the Gibbs energy of hydration from molecular dynamics trajectories obtained by integral equations of the theory of liquids in the RISM approximation

    NASA Astrophysics Data System (ADS)

    Tikhonov, D. A.; Sobolev, E. V.

    2011-04-01

    A method of integral equations of the theory of liquids in the reference interaction site model (RISM) approximation is used to estimate the Gibbs energy averaged over equilibrium trajectories computed by molecular mechanics. The peptide oxytocin is selected as the object of interest. The Gibbs energy is calculated using all chemical potential formulas introduced in the RISM approach for the excess chemical potential of solvation and is compared with estimates from the generalized Born model. Some formulas are shown to give the wrong sign of the Gibbs energy change when the peptide passes from the gas phase into the water environment; the other formulas give overestimated Gibbs energy changes with the right sign. Note that allowance for the repulsive correction in the approximate analytical expressions for the Gibbs energy derived by thermodynamic perturbation theory is not a remedy.

  8. Discussion on the energy content of the galactic dark matter Bose-Einstein condensate halo in the Thomas-Fermi approximation

    SciTech Connect

    De Souza, J.C.C.; Pires, M.O.C. E-mail: marcelo.pires@ufabc.edu.br

    2014-03-01

    We show that the galactic dark matter halo, considered to be composed of an axionlike-particle Bose-Einstein condensate [6] trapped by a self-gravitating potential [5], may be stable in the Thomas-Fermi approximation provided that appropriate choices for the dark matter particle mass and scattering length are made. The demonstration is performed by means of the calculation of the potential, kinetic and self-interaction energy terms of a galactic halo described by a Boehmer-Harko density profile. We discuss the validity of the Thomas-Fermi approximation for the halo system, and show that the kinetic energy contribution is indeed negligible.

  9. Applications of the equivalent cores approximation. The determination of proton affinities and isocyanide-to-nitrile isomerization energies from core binding energies

    SciTech Connect

    Beach, D.B.; Eyermann, C.J.; Smit, S.P.; Xiang, S.F.; Jolly, W.L.

    1984-02-08

    Core binding energies were determined for the following gas-phase molecules: CH₂CCH₂, CH₂CO, BH₃CO, HNCO, CH₃CN, CH₃NC, NH₂CN, t-BuNC, and C₆H₅NC. By use of the equivalent cores approximation, these data and data from the literature were used to calculate the proton affinities of N₂O, CO₂, HCCF, NCF, NH₂CN, CH₂N₂, HNCO, CH₂CO, HN₃, CH₃NC, and CH₃CN with an estimated accuracy of ±7 kcal mol⁻¹. By a similar method, the isocyanide-to-nitrile isomerization energies for CH₃NC, t-BuNC, and C₆H₅NC were calculated to be −30, −27 and −28 kcal mol⁻¹, respectively.
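
    A hedged numerical illustration of the equivalent-cores logic (all numbers below are placeholders, not the paper's measurements): a shift in core binding energy translates, with opposite sign, into a shift in proton affinity relative to a reference molecule.

```python
# Equivalent-cores estimate of a proton affinity from core-binding-energy
# shifts: protonation and core ionization at the same site create
# chemically similar cores, so PA(M) ~ PA(ref) - [EB(M) - EB(ref)].
# All numbers are placeholders, not the paper's measured values.
PA_ref = 180.0        # kcal/mol, proton affinity of a reference molecule
EB_ref = 405.0        # eV, N 1s binding energy of the reference
EB_M = 406.3          # eV, N 1s binding energy of the molecule of interest

EV_TO_KCAL = 23.0605  # 1 eV = 23.0605 kcal/mol
PA_M = PA_ref - (EB_M - EB_ref) * EV_TO_KCAL
print(f"estimated PA: {PA_M:.1f} kcal/mol")   # ~150 kcal/mol here
```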

  10. Calculation of the Energy-Band Structure of the Kronig-Penney Model Using the Nearly-Free and Tightly-Bound-Electron Approximations

    ERIC Educational Resources Information Center

    Wetsel, Grover C., Jr.

    1978-01-01

    Calculates the energy-band structure of noninteracting electrons in a one-dimensional crystal using exact and approximate methods for a rectangular-well atomic potential. A comparison of the two solutions as a function of potential-well depth and ratio of lattice spacing to well width is presented. (Author/GA)
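
    A compact way to reproduce this kind of band-structure calculation is to scan the standard Kronig-Penney dispersion relation for a periodic rectangular well and flag the energy intervals where a real Bloch wave vector exists; the parameters are illustrative (ħ = m = 1).

```python
import numpy as np

# Kronig-Penney bands for a periodic rectangular-well potential: wells of
# depth V0 and width w separated by flat regions of width b.  For
# -V0 < E < 0 the textbook dispersion relation reads
#   cos(K a) = cos(k w) cosh(q b) + (q^2 - k^2)/(2 k q) sin(k w) sinh(q b)
# with k = sqrt(2 (E + V0)), q = sqrt(-2 E), a = w + b; |RHS| <= 1 marks
# an allowed band.
V0, w, b = 10.0, 1.0, 0.5

def rhs(E):
    k = np.sqrt(2.0 * (E + V0))
    q = np.sqrt(-2.0 * E)
    return (np.cos(k * w) * np.cosh(q * b)
            + (q**2 - k**2) / (2.0 * k * q) * np.sin(k * w) * np.sinh(q * b))

E = np.linspace(-V0 + 1e-6, -1e-6, 20_000)
allowed = np.abs(rhs(E)) <= 1.0

# Print the band edges (energy intervals where propagating states exist).
pad = np.concatenate([[False], allowed, [False]])
edges = np.flatnonzero(np.diff(pad.astype(int)))
for lo, hi in zip(edges[::2], edges[1::2]):
    print(f"band: {E[lo]:.3f} .. {E[hi - 1]:.3f}")
```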

  11. Random-phase approximation correlation energies from Lanczos chains and an optimal basis set: Theory and applications to the benzene dimer

    NASA Astrophysics Data System (ADS)

    Rocca, Dario

    2014-05-01

    A new ab initio approach is introduced to compute the correlation energy within the adiabatic connection fluctuation dissipation theorem in the random phase approximation. First, an optimally small basis set to represent the response functions is obtained by diagonalizing an approximate dielectric matrix containing the kinetic energy contribution only. Then, the Lanczos algorithm is used to compute the full dynamical dielectric matrix and the correlation energy. The convergence issues with respect to the number of empty states or the dimension of the basis set are avoided and the dynamical effects are easily taken into account. To demonstrate the accuracy and efficiency of this approach the binding curves for three different configurations of the benzene dimer are computed: T-shaped, sandwich, and slipped parallel.

  12. Random-phase approximation correlation energies from Lanczos chains and an optimal basis set: theory and applications to the benzene dimer.

    PubMed

    Rocca, Dario

    2014-05-14

    A new ab initio approach is introduced to compute the correlation energy within the adiabatic connection fluctuation dissipation theorem in the random phase approximation. First, an optimally small basis set to represent the response functions is obtained by diagonalizing an approximate dielectric matrix containing the kinetic energy contribution only. Then, the Lanczos algorithm is used to compute the full dynamical dielectric matrix and the correlation energy. The convergence issues with respect to the number of empty states or the dimension of the basis set are avoided and the dynamical effects are easily taken into account. To demonstrate the accuracy and efficiency of this approach the binding curves for three different configurations of the benzene dimer are computed: T-shaped, sandwich, and slipped parallel.

  13. The diffuse galactic gamma radiation - The Compton contribution and component separation by energy interval and galactic coordinates

    NASA Technical Reports Server (NTRS)

    Kniffen, D. A.; Fichtel, C. E.

    1981-01-01

    The diffuse high-energy galactic gamma radiation to be expected from cosmic ray interactions with matter and photons is considered with particular emphasis on the contribution of Compton radiation from cosmic ray electrons. The intensity, spectrum and spatial distribution of the expected galactic gamma radiation are estimated based on models of the matter, cosmic ray and photon distributions to take into account the contributions of bremsstrahlung, high-energy cosmic-ray nucleon and interstellar matter interactions as well as Compton interactions between cosmic ray electrons and background photons. Results suggest that the Compton gamma ray contribution from cosmic ray electron interactions with galactic visible and infrared photons is substantially larger than previously believed. Analysis of the energy spectra and latitude dependence of the various sources reveals that the Compton radiation, bremsstrahlung and nuclear cosmic ray-matter interaction radiation should be separable, with Compton radiation dominating at energies from 10 to 100 MeV at galactic latitudes greater than several degrees. Results demonstrate the potential of gamma ray observations in studies of galactic structure, cosmic ray electrons and galactic photon density.

  14. On the errors of local density (LDA) and generalized gradient (GGA) approximations to the Kohn-Sham potential and orbital energies.

    PubMed

    Gritsenko, O V; Mentel, Ł M; Baerends, E J

    2016-05-28

    In spite of the high quality of exchange-correlation energies E_xc obtained with the generalized gradient approximations (GGAs) of density functional theory, their xc potentials v_xc are strongly deficient, yielding upshifts of ca. 5 eV in the orbital energy spectrum (on the order of 50% of high-lying valence orbital energies). The GGAs share this deficiency with the local density approximation (LDA). We argue that this error is not caused by the incorrect long-range asymptotics of v_xc or by self-interaction error. It arises from incorrect density dependencies of LDA and GGA exchange functionals leading to incorrect (too repulsive) functional derivatives (i.e., response parts of the potentials). The v_xc potential is partitioned into the potential of the xc hole v_xc^hole (twice the xc energy density ε_xc), which determines E_xc, and the response potential v_resp, which does not contribute to E_xc explicitly. The substantial upshift of LDA/GGA orbital energies is due to a too repulsive LDA exchange response potential v_x^resp(LDA) in the bulk region. Retaining the LDA exchange hole potential plus the B88 gradient correction to it but replacing the response parts of these potentials by the model orbital-dependent response potential v_x^resp(GLLB) of Gritsenko et al. [Phys. Rev. A 51, 1944 (1995)], which has the proper step-wise form, improves the orbital energies by more than an order of magnitude. Examples are given for the prototype molecules: dihydrogen, dinitrogen, carbon monoxide, ethylene, formaldehyde, and formic acid.

  15. Ensemble v-representable ab initio density-functional calculation of energy and spin in atoms: A test of exchange-correlation approximations

    SciTech Connect

    Kraisler, Eli; Makov, Guy; Kelson, Itzhak

    2010-10-15

    The total energies and the spin states for atoms and their first ions with Z=1-86 are calculated within the local spin-density approximation (LSDA) and the generalized-gradient approximation (GGA) to the exchange-correlation (xc) energy in density-functional theory. Atoms and ions for which the ground-state density is not pure-state v-representable are treated as ensemble v-representable with fractional occupations of the Kohn-Sham system. A recently developed algorithm which searches over ensemble v-representable densities [E. Kraisler et al., Phys. Rev. A 80, 032115 (2009)] is employed in calculations. It is found that for many atoms, the ionization energies obtained with the GGA are only modestly improved with respect to experimental data, as compared to the LSDA. However, even in those groups of atoms where the improvement is systematic, there remains a non-negligible difference with respect to the experiment. The ab initio electronic configuration in the Kohn-Sham reference system does not always equal the configuration obtained from the spectroscopic term within the independent-electron approximation. It was shown that use of the latter configuration can prevent the energy-minimization process from converging to the global minimum, e.g., in lanthanides. The spin values calculated ab initio fit the experiment for most atoms and are almost unaffected by the choice of the xc functional. Among the systems with incorrectly obtained spin, there exist some cases (e.g., V, Pt) for which the result is found to be stable with respect to small variations in the xc approximation. These findings suggest a necessity for a significant modification of the exchange-correlation functional, probably of a nonlocal nature, to accurately describe such systems.

  16. Comparative assessment of density functional methods for evaluating essential parameters to simulate SERS spectra within the excited state energy gradient approximation

    NASA Astrophysics Data System (ADS)

    Mohammadpour, Mozhdeh; Jamshidi, Zahra

    2016-05-01

    The challenge of reproducing and interpreting the resonance Raman properties of molecules interacting with metal clusters has prompted the present research initiative. Resonance Raman spectra based on the time-dependent gradient approximation are examined in the framework of density functional theory using different methods for representing the exchange-correlation functional. In this work the performance of different XC functionals in the prediction of ground-state properties, excited-state energies, and gradients is compared and discussed. Resonance Raman properties based on the time-dependent gradient approximation for the strongly low-lying charge-transfer states are calculated and compared for the different methods. We draw the following conclusions: (1) for calculating the binding energy and ground-state geometry, dispersion-corrected functionals give the best performance in comparison to ab initio calculations; (2) GGA and meta-GGA functionals give good accuracy in calculating vibrational frequencies; (3) excited-state energies determined by hybrid and range-separated hybrid functionals are in good agreement with EOM-CCSD calculations; and (4) in calculating resonance Raman properties GGA functionals give good and reasonable performance in comparison to the experiment; however, calculating the excited-state gradient by using the hybrid functional on the hessian of the GGA improves the results of the hybrid functional significantly. Finally, we conclude that the agreement of charge-transfer surface-enhanced resonance Raman spectra with experiment is improved significantly by using the excited-state gradient approximation.

  18. Few-particle generation channels in inelastic hadron-nucleus interactions at energies of approximately 400 GeV

    NASA Technical Reports Server (NTRS)

    Tsomaya, P. V.

    1985-01-01

    The behavior of few-particle generation channels in the interaction of hadrons with CH2, Al, Cu, and Pb nuclei at a mean energy of 400 GeV was investigated. The values of the coherent production cross sections β_coh for the investigated nuclei are given. The dependence between coherent and noncoherent events is investigated. The results are compared with simulations based on the additive quark model (AQM).

  19. Total-energy calculations for crystalline approximants of quasicrystalline structures: Occupation of the centers of the icosahedral units

    NASA Astrophysics Data System (ADS)

    Sikka, S. K.; Sharma, Surinder M.; Chidambaram, R.

    1993-02-01

    Motivated by recent positron-annihilation experiments on quasicrystalline materials, we have investigated whether the centers of packing units [of Mackay's icosahedron for the Al-Mn system and cuboctahedron for the Mg-Zn (Al) system] are empty or filled. Our pseudopotential-based total-energy calculations suggest that the centers are occupied, in agreement with experimental positron-annihilation results. Possible reasons for discrepancies with the diffraction results are discussed.

  20. ENERGY CONSERVATION AND GRAVITY WAVES IN SOUND-PROOF TREATMENTS OF STELLAR INTERIORS. PART I. ANELASTIC APPROXIMATIONS

    SciTech Connect

    Brown, Benjamin P.; Zweibel, Ellen G.; Vasil, Geoffrey M.

    2012-09-10

    Typical flows in stellar interiors are much slower than the speed of sound. To follow the slow evolution of subsonic motions, various sound-proof equations are in wide use, particularly in stellar astrophysical fluid dynamics. These low-Mach number equations include the anelastic equations. Generally, these equations are valid in nearly adiabatically stratified regions like stellar convection zones, but may not be valid in the sub-adiabatic, stably stratified stellar radiative interiors. Understanding the coupling between the convection zone and the radiative interior is a problem of crucial interest, with strong implications for solar and stellar dynamo theories, since the interface between the two, called the tachocline in the Sun, plays a central role in many of those theories. Here, we study the properties of gravity waves in stably stratified atmospheres. In particular, we explore how gravity waves are handled in various sound-proof equations. We find that some anelastic treatments fail to conserve energy in stably stratified atmospheres, instead conserving pseudo-energies that depend on the stratification, and we demonstrate this numerically. One anelastic equation set does conserve energy in all atmospheres, and we provide recommendations for converting low-Mach number anelastic codes to this set of equations.

  1. Analytic energy gradients for the coupled-cluster singles and doubles method with the density-fitting approximation

    NASA Astrophysics Data System (ADS)

    Bozkaya, Uǧur; Sherrill, C. David

    2016-05-01

    An efficient implementation is presented for analytic gradients of the coupled-cluster singles and doubles (CCSD) method with the density-fitting approximation, denoted DF-CCSD. Frozen core terms are also included. When applied to a set of alkanes, the DF-CCSD analytic gradients are significantly accelerated compared to conventional CCSD for larger molecules. The efficiency of our DF-CCSD algorithm arises from the acceleration of several different terms, designated as the "gradient terms": computation of particle density matrices (PDMs) and the generalized Fock matrix (GFM), solution of the Z-vector equation, formation of the relaxed PDMs and GFM, back-transformation of the PDMs and GFM to the atomic orbital (AO) basis, and evaluation of gradients in the AO basis. For the largest member of the alkane set (C10H22), the computational times for the gradient terms (with the cc-pVTZ basis set) are 2582.6 (CCSD) and 310.7 (DF-CCSD) min, respectively, a speedup of more than 8-fold. For the gradient-related terms, the DF approach avoids the use of four-index electron repulsion integrals. Based on our previous study [U. Bozkaya, J. Chem. Phys. 141, 124108 (2014)], our formalism completely avoids construction or storage of the 4-index two-particle density matrix (TPDM), using instead 2- and 3-index TPDMs. The DF approach introduces negligible errors for equilibrium bond lengths and harmonic vibrational frequencies.
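
    The structure of the density-fitting factorization itself is compact enough to sketch: the 4-index integrals are assembled from 3-index quantities as (pq|rs) ≈ Σ_PQ (pq|P)[J⁻¹]_PQ (Q|rs), where J is the Coulomb metric of the auxiliary basis. Below is a minimal numpy illustration with random, non-physical tensors; the dimensions and array names are assumptions for illustration, not the DF-CCSD implementation.

        import numpy as np

        # Illustrative dimensions (assumptions): n orbitals, naux auxiliary functions.
        n, naux = 8, 40
        rng = np.random.default_rng(0)

        # Random 3-index integrals (pq|P) and a random SPD Coulomb metric J_PQ.
        pqP = rng.standard_normal((n, n, naux))
        A = rng.standard_normal((naux, naux))
        J = A @ A.T + naux * np.eye(naux)  # shifted to be safely positive definite

        # B^P_pq = sum_Q (pq|Q) [L^{-T}]_QP with J = L L^T, so that
        # (pq|rs) ~ sum_P B^P_pq B^P_rs reproduces the J^{-1} contraction.
        Jinv_half = np.linalg.inv(np.linalg.cholesky(J)).T
        B = np.einsum('pqQ,QP->pqP', pqP, Jinv_half)

        # Density-fitted 4-index integrals; a real code never stores these in full.
        eri_df = np.einsum('pqP,rsP->pqrs', B, B)
        print(eri_df.shape)  # (8, 8, 8, 8)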

  2. Efficient implementation of the analytic second derivatives of Hartree-Fock and hybrid DFT energies: a detailed analysis of different approximations

    NASA Astrophysics Data System (ADS)

    Bykov, Dmytro; Petrenko, Taras; Izsák, Róbert; Kossmann, Simone; Becker, Ute; Valeev, Edward; Neese, Frank

    2015-07-01

    In this paper, various implementations of the analytic Hartree-Fock and hybrid density functional energy second derivatives are studied. An approximation-free four-centre implementation is presented, and its accuracy is rigorously analysed in terms of self-consistent field (SCF), coupled-perturbed SCF (CP-SCF) convergence and prescreening criteria. The CP-SCF residual norm convergence threshold turns out to be the most important of these. Final choices of convergence thresholds are made such that an accuracy of the vibrational frequencies of better than 5 cm-1 compared to the numerical noise-free results is obtained, even for the highly sensitive low frequencies (<100-200 cm-1). The effects of the choice of numerical grid for density functional exchange-correlation integrations are studied, and various weight derivative schemes are analysed in detail. In the second step of the work, approximations are introduced in order to speed up the computation without compromising its accuracy. To this end, the accuracy and efficiency of the resolution of identity approximation for the Coulomb terms and the semi-numerical chain of spheres approximation to the exchange terms are carefully analysed. It is shown that the largest performance improvements are realised if Hartree-Fock exchange is absent (pure density functionals) or, otherwise, if the exchange terms in the CP-SCF step of the calculation are approximated by the COSX method in conjunction with a small integration grid. Default values for all the involved truncation parameters are suggested. For vancomycin (176 atoms and 3593 basis functions), the RIJCOSX Hessian calculation with the B3LYP functional and the def2-TZVP basis set takes ∼3 days using 16 Intel® Xeon® 2.60 GHz processors, with the COSX algorithm reaching a net parallelisation scaling of 11.9; this is at least ∼20 times faster than the calculation without the RIJCOSX approximation.

  3. The Vertical-current Approximation Nonlinear Force-free Field Code—Description, Performance Tests, and Measurements of Magnetic Energies Dissipated in Solar Flares

    NASA Astrophysics Data System (ADS)

    Aschwanden, Markus J.

    2016-06-01

    In this work we provide an updated description of the Vertical-Current Approximation Nonlinear Force-Free Field (VCA-NLFFF) code, which is designed to measure the evolution of the potential, non-potential, and free energies, and the magnetic energies dissipated during solar flares. This code provides a complementary and alternative method to existing traditional NLFFF codes. The chief advantages of the VCA-NLFFF code over traditional NLFFF codes are the circumvention of the unrealistic assumption of a force-free photosphere in the magnetic field extrapolation method, the capability to minimize the misalignment angles between observed coronal loops (or chromospheric fibril structures) and theoretical model field lines, and computational speed. In performance tests of the VCA-NLFFF code against the NLFFF code of Wiegelmann, we find agreement in the potential, non-potential, and free energy within a factor of ≲ 1.3, but the Wiegelmann code yields on average a factor of 2 lower flare energies. The VCA-NLFFF code is found to detect decreases in flare energies in most X-, M-, and C-class flares. The successful detection of energy decreases during a variety of flares with the VCA-NLFFF code indicates that current-driven twisting and untwisting of the magnetic field is an adequate model to quantify the storage of magnetic energies in active regions and their dissipation during flares. The VCA-NLFFF code is also publicly available in SolarSoftWare.

  4. Calculation of the energy distribution of a fast electron in a helium beam plasma by numerical methods with substantiation of the multigroup approximation

    NASA Astrophysics Data System (ADS)

    Punkevich, B. S.; Stal, N. L.; Stepanov, B. M.; Khokhlov, V. D.

    The possibility of using the multigroup method to determine the physical properties of a beam plasma is substantiated, and the effectiveness of the method is analyzed. The results obtained are compared with solutions of rigorous steady-state kinetic equations and with approximate equations corresponding to a model of continuous slowdown and its variants. It is shown that, in the case of the complete slowdown of a fast electron and all the secondary electrons produced by it in He, 51 percent of the primary-electron energy is expended on the ionization of helium atoms, 16 percent is converted into thermal energy of the atoms, and 33 percent is expended on atomic excitation. Of this latter 33 percent, 21 percent is expended on the excitation of energy levels corresponding to optically allowed transitions.

  5. Convergence of the variational parameter without convergence of the energy in Quantum Monte Carlo (QMC) calculations using the Stochastic Gradient Approximation

    NASA Astrophysics Data System (ADS)

    Nissenbaum, Daniel; Lin, Hsin; Barbiellini, Bernardo; Bansil, Arun

    2009-03-01

    To study the performance of the Stochastic Gradient Approximation (SGA) for variational Quantum Monte Carlo methods, we have considered lithium nano-clusters [1] described by Hartree-Fock wavefunctions multiplied by two-body Jastrow factors with a single variational parameter b. Even as the system size increases, we have shown the feasibility of obtaining an accurate value of b that minimizes the energy without an explicit calculation of the energy itself. The SGA algorithm is efficient because an analytic gradient formula is used and because the statistical noise in the gradient is smaller than in the energy [2]. Interestingly, in this scheme the absolute value of the gradient is less important than its sign. Work supported in part by U.S. DOE. [1] D. Nissenbaum et al., Phys. Rev. B 76, 033412 (2007). [2] A. Harju, J. Low. Temp. Phys. 140, 181 (2005).
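
    The essence of such an update can be conveyed with a toy one-parameter minimization in which only noisy gradient estimates are available; the quadratic energy surface and the step-size schedule below are illustrative assumptions, not the Jastrow-parameter energy of the cited work.

        import random

        def noisy_grad(b, b_min=1.25, noise=0.5):
            # Noisy estimate of dE/db for a toy energy E(b) = (b - b_min)^2;
            # only the gradient is ever sampled, never the energy itself.
            return 2.0 * (b - b_min) + random.gauss(0.0, noise)

        b = 0.0
        for k in range(1, 2001):
            b -= (0.5 / k) * noisy_grad(b)  # decaying steps average out the noise
        print(round(b, 2))                  # converges near the minimum at 1.25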

  6. On the errors of local density (LDA) and generalized gradient (GGA) approximations to the Kohn-Sham potential and orbital energies

    NASA Astrophysics Data System (ADS)

    Gritsenko, O. V.; Mentel, Ł. M.; Baerends, E. J.

    2016-05-01

    In spite of the high quality of exchange-correlation energies E_xc obtained with the generalized gradient approximations (GGAs) of density functional theory, their xc potentials v_xc are strongly deficient, yielding upshifts of ca. 5 eV in the orbital energy spectrum (of the order of 50% of high-lying valence orbital energies). The GGAs share this deficiency with the local density approximation (LDA). We argue that this error is not caused by the incorrect long-range asymptotics of v_xc or by self-interaction error. It arises from incorrect density dependencies of LDA and GGA exchange functionals leading to incorrect (too repulsive) functional derivatives (i.e., response parts of the potentials). The v_xc potential is partitioned into the potential of the xc hole v_xc^hole (twice the xc energy density ε_xc), which determines E_xc, and the response potential v_resp, which does not contribute to E_xc explicitly. The substantial upshift of LDA/GGA orbital energies is due to a too repulsive LDA exchange response potential v_x,resp^LDA in the bulk region. Retaining the LDA exchange hole potential plus the B88 gradient correction to it, but replacing the response parts of these potentials by the model orbital-dependent response potential v_x,resp^GLLB of Gritsenko et al. [Phys. Rev. A 51, 1944 (1995)], which has the proper step-wise form, improves the orbital energies by more than an order of magnitude. Examples are given for the prototype molecules: dihydrogen, dinitrogen, carbon monoxide, ethylene, formaldehyde, and formic acid.

  7. Corrigendum to "High-energy limit of quantum electrodynamics beyond Sudakov approximation" [Phys. Lett. B 745 (2015) 69]

    NASA Astrophysics Data System (ADS)

    Penin, Alexander A.

    2015-12-01

    There is a sign misprint in the third line of Eq. (7), which should read φ_c(η, ξ) = exp[-xη(η + 2ξ - 2)]. In the analysis of the high-order corrections, the double-logarithmic contribution due to the soft photon exchange between the soft and external electron lines, Fig. 2(d), has not been taken into account. This contribution results in an additional factor φ_d(η₂)φ_d(ξ₁) in the integrand of Eq. (6), where φ_d(η) = exp[-x(1 - η)²]. It changes the coefficients of the series (9). The corrected coefficients are listed in a new Table 1. The asymptotic behavior of F₁⁽¹⁾ at large x given by Eqs. (10), (11), (12) is modified. The numerical result for the function f(x) = -3F₁⁽¹⁾ is presented in Fig. 3. The function grows rapidly at x ∼ 1 and then monotonically approaches the limit f(∞) = 1.33496…, corresponding to F₁⁽¹⁾(x = ∞) = -0.444988…. Thus the power-suppressed amplitude is enhanced by the double-logarithmic corrections at high energy, though the enhancement is not as significant as suggested by Eqs. (11), (12).

  8. GW approximation study of late transition metal oxides: Spectral function clusters around Fermi energy as the mechanism behind smearing in momentum density

    NASA Astrophysics Data System (ADS)

    Khidzir, S. M.; Ibrahim, K. N.; Wan Abdullah, W. A. T.

    2016-05-01

    Momentum density studies are a key tool in Fermiology, in which electronic structure calculations are the integral underlying methodology. Agreement between experimental techniques, such as Compton scattering, and conventional density functional calculations for late transition metal oxides (TMOs) has proven elusive. In this work, we report improved momentum densities of late TMOs using the GW approximation (GWA), which appears to smear the momentum density, creating occupancy above the Fermi break. The smearing is found to be largest for NiO, and we show that this is due to more spectral weight clustering around the Fermi energy of NiO than around the Fermi energies of FeO and CoO. This highlights the importance of the positioning of the Fermi energy and the role played by the self-energy term in broadening the spectra; we elaborate on this point by comparing the GWA momentum densities to their LDA counterparts and conclude that the larger difference in the intermediate momentum region shows that the self-energy has its largest effect there. Finally, we analyze the quasiparticle renormalization factor and conclude that the increase in d-orbital electron count from FeO to NiO plays a vital role in changing the magnitude of electron correlation via the self-energy.

  9. Generalized Gradient Approximations of the Noninteracting Kinetic Energy from the Semiclassical Atom Theory: Rationalization of the Accuracy of the Frozen Density Embedding Theory for Nonbonded Interactions.

    PubMed

    Laricchia, S; Fabiano, E; Constantin, L A; Della Sala, F

    2011-08-01

    We present a new class of noninteracting kinetic energy (KE) functionals, derived from the semiclassical-atom theory. These functionals are constructed using the link between exchange and kinetic energies and employ a generalized gradient approximation (GGA) for the enhancement factor, namely, the Perdew-Burke-Ernzerhof (PBE) one. Two of them, named APBEK and revAPBEK, recover in the slowly varying density limit the modified second-order gradient (MGE2) expansion of the KE, which is valid for a neutral atom with a large number of electrons. APBEK contains no empirical parameters, while revAPBEK has one empirical parameter derived from exchange energies, which leads to a higher degree of nonlocality. The other two functionals, APBEKint and revAPBEKint, modify the APBEK and revAPBEK enhancement factors, respectively, to recover the second-order gradient expansion (GE2) of the homogeneous electron gas. We first benchmarked the total KE of atoms/ions and jellium spheres/surfaces: we found that functionals based on the MGE2 are as accurate as the current state-of-the-art KE functionals, which contain several empirical parameters. Then, we verified the accuracy of these new functionals in the context of the frozen density embedding (FDE) theory. We benchmarked 20 systems with nonbonded interactions, and we considered embedding errors in the energy and density. We found that all of the PBE-like functionals give accurate and similar embedded densities, but the revAPBEK and revAPBEKint functionals show significantly superior accuracy for the embedded energy, outperforming the current state-of-the-art GGA approaches. While the revAPBEK functional is more accurate than revAPBEKint, APBEKint is better than APBEK. To rationalize this performance, we introduce the reduced-gradient decomposition of the nonadditive kinetic energy, and we discuss how systems with different interactions can be described with the same functional form.

  10. Phenomenological applications of rational approximants

    NASA Astrophysics Data System (ADS)

    Gonzàlez-Solís, Sergi; Masjuan, Pere

    2016-08-01

    We illustrate the power of Padé approximants (PAs) as a summation method and explore one of their extensions, the so-called quadratic approximants (QAs), to access both the space-like and (low-energy) time-like (TL) regions. As an introductory and pedagogical exercise, the function ln(1+z)/z is approximated by both kinds of approximants. Then, PAs are applied to predict pseudoscalar meson Dalitz decays and to extract Vub from semileptonic B → πℓνℓ decays. Finally, the π vector form factor in the TL region is explored using QAs.
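
    A minimal sketch of the introductory exercise: building a Padé approximant to ln(1+z)/z from its Taylor coefficients. scipy's pade helper is used here; the paper's quadratic approximants are a separate construction not covered by this routine.

        import numpy as np
        from scipy.interpolate import pade

        # Taylor coefficients of ln(1+z)/z = 1 - z/2 + z^2/3 - z^3/4 + ...
        c = [(-1) ** k / (k + 1) for k in range(5)]
        p, q = pade(c, 2)      # [2/2] Pade approximant, p and q are poly1d

        z = 1.0
        print(p(z) / q(z))     # ~0.693333, vs the exact ln(2) = 0.693147...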

  11. Tests of Exchange-Correlation Functional Approximations Against Reliable Experimental Data for Average Bond Energies of 3d Transition Metal Compounds.

    PubMed

    Zhang, Wenjing; Truhlar, Donald G; Tang, Mingsheng

    2013-09-10

    One of the greatest challenges for the theoretical study of transition-metal-containing compounds is the treatment of intrinsically multiconfigurational atoms and molecules, which require a multireference (MR) treatment in wave function theory. The accuracy of density functional theory for such systems is still being explored. Here, we continue that exploration by presenting the predictions of 42 exchange-correlation (xc) functionals of 11 types [local spin density approximation (LSDA), generalized gradient approximation (GGA), nonseparable gradient approximation (NGA), global-hybrid GGA, meta-GGA, meta-NGA, global-hybrid meta-GGA, range-separated hybrid GGA, range-separated hybrid meta-GGA, range-separated hybrid meta-NGA, and DFT augmented with molecular mechanics damped dispersion (DFT-D)]. DFT-D is tested both for Grimme's DFT-D3(BJ) model with Becke-Johnson damping and for ωB97X-D, which has the empirical atom-atom dispersion parametrized by Chai and Head-Gordon. The Hartree-Fock (HF) method has also been included because it can be viewed as a functional with 100% HF exchange and no correlation. These methods are tested against a database including 70 first-transition-row (3d) transition-metal-containing molecules (19 single-reference molecules and 51 MR molecules), all of which have estimated experimental uncertainties equal to or less than 2.0 kcal/mol in the heat of formation. We analyze the accuracy in terms of the atomization energy per bond instead of the enthalpy of formation of the molecule because it allows us to test electronic energies without the possibility of cancellation of errors in electronic energies with errors in vibrational energies. All the density functional and HF wave functions have been optimized to a stable solution, with the spatial symmetry allowed to break in order to minimize the energy. We find that τ-HCTHhyb has the smallest mean unsigned error (MUE) in average bond energy, in particular 2.5 kcal/mol.

  12. Coulombic free energy of polymeric nucleic acid: low- and high-salt analytical approximations for the cylindrical Poisson-Boltzmann model.

    PubMed

    Shkel, Irina A

    2010-08-26

    An accurate analytical expression for the Coulombic free energy of DNA as a function of salt concentration ([salt]) is essential in applications to nucleic acid (NA) processes. The cylindrical model of DNA and the nonlinear Poisson-Boltzmann (NLPB) equation for ions in solution are among the simplest approaches capable of describing Coulombic interactions of NA and salt ions and of providing analytical expressions for thermodynamic quantities. Three approximations for the Coulombic free energy G_u,∞^coul of a polymeric nucleic acid are derived and compared with the numerical solution in the wide experimental range of 1:1 [salt] from 0.01 to 2 M. Two are obtained from the two asymptotic solutions of the cylindrical NLPB equation in the high-[salt] and low-[salt] limits: these are sufficient to determine G_u,∞^coul of double-stranded (ds) DNA with 1% and of single-stranded (ss) DNA with 3% accuracy at any [salt]. The third approximation is an experimentally motivated Taylor series up to the quadratic term in ln[salt] in the vicinity of the reference [salt] of 0.15 M. This expression with three numerical coefficients (the Coulombic free energy and its first and second derivatives at 0.15 M) predicts the dependence of G_u,∞^coul on [salt] to within 2% of the numerical solution from 0.01 to 1 M for ss (a = 7 Å, b = 3.4 Å) and ds (a = 10 Å, b = 1.7 Å) DNA. Comparison of the cylindrical free energy with that calculated for the all-atom structural model of linear B-DNA shows that the cylindrical model is completely sufficient above 0.01 M of 1:1 [salt]. The choice of the two cylindrical parameters, the distance of closest approach of an ion to the cylinder axis (radius) a and the average axial charge separation b, is discussed in application to all-atom numerical calculations and analysis of experiment. Further development of the analytical expression for the Coulombic free energy with thermodynamic approaches accounting for ionic correlations and specific effects is suggested.
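
    The third (Taylor-series) approximation is simple to evaluate once its three coefficients are known; a minimal sketch follows, with placeholder coefficients: the actual free energy and its first and second ln[salt] derivatives at the 0.15 M reference would have to come from a numerical NLPB solution.

        import math

        def g_coul(salt_M, g0=-1.0, g1=-0.5, g2=0.1, salt_ref=0.15):
            # Quadratic Taylor series in ln[salt] about the reference
            # concentration; g0, g1, g2 are placeholder coefficients.
            u = math.log(salt_M / salt_ref)
            return g0 + g1 * u + 0.5 * g2 * u * u

        print(g_coul(0.01), g_coul(0.15), g_coul(1.0))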

  13. Laplacian-Level Kinetic Energy Approximations Based on the Fourth-Order Gradient Expansion: Global Assessment and Application to the Subsystem Formulation of Density Functional Theory.

    PubMed

    Laricchia, Savio; Constantin, Lucian A; Fabiano, Eduardo; Della Sala, Fabio

    2014-01-14

    We tested Laplacian-level meta-generalized gradient approximation (meta-GGA) noninteracting kinetic energy functionals based on the fourth-order gradient expansion (GE4). We considered several well-known Laplacian-level meta-GGAs from the literature (bare GE4, modified GE4, and the MGGA functional of Perdew and Constantin (Phys. Rev. B 2007, 75, 155109)), as well as two newly designed Laplacian-level kinetic energy functionals (L0.4 and L0.6). First, a general assessment of the different functionals is performed to test them for model systems (one-electron densities, Hooke's atom, and different jellium systems) and atomic and molecular kinetic energies as well as for their behavior with respect to density-scaling transformations. Finally, we assessed, for the first time, the performance of the different functionals for subsystem density functional theory (DFT) calculations on noncovalently interacting systems. We found that the different Laplacian-level meta-GGA kinetic functionals may improve the description of different properties of electronic systems, but no clear overall advantage is found over the best GGA functionals. Concerning the subsystem DFT calculations, the here-proposed L0.4 and L0.6 kinetic energy functionals are competitive with state-of-the-art GGAs, whereas all other Laplacian-level functionals fail badly. The performance of the Laplacian-level functionals is rationalized thanks to a two-dimensional reduced-gradient and reduced-Laplacian decomposition of the nonadditive kinetic energy density.

  14. Efficient modal-expansion discrete-dipole approximation: Application to the simulation of optical extinction and electron energy-loss spectroscopies

    NASA Astrophysics Data System (ADS)

    Guillaume, Stéphane-Olivier; de Abajo, F. Javier García; Henrard, Luc

    2013-12-01

    An efficient procedure is introduced for the calculation of the optical response of individual and coupled metallic nanoparticles in the framework of the discrete-dipole approximation (DDA). We introduce a modal expansion in the basis set of discrete dipoles and show that a few suitably selected modes are sufficient to compute optical spectra with reasonable accuracy, thus reducing the required numerical effort relative to other DDA approaches. Our method offers a natural framework for the study of localized plasmon modes, including plasmon hybridization. As a proof of concept, we investigate optical extinction and electron energy-loss spectra of monomers, dimers, and quadrumers formed by flat silver squares. This method should find application to the previously prohibited simulation of complex particle arrays.

  15. Analytical approximations for x-ray cross sections III

    SciTech Connect

    Biggs, F; Lighthill, R

    1988-08-01

    This report updates our previous work that provided analytical approximations to cross sections for both photoelectric absorption of photons by atoms and incoherent scattering of photons by atoms. This representation is convenient for use in programmable calculators and in computer programs to evaluate these cross sections numerically. The results apply to atoms of atomic numbers between 1 and 100 and for photon energies greater than or equal to 10 eV. The photoelectric cross sections are again approximated by four-term polynomials in reciprocal powers of the photon energy. There are now more fitting intervals, however, than were used previously. The incoherent-scattering cross sections are based on the Klein-Nishina relation, but use simpler approximate equations for efficient computer evaluation. We describe the averaging scheme for applying these atomic results to any composite material. The fitting coefficients are included in tables, and the cross sections are shown graphically. 100 graphs, 1 tab.
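
    The fitted functional form is straightforward to evaluate; a minimal sketch is given below. The coefficients are placeholders: the real A_i are tabulated per element and per fitting interval in the report.

        def photoelectric_sigma(E_keV, A=(1.0, 10.0, 100.0, 1000.0)):
            # Four-term polynomial in reciprocal powers of the photon energy:
            # sigma(E) = A1/E + A2/E^2 + A3/E^3 + A4/E^4.
            # The coefficients here are placeholders, not fitted values.
            return sum(a / E_keV ** (i + 1) for i, a in enumerate(A))

        print(photoelectric_sigma(50.0))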

  16. Frequentist evaluation of intervals estimated for a binomial parameter and for the ratio of Poisson means

    NASA Astrophysics Data System (ADS)

    Cousins, Robert D.; Hymes, Kathryn E.; Tucker, Jordan

    2010-01-01

    Confidence intervals for a binomial parameter or for the ratio of Poisson means are commonly desired in high energy physics (HEP) applications such as measuring a detection efficiency or branching ratio. Due to the discreteness of the data, in both of these problems the frequentist coverage probability unfortunately depends on the unknown parameter. Trade-offs among desiderata have led to numerous sets of intervals in the statistics literature, while in HEP one typically encounters only the classic intervals of Clopper-Pearson (central intervals with no undercoverage but substantial overcoverage) or a few approximate methods which perform rather poorly. If strict coverage is relaxed, some sort of averaging is needed to compare intervals. In most of the statistics literature, this averaging is over different values of the unknown parameter, which is conceptually problematic from the frequentist point of view in which the unknown parameter is typically fixed. In contrast, we perform an (unconditional) average over observed data in the ratio-of-Poisson-means problem. If strict conditional coverage is desired, we recommend Clopper-Pearson intervals and intervals from inverting the likelihood ratio test (for central and non-central intervals, respectively). Lancaster's mid-P modification to either provides excellent unconditional average coverage in the ratio-of-Poisson-means problem.
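
    For reference, the Clopper-Pearson central interval recommended above follows directly from beta-distribution quantiles; a minimal sketch using scipy:

        from scipy.stats import beta

        def clopper_pearson(k, n, conf=0.95):
            # Central confidence interval for a binomial parameter;
            # coverage is guaranteed >= conf at the price of overcoverage.
            alpha = 1.0 - conf
            lo = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
            hi = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
            return lo, hi

        print(clopper_pearson(3, 10))  # e.g. 3 successes out of 10 trials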

  17. Finding the Best Quadratic Approximation of a Function

    ERIC Educational Resources Information Center

    Yang, Yajun; Gordon, Sheldon P.

    2011-01-01

    This article examines the question of finding the best quadratic function to approximate a given function on an interval. The prototypical function considered is f(x) = e^x. Two approaches are considered, one based on Taylor polynomial approximations at various points in the interval under consideration, the other based on the fact…
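
    The two approaches are easy to contrast numerically; a sketch for f(x) = e^x on [0, 1], comparing the Taylor quadratic at the midpoint with a least-squares quadratic. The interval and sample grid are assumptions chosen for illustration.

        import numpy as np

        x = np.linspace(0.0, 1.0, 201)
        f = np.exp(x)

        # Taylor quadratic about the midpoint a = 0.5:
        # e^a * (1 + (x - a) + (x - a)^2 / 2)
        a = 0.5
        taylor = np.exp(a) * (1 + (x - a) + 0.5 * (x - a) ** 2)

        # Least-squares "best" quadratic over the whole interval
        ls = np.polyval(np.polyfit(x, f, 2), x)

        print(np.max(np.abs(f - taylor)))  # worst error of the Taylor quadratic
        print(np.max(np.abs(f - ls)))      # noticeably smaller for least squares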

  18. Approximate flavor symmetries

    SciTech Connect

    Rasin, A.

    1994-04-01

    We discuss the idea of approximate flavor symmetries. Relations between approximate flavor symmetries and natural flavor conservation and democracy models are explored. Implications for neutrino physics are also discussed.

  19. Generalized Vibrational Perturbation Theory for Rotovibrational Energies of Linear, Symmetric and Asymmetric Tops: Theory, Approximations, and Automated Approaches to Deal with Medium-to-Large Molecular Systems

    PubMed Central

    Piccardo, Matteo; Bloino, Julien; Barone, Vincenzo

    2015-01-01

    Models going beyond the rigid-rotor and the harmonic oscillator levels are mandatory for providing accurate theoretical predictions for several spectroscopic properties. Different strategies have been devised for this purpose. Among them, the treatment by perturbation theory of the molecular Hamiltonian after its expansion in power series of products of vibrational and rotational operators, also referred to as vibrational perturbation theory (VPT), is particularly appealing for its computational efficiency to treat medium-to-large systems. Moreover, generalized (GVPT) strategies combining the use of perturbative and variational formalisms can be adopted to further improve the accuracy of the results, with the first approach used for weakly coupled terms, and the second one to handle tightly coupled ones. In this context, the GVPT formulation for asymmetric, symmetric, and linear tops is revisited and fully generalized to both minima and first-order saddle points of the molecular potential energy surface. The computational strategies and approximations that can be adopted in dealing with GVPT computations are pointed out, with a particular attention devoted to the treatment of symmetry and degeneracies. A number of tests and applications are discussed, to show the possibilities of the developments, as regards both the variety of treatable systems and eligible methods. © 2015 Wiley Periodicals, Inc. PMID:26345131

  20. Low-energy dipole excitations in neon isotopes and N=16 isotones within the quasiparticle random-phase approximation and the Gogny force

    SciTech Connect

    Martini, M.; Peru, S.; Dupuis, M.

    2011-03-15

    Low-energy dipole excitations in neon isotopes and N=16 isotones are calculated with a fully consistent, axially symmetric deformed quasiparticle random phase approximation (QRPA) approach based on Hartree-Fock-Bogolyubov (HFB) states. The same Gogny D1S effective force has been used in both the HFB and QRPA calculations. The microscopic structure of these low-lying resonances, as well as the behavior of the proton and neutron transition densities, is investigated in order to determine the isoscalar or isovector nature of the excitations. It is found that the N=16 isotones {sup 24}O, {sup 26}Ne, {sup 28}Mg, and {sup 30}Si are characterized by a similar behavior. The occupation of the 2s{sub 1/2} neutron orbit turns out to be crucial, leading to nontrivial transition densities and to small but finite collectivity. Some low-lying dipole excitations of {sup 28}Ne and {sup 30}Ne, characterized by transitions involving the {nu}1d{sub 3/2} state, present a more collective behavior and isoscalar transition densities. A collective proton low-lying excitation is identified in the {sup 18}Ne nucleus.

  1. Approximating random quantum optimization problems

    NASA Astrophysics Data System (ADS)

    Hsu, B.; Laumann, C. R.; Läuchli, A. M.; Moessner, R.; Sondhi, S. L.

    2013-06-01

    We report a cluster of results regarding the difficulty of finding approximate ground states to typical instances of the quantum satisfiability problem k-body quantum satisfiability (k-QSAT) on large random graphs. As an approximation strategy, we optimize the solution space over “classical” product states, which in turn introduces a novel autonomous classical optimization problem, PSAT, over a space of continuous degrees of freedom rather than discrete bits. Our central results are (i) the derivation of a set of bounds and approximations in various limits of the problem, several of which we believe may be amenable to a rigorous treatment; (ii) a demonstration that an approximation based on a greedy algorithm borrowed from the study of frustrated magnetism performs well over a wide range in parameter space, and its performance reflects the structure of the solution space of random k-QSAT. Simulated annealing exhibits metastability in similar “hard” regions of parameter space; and (iii) a generalization of belief propagation algorithms introduced for classical problems to the case of continuous spins. This yields both approximate solutions, as well as insights into the free energy “landscape” of the approximation problem, including a so-called dynamical transition near the satisfiability threshold. Taken together, these results allow us to elucidate the phase diagram of random k-QSAT in a two-dimensional energy-density-clause-density space.

  2. Approximation of Laws

    NASA Astrophysics Data System (ADS)

    Niiniluoto, Ilkka

    2014-03-01

    Approximation of laws is an important theme in the philosophy of science. If we can make sense of the idea that two scientific laws are "close" to each other, then we can also analyze such methodological notions as approximate explanation of laws, approximate reduction of theories, approximate empirical success of theories, and approximate truth of laws. Proposals for measuring the distance between quantitative scientific laws were given in Niiniluoto (1982, 1987). In this paper, these definitions are reconsidered as a response to the interesting critical remarks by Liu (1999).

  3. A fast parallel code for calculating energies and oscillator strengths of many-electron atoms at neutron star magnetic field strengths in adiabatic approximation

    NASA Astrophysics Data System (ADS)

    Engel, D.; Klews, M.; Wunner, G.

    2009-02-01

    We have developed a new method for the fast computation of wavelengths and oscillator strengths for medium-Z atoms and ions, up to iron, at neutron star magnetic field strengths. The method is a parallelized Hartree-Fock approach in adiabatic approximation based on finite-element and B-spline techniques. It turns out that typically 15-20 finite elements are sufficient to calculate energies to within a relative accuracy of 10-5 in 4 or 5 iteration steps using B-splines of 6th order, with parallelization speed-ups of 20 on a 26-processor machine. Results have been obtained for the energies of the ground states and excited levels and for the transition strengths of astrophysically relevant atoms and ions in the range Z=2…26 in different ionization stages.
    Catalogue identifier: AECC_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AECC_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 3845
    No. of bytes in distributed program, including test data, etc.: 27 989
    Distribution format: tar.gz
    Programming language: MPI/Fortran 95 and Python
    Computer: Cluster of 1-26 HP Compaq dc5750
    Operating system: Fedora 7
    Has the code been vectorised or parallelized?: Yes
    RAM: 1 GByte
    Classification: 2.1
    External routines: MPI/GFortran, LAPACK, PyLab/Matplotlib
    Nature of problem: Calculations of synthetic spectra [1] of strongly magnetized neutron stars are bedevilled by the lack of data for atoms in intense magnetic fields. While the behaviour of hydrogen and helium has been investigated in detail (see, e.g., [2]), complete and reliable data for heavier elements, in particular iron, are still missing. Since neutron stars are formed by the collapse of the iron cores of massive stars, it may be assumed that their atmospheres contain an iron plasma. Our objective is to fill the gap.

  5. High resolution time interval counter

    DOEpatents

    Condreva, Kenneth J.

    1994-01-01

    A high resolution counter circuit measures the time interval between the occurrence of an initial and a subsequent electrical pulse to two nanoseconds resolution using an eight megahertz clock. The circuit includes a main counter for receiving electrical pulses and generating a binary word--a measure of the number of eight megahertz clock pulses occurring between the signals. A pair of first and second pulse stretchers receive the signal and generate a pair of output signals whose widths are approximately sixty-four times the time between the receipt of the signals by the respective pulse stretchers and the receipt by the respective pulse stretchers of a second subsequent clock pulse. Output signals are thereafter supplied to a pair of start and stop counters operable to generate a pair of binary output words representative of the measure of the width of the pulses to a resolution of two nanoseconds. Errors associated with the pulse stretchers are corrected by providing calibration data to both stretcher circuits, and recording start and stop counter values. Stretched initial and subsequent signals are combined with autocalibration data and supplied to an arithmetic logic unit to determine the time interval in nanoseconds between the pair of electrical pulses being measured.
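
    The arithmetic behind such an interpolating counter can be sketched as follows; the variable names and the exact combination are an illustrative reconstruction (calibration corrections omitted), not the patent's logic diagram.

        CLOCK_HZ = 8e6                    # eight megahertz clock
        T_CLK_NS = 1e9 / CLOCK_HZ         # 125 ns per clock period
        STRETCH = 64                      # pulse stretcher ratio; 125/64 ~ 2 ns

        def interval_ns(n_main, n_start, n_stop):
            # Coarse count from the main counter plus fine corrections from
            # the start/stop stretcher counters (autocalibration terms omitted).
            coarse = n_main * T_CLK_NS
            fine = (n_start - n_stop) * T_CLK_NS / STRETCH
            return coarse + fine

        print(interval_ns(n_main=10, n_start=40, n_stop=12))  # ~1304.7 ns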

  7. Sparse pseudospectral approximation method

    NASA Astrophysics Data System (ADS)

    Constantine, Paul G.; Eldred, Michael S.; Phipps, Eric T.

    2012-07-01

    Multivariate global polynomial approximations - such as polynomial chaos or stochastic collocation methods - are now in widespread use for sensitivity analysis and uncertainty quantification. The pseudospectral variety of these methods uses a numerical integration rule to approximate the Fourier-type coefficients of a truncated expansion in orthogonal polynomials. For problems in more than two or three dimensions, a sparse grid numerical integration rule offers accuracy with a smaller node set compared to tensor product approximation. However, when using a sparse rule to approximately integrate these coefficients, one often finds unacceptable errors in the coefficients associated with higher degree polynomials. By reexamining Smolyak's algorithm and exploiting the connections between interpolation and projection in tensor product spaces, we construct a sparse pseudospectral approximation method that accurately reproduces the coefficients of basis functions that naturally correspond to the sparse grid integration rule. The compelling numerical results show that this is the proper way to use sparse grid integration rules for pseudospectral approximation.
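
    In one dimension, the pseudospectral ingredient the paper builds on reduces to approximating projection coefficients by quadrature; a minimal Fourier-Legendre sketch follows (the sparse, multivariate Smolyak construction itself is not reproduced here).

        import numpy as np
        from numpy.polynomial import legendre

        def pseudospectral_coeffs(f, degree):
            # Approximate Fourier-Legendre coefficients of f on [-1, 1] with a
            # Gauss-Legendre rule: c_k = (2k+1)/2 * integral of f(x) P_k(x) dx.
            x, w = legendre.leggauss(degree + 1)
            fx = f(x)
            return np.array([(2 * k + 1) / 2.0
                             * np.sum(w * fx * legendre.Legendre.basis(k)(x))
                             for k in range(degree + 1)])

        c = pseudospectral_coeffs(np.exp, degree=5)
        x0 = 0.3
        print(legendre.Legendre(c)(x0), np.exp(x0))  # truncated expansion vs exact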

  8. Approximate spatial reasoning

    NASA Technical Reports Server (NTRS)

    Dutta, Soumitra

    1988-01-01

    A model for approximate spatial reasoning using fuzzy logic to represent the uncertainty in the environment is presented. Algorithms are developed which can be used to reason about spatial information expressed in the form of approximate linguistic descriptions similar to the kind of spatial information processed by humans. Particular attention is given to static spatial reasoning.

  9. Explicitly solvable complex Chebyshev approximation problems related to sine polynomials

    NASA Technical Reports Server (NTRS)

    Freund, Roland

    1989-01-01

    Explicitly solvable real Chebyshev approximation problems on the unit interval are typically characterized by simple error curves. A similar principle is presented for complex approximation problems with error curves induced by sine polynomials. As an application, some new explicit formulae for complex best approximations are derived.

  10. A measurement of the cosmic ray elements C to Fe in the two energy intervals 0.5-2.0 GeV/n and 20-60 GeV/n

    NASA Technical Reports Server (NTRS)

    Derrickson, J. H.; Parnell, T. A.; Watts, J. W.; Gregory, J. C.

    1985-01-01

    The study of cosmic ray abundances beyond 20 GeV/n provides additional information on the propagation and containment of cosmic rays in the galaxy. Since the average amount of interstellar material traversed by cosmic rays decreases as their energy increases, the source composition undergoes less distortion in this higher energy region. However, data over a wide energy range are necessary to study propagation parameters. Measurements of several primary cosmic ray abundance ratios at both low (near 2 GeV/n) and high (above 20 GeV/n) energies are given and compared to the predictions of the leaky box model. In particular, the integrated values (above 23.7 GeV/n) for the more abundant cosmic ray elements in the interval C through Fe and the differential flux for carbon, oxygen, and the Ne, Mg, Si group are presented. Limited statistics prevented the inclusion of the odd-Z elements.

  11. New approximations to the energy dependences of the total cross sections for the proton-induced fission of {sup 197}Au, {sup 203}Tl, {sup nat}Pb, and {sup 209}Bi nuclei

    SciTech Connect

    Vaishnene, L. A.; Vovchenko, V. G.; Gavrikov, Yu. A.; Murzin, V. I.; Polyakov, V. V.; Tverskoi, M. G.; Fedorov, O. Ya.; Chestnov, Yu. A. Shvedchikov, A. V.; Shchetkovskii, A. I.

    2011-01-15

    The total cross sections for {sup 197}Au and {sup 203}Tl fission induced by protons with energies varied from about 200 to 1000 MeV in steps of about 100 MeV are measured. New approximations to the energy dependences of the cross sections for the proton-induced fission of {sup 197}Au, {sup 203}Tl, {sup nat}Pb, and {sup 209}Bi nuclei are presented and discussed. For all of these nuclei, exponential functions are used as approximations.

  12. Interval arithmetic operations for uncertainty analysis with correlated interval variables

    NASA Astrophysics Data System (ADS)

    Jiang, Chao; Fu, Chun-Ming; Ni, Bing-Yu; Han, Xu

    2016-08-01

    A new interval arithmetic method is proposed to solve interval functions with correlated intervals, whereby the overestimation problem in interval analysis can be significantly alleviated. The correlation between interval parameters is defined by the multidimensional parallelepiped model, which describes correlated and independent interval variables in a unified framework. The original interval variables with correlation are transformed into a standard space without correlation, and the relationship between the original variables and the standard interval variables is then obtained. The expressions of the four basic interval arithmetic operations, namely addition, subtraction, multiplication, and division, are given in the standard space. Finally, several numerical examples and a two-step bar are used to demonstrate the effectiveness of the proposed method.
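
    The overestimation problem targeted here is easy to reproduce with naive, correlation-blind interval arithmetic; a minimal sketch of the four basic operations follows (the parallelepiped-based transformation to uncorrelated standard variables is not reproduced).

        class Interval:
            def __init__(self, lo, hi):
                self.lo, self.hi = lo, hi
            def __add__(self, o):
                return Interval(self.lo + o.lo, self.hi + o.hi)
            def __sub__(self, o):
                return Interval(self.lo - o.hi, self.hi - o.lo)
            def __mul__(self, o):
                ps = [self.lo * o.lo, self.lo * o.hi,
                      self.hi * o.lo, self.hi * o.hi]
                return Interval(min(ps), max(ps))
            def __truediv__(self, o):
                assert o.lo > 0 or o.hi < 0, "divisor must exclude zero"
                return self * Interval(1 / o.hi, 1 / o.lo)
            def __repr__(self):
                return f"[{self.lo}, {self.hi}]"

        x = Interval(1, 2)
        print(x - x)  # [-1, 1], not [0, 0]: the dependency problem the
                      # correlated-interval method is designed to reduce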

  13. Multichannel interval timer (MINT)

    SciTech Connect

    Kimball, K.B.

    1982-06-01

    A prototype Multichannel INterval Timer (MINT) has been built for measuring signal Time of Arrival (TOA) from sensors placed in blast environments. The MINT is intended to reduce the space, equipment costs, and data reduction efforts associated with traditional analog TOA recording methods, making it more practical to field the large arrays of TOA sensors required to characterize blast environments. This document describes the MINT design features, provides the information required for installing and operating the system, and presents proposed improvements for the next generation system.

  14. Interval-valued random functions and the kriging of intervals

    SciTech Connect

    Diamond, P.

    1988-04-01

    Estimation procedures using data that include some values known to lie within certain intervals are usually regarded as problems of constrained optimization. A different approach is used here. Intervals are treated as elements of a positive cone, obeying the arithmetic of interval analysis, and positive interval-valued random functions are discussed. A kriging formalism for interval-valued data is developed. It provides estimates that are themselves intervals. In this context, the condition that kriging weights be positive is seen to arise in a natural way. A numerical example is given, and the extension to universal kriging is sketched.

  15. Approximate kernel competitive learning.

    PubMed

    Wu, Jian-Sheng; Zheng, Wei-Shi; Lai, Jian-Huang

    2015-03-01

    Kernel competitive learning has been successfully used to achieve robust clustering. However, kernel competitive learning (KCL) is not scalable for large scale data processing, because (1) it has to calculate and store the full kernel matrix, which is too large to be calculated and kept in memory, and (2) it cannot be computed in parallel. In this paper we develop a framework of approximate kernel competitive learning for processing large scale datasets. The proposed framework consists of two parts. First, it derives an approximate kernel competitive learning (AKCL) method, which learns kernel competitive learning in a subspace via sampling. We provide solid theoretical analysis on why the proposed approximation would work for kernel competitive learning, and furthermore, we show that the computational complexity of AKCL is largely reduced. Second, we propose a pseudo-parallel approximate kernel competitive learning (PAKCL) method based on a set-based kernel competitive learning strategy, which overcomes the obstacle of using parallel programming in kernel competitive learning and significantly accelerates the approximate kernel competitive learning for large scale clustering. The empirical evaluation on publicly available datasets shows that the proposed AKCL and PAKCL perform comparably to KCL, with a large reduction in computational cost. Also, the proposed methods achieve more effective clustering performance, in terms of clustering precision, than related approximate clustering approaches.

  17. Volatility return intervals analysis of the Japanese market

    NASA Astrophysics Data System (ADS)

    Jung, W.-S.; Wang, F. Z.; Havlin, S.; Kaizoji, T.; Moon, H.-T.; Stanley, H. E.

    2008-03-01

    We investigate scaling and memory effects in return intervals between price volatilities above a certain threshold q for the Japanese stock market, using daily and intraday data sets. We find that the distribution of return intervals can be approximated by a scaling function that depends only on the ratio between the return interval τ and its mean <τ>. We also find memory effects, such that a large (or small) return interval follows a large (or small) interval, by investigating the conditional distribution and the mean return interval. The results are similar to previous studies of other markets and indicate that similar statistical features appear in different financial markets. We also compare our results for the periods before and after the big crash at the end of 1989. We find that the scaling and memory effects of the return intervals show similar features, although the statistical properties of the returns are different.
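
    The return-interval statistic is straightforward to extract from any return series; a minimal sketch in which synthetic Gaussian returns stand in for the market data used in the paper:

        import numpy as np

        rng = np.random.default_rng(1)
        returns = rng.standard_normal(100_000)  # stand-in for normalized returns
        q = 2.0                                 # volatility threshold

        exceed = np.flatnonzero(np.abs(returns) > q)
        tau = np.diff(exceed)                   # intervals between exceedances
        scaled = tau / tau.mean()               # scaling variable tau / <tau>
        print(tau.mean(), np.quantile(scaled, [0.5, 0.9, 0.99]))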

  18. Metabolic response of different high-intensity aerobic interval exercise protocols.

    PubMed

    Gosselin, Luc E; Kozlowski, Karl F; DeVinney-Boymel, Lee; Hambridge, Caitlin

    2012-10-01

    Although high-intensity sprint interval training (SIT) employing the Wingate protocol results in significant physiological adaptations, it is conducted at supramaximal intensity and is potentially unsafe for sedentary middle-aged adults. We therefore evaluated the metabolic and cardiovascular response in healthy young individuals performing 4 high-intensity (~90% VO2max) aerobic interval training (HIT) protocols with similar total work output but different work-to-rest ratios. Eight young physically active subjects participated in 5 different bouts of exercise over a 3-week period. Protocol 1 consisted of 20-minute continuous exercise at approximately 70% of VO2max, whereas protocols 2-5 were interval based with work:active-rest durations (in seconds) of 30/30, 60/30, 90/30, and 60/60, respectively. Each interval protocol resulted in approximately 10 minutes of exercise at a workload corresponding to approximately 90% VO2max, but differed in the total rest duration. The 90/30 HIT protocol resulted in the highest VO2, HR, rating of perceived exertion, and blood lactate, whereas the 30/30 protocol resulted in the lowest of these parameters. The total caloric energy expenditure was lowest in the 90/30 and 60/30 protocols (~150 kcal), whereas the other 3 protocols did not differ (~195 kcal) from one another. The immediate postexercise blood pressure response was similar across all the protocols. These findings indicate that HIT performed at approximately 90% of VO2max is no more physiologically taxing than steady-state exercise conducted at 70% VO2max, but the response during HIT is influenced by the work-to-rest ratio. This interval protocol may be used as an alternative approach to steady-state exercise training, but with less time commitment.

  19. On Stochastic Approximation.

    ERIC Educational Resources Information Center

    Wolff, Hans

    This paper deals with a stochastic process for the approximation of the root of a regression equation. This process was first suggested by Robbins and Monro. The main result here is a necessary and sufficient condition on the iteration coefficients for convergence of the process (convergence with probability one and convergence in the quadratic…
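
    A minimal sketch of a Robbins-Monro iteration of the kind analyzed here, for a toy regression function with a known root; the regression function, noise model, and coefficients a_n = 1/n are illustrative assumptions.

        import random

        def noisy_regression(x, root=2.0, noise=0.3):
            # Observation Y(x) with E[Y(x)] = M(x) = x - root.
            return (x - root) + random.gauss(0.0, noise)

        x = 0.0
        for n in range(1, 5001):
            a_n = 1.0 / n  # satisfies sum a_n = inf and sum a_n^2 < inf
            x -= a_n * noisy_regression(x)  # Robbins-Monro update
        print(round(x, 2))                  # approaches the root 2.0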

  20. Optimal approximate doubles

    NASA Astrophysics Data System (ADS)

    Huang, Siendong

    2009-11-01

    The nonlocality of quantum states on a bipartite system 𝒜+ℬ is tested by comparing probabilistic outcomes of two local observables of different subsystems. For a fixed observable A of the subsystem 𝒜, its optimal approximate double A′ on the other system ℬ is defined such that the probabilistic outcomes of A′ are almost similar to those of the fixed observable A. The case of σ-finite standard von Neumann algebras is considered, and the optimal approximate double A′ of an observable A is explicitly determined. The connection between optimal approximate doubles and quantum correlations is explained. Inspired by quantum states with perfect correlation, like Einstein-Podolsky-Rosen states and Bohm states, the nonlocality power of an observable A for general quantum states is defined as the similarity with which the outcomes of A look like the properties of the subsystem ℬ corresponding to A′. As an application of optimal approximate doubles, the maximal Bell correlation of a pure entangled state on B(ℂ²)⊗B(ℂ²) is found explicitly.

  1. Approximating Integrals Using Probability

    ERIC Educational Resources Information Center

    Maruszewski, Richard F., Jr.; Caudle, Kyle A.

    2005-01-01

    This article is part of a discussion on Monte Carlo methods that outlines how to use probability expectations to approximate the value of a definite integral. The purpose of this paper is to elaborate on this technique and then to show several examples using Visual Basic as a programming tool. It is an interesting method because it combines two branches of…
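
    A minimal sketch of the technique (in Python rather than the paper's Visual Basic; the integrand is an arbitrary illustrative choice): a definite integral over [0, 1] is rewritten as the expectation E[f(U)] for U ~ Uniform(0, 1), so the sample mean of f at random draws estimates it.

        import random

        # Estimate I = integral of x^2 over [0, 1] by viewing it as E[f(U)],
        # U ~ Uniform(0, 1); the sample mean converges to I = 1/3 by the
        # law of large numbers.
        def mc_integral(f, n=100_000):
            return sum(f(random.random()) for _ in range(n)) / n

        print(mc_integral(lambda x: x * x))  # approximately 0.3333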

  2. Dependence of the specific energy of the β/α interface in the VT6 titanium alloy on the heating temperature in the interval 600-975°C

    NASA Astrophysics Data System (ADS)

    Murzinova, M. A.; Zherebtsov, S. V.; Salishchev, G. A.

    2016-04-01

    The specific energy of interphase boundaries is an important characteristic of multiphase alloys, because it determines in many respects their microstructural stability and properties during processing and operation. We analyze the variation of the specific energy of the β/α interface in the VT6 titanium alloy at temperatures from 600 to 975°C. The analysis is based on the model of a ledge interphase boundary and the method for computation of its energy developed by van der Merwe and Shiflet [33, 34]. The calculations use the available results of measurements of the lattice parameters of the phases in the indicated temperature interval and their chemical composition. In addition, we take into account the experimental data and the results of simulation of the effect of temperature and phase composition on the elastic moduli of the α and β phases in titanium alloys. It is shown that when the temperature decreases from 975 to 600°C, the specific energy of the β/α interface increases from 0.15 to 0.24 J/m². The main contribution to the interfacial energy (about 85%) comes from edge dislocations accommodating the misfit in the direction $[0001]_\alpha \parallel [110]_\beta$. The energy associated with the accommodation of the misfit in the directions $[\bar{2}110]_\alpha \parallel [1\bar{1}1]_\beta$ and $[0\bar{1}10]_\alpha \parallel [\bar{1}12]_\beta$ due to the formation of "ledges" and tilt misfit dislocations is low and increases slightly upon cooling.

  3. Approximate Brueckner orbitals in electron propagator calculations

    SciTech Connect

    Ortiz, J.V.

    1999-12-01

    Orbitals and ground-state correlation amplitudes from the so-called Brueckner doubles approximation of coupled-cluster theory provide a useful reference state for electron propagator calculations. An operator manifold with hole, particle, two-hole-one-particle and two-particle-one-hole components is chosen. The resulting approximation is compared with the two-particle-one-hole Tamm-Dancoff approximation [2ph-TDA], third-order algebraic diagrammatic construction [ADC(3)] and 3+ methods. The enhanced versatility of this approximation is demonstrated through calculations on valence ionization energies, core ionization energies, electron detachment energies of anions, and on a molecule with partial biradical character, ozone.

  4. Optimizing the Zeldovich approximation

    NASA Technical Reports Server (NTRS)

    Melott, Adrian L.; Pellman, Todd F.; Shandarin, Sergei F.

    1994-01-01

    We have recently learned that the Zeldovich approximation can be successfully used for a far wider range of gravitational instability scenarios than formerly proposed; we study here how to extend this range. In previous work (Coles, Melott and Shandarin 1993, hereafter CMS) we studied the accuracy of several analytic approximations to gravitational clustering in the mildly nonlinear regime. We found that what we called the 'truncated Zeldovich approximation' (TZA) was better than any other (except in one case the ordinary Zeldovich approximation) over a wide range from linear to mildly nonlinear (σ ≈ 3) regimes. TZA was specified by setting Fourier amplitudes equal to zero for all wavenumbers greater than k_nl, where k_nl marks the transition to the nonlinear regime. Here, we study the cross correlation of generalized TZA with a group of n-body simulations for three shapes of window function: sharp k-truncation (as in CMS), a tophat in coordinate space, or a Gaussian. We also study the variation in the cross correlation as a function of initial truncation scale within each type. We find that k-truncation, which was so much better than other choices tried in CMS, is the worst of these three window shapes. We find that a Gaussian window exp(−k²/2k_G²) applied to the initial Fourier amplitudes is the best choice. It produces a greatly improved cross correlation in those cases which most needed improvement, e.g., those with more small-scale power in the initial conditions. The optimum choice of k_G for the Gaussian window is (a somewhat spectrum-dependent) 1 to 1.5 times k_nl. Although all three windows produce similar power spectra and density distribution functions after application of the Zeldovich approximation, the agreement of the phases of the Fourier components with the n-body simulation is better for the Gaussian window. We therefore ascribe the success of the best-choice Gaussian window to its superior treatment
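
    A toy one-dimensional sketch of the windowing step described above (Python/NumPy; the random field and the value of k_nl are placeholders, and the actual study uses three-dimensional fields compared against n-body simulations): the initial Fourier amplitudes are multiplied by the Gaussian window exp(−k²/2k_G²) before the Zeldovich displacement would be applied.

        import numpy as np

        n, L = 128, 1.0
        k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)   # wavenumber grid

        rng = np.random.default_rng(1)
        delta_k = np.fft.fft(rng.normal(size=n))     # stand-in initial amplitudes

        k_nl = 40.0            # assumed transition to the nonlinear regime
        k_G = 1.25 * k_nl      # best-choice scale: 1 to 1.5 times k_nl

        # Gaussian truncation of the initial conditions, as in the abstract.
        delta_k_windowed = delta_k * np.exp(-k**2 / (2 * k_G**2))
        delta_windowed = np.fft.ifft(delta_k_windowed).real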

  5. Intervality and coherence in complex networks.

    PubMed

    Domínguez-García, Virginia; Johnson, Samuel; Muñoz, Miguel A

    2016-06-01

    Food webs-networks of predators and prey-have long been known to exhibit "intervality": species can generally be ordered along a single axis in such a way that the prey of any given predator tend to lie on unbroken compact intervals. Although the meaning of this axis-usually identified with a "niche" dimension-has remained a mystery, it is assumed to lie at the basis of the highly non-trivial structure of food webs. With this in mind, most trophic network modelling has for decades been based on assigning species a niche value by hand. However, we argue here that intervality should not be considered the cause but rather a consequence of food-web structure. First, analysing a set of 46 empirical food webs, we find that they also exhibit predator intervality: the predators of any given species are as likely to be contiguous as the prey are, but in a different ordering. Furthermore, this property is not exclusive of trophic networks: several networks of genes, neurons, metabolites, cellular machines, airports, and words are found to be approximately as interval as food webs. We go on to show that a simple model of food-web assembly which does not make use of a niche axis can nevertheless generate significant intervality. Therefore, the niche dimension (in the sense used for food-web modelling) could in fact be the consequence of other, more fundamental structural traits. We conclude that a new approach to food-web modelling is required for a deeper understanding of ecosystem assembly, structure, and function, and propose that certain topological features thought to be specific of food webs are in fact common to many complex networks. PMID:27368797
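
    The notion of intervality can be made concrete in a few lines of code. The sketch below (Python; the tiny food web and the species ordering are invented for illustration) scores an ordering by the fraction of predators whose prey occupy an unbroken interval along it.

        def interval_fraction(prey_sets, ordering):
            # Fraction of predators whose prey sit at consecutive positions
            # in the given ordering (a crude intervality score).
            pos = {s: i for i, s in enumerate(ordering)}
            predators = [p for p, prey in prey_sets.items() if prey]
            hits = 0
            for p in predators:
                idx = sorted(pos[s] for s in prey_sets[p])
                if idx[-1] - idx[0] + 1 == len(idx):   # no gaps in the interval
                    hits += 1
            return hits / len(predators)

        web = {"owl": {"mouse", "vole"}, "fox": {"vole", "rabbit"}, "vole": set()}
        print(interval_fraction(web, ["mouse", "vole", "rabbit"]))  # -> 1.0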

  6. Intervality and coherence in complex networks

    NASA Astrophysics Data System (ADS)

    Domínguez-García, Virginia; Johnson, Samuel; Muñoz, Miguel A.

    2016-06-01

    Food webs—networks of predators and prey—have long been known to exhibit "intervality": species can generally be ordered along a single axis in such a way that the prey of any given predator tend to lie on unbroken compact intervals. Although the meaning of this axis—usually identified with a "niche" dimension—has remained a mystery, it is assumed to lie at the basis of the highly non-trivial structure of food webs. With this in mind, most trophic network modelling has for decades been based on assigning species a niche value by hand. However, we argue here that intervality should not be considered the cause but rather a consequence of food-web structure. First, analysing a set of 46 empirical food webs, we find that they also exhibit predator intervality: the predators of any given species are as likely to be contiguous as the prey are, but in a different ordering. Furthermore, this property is not exclusive of trophic networks: several networks of genes, neurons, metabolites, cellular machines, airports, and words are found to be approximately as interval as food webs. We go on to show that a simple model of food-web assembly which does not make use of a niche axis can nevertheless generate significant intervality. Therefore, the niche dimension (in the sense used for food-web modelling) could in fact be the consequence of other, more fundamental structural traits. We conclude that a new approach to food-web modelling is required for a deeper understanding of ecosystem assembly, structure, and function, and propose that certain topological features thought to be specific of food webs are in fact common to many complex networks.

  7. An interval model updating strategy using interval response surface models

    NASA Astrophysics Data System (ADS)

    Fang, Sheng-En; Zhang, Qiu-Hu; Ren, Wei-Xin

    2015-08-01

    Stochastic model updating provides an effective way of handling uncertainties existing in real-world structures. In general, probabilistic theories, fuzzy mathematics or interval analyses are involved in the solution of inverse problems. In practice, however, probability distributions or membership functions of structural parameters are often unavailable due to insufficient information about a structure. In such cases an interval model updating procedure offers the advantage of problem simplification, since only the upper and lower bounds of parameters and responses are sought. To this end, this study develops a new concept of interval response surface models to implement the interval model updating procedure efficiently. The frequent interval overestimation caused by interval arithmetic can be largely avoided, leading to accurate estimation of parameter intervals. Meanwhile, the establishment of an interval inverse problem is greatly simplified, with an accompanying saving of computational costs. By this means a relatively simple and cost-efficient interval updating process can be achieved. Lastly, the feasibility and reliability of the developed method have been verified against a numerical mass-spring system and against a set of experimentally tested steel plates.

  8. Generalized Gradient Approximation Made Simple

    SciTech Connect

    Perdew, J.P.; Burke, K.; Ernzerhof, M.

    1996-10-01

    Generalized gradient approximations (GGAs) for the exchange-correlation energy improve upon the local spin density (LSD) description of atoms, molecules, and solids. We present a simple derivation of a simple GGA, in which all parameters (other than those in LSD) are fundamental constants. Only general features of the detailed construction underlying the Perdew-Wang 1991 (PW91) GGA are invoked. Improvements over PW91 include an accurate description of the linear response of the uniform electron gas, correct behavior under uniform scaling, and a smoother potential.

  9. Fermion tunneling beyond semiclassical approximation

    NASA Astrophysics Data System (ADS)

    Majhi, Bibhas Ranjan

    2009-02-01

    Applying the Hamilton-Jacobi method beyond the semiclassical approximation prescribed in R. Banerjee and B. R. Majhi [J. High Energy Phys. 06 (2008) 095] for the scalar particle, Hawking radiation as tunneling of the Dirac particle through an event horizon is analyzed. We show that, as before, all quantum corrections in the single particle action are proportional to the usual semiclassical contribution. We also compute the modifications to the Hawking temperature and Bekenstein-Hawking entropy for the Schwarzschild black hole. Finally, the coefficient of the logarithmic correction to entropy is shown to be related with the trace anomaly.

  10. Analysis of experimental data on doublet neutron-deuteron scattering at energies below the deuteron-breakup threshold on the basis of the pole approximation of the effective-range function

    SciTech Connect

    Babenko, V. A.; Petrov, N. M.

    2008-01-15

    On the basis of the Bargmann representation of the S matrix, the pole approximation is obtained for the effective-range function k cot δ. This approximation is optimal for describing the neutron-deuteron system in the doublet spin state. The values of r_0 = 412.469 fm and v_2 = −35 495.62 fm³ for the doublet low-energy parameters of neutron-deuteron scattering and the value of D = 172.678 fm² for the respective pole parameter are deduced by using experimental results for the triton binding energy E_T, the doublet neutron-deuteron scattering length a_2, and van Oers-Seagrave phase shifts at energies below the deuteron-breakup threshold. With these parameters, the pole approximation of the effective-range function provides a highly precise description (the relative error does not exceed 1%) of the doublet phase shift for neutron-deuteron scattering at energies below the deuteron-breakup threshold. Physical properties of the triton in the ground (T) and virtual (v) states are calculated. The results are B_v = 0.608 MeV for the virtual-level position and C_T² = 2.866 and C_v² = 0.0586 for the dimensionless asymptotic normalization constants. It is shown that, in the Whiting-Fuda approximation, the values of physical quantities characterizing the triton virtual state are determined to high precision by one parameter, the doublet neutron-deuteron scattering length a_2. The effective triton radii in the ground (ρ_T = 1.711 fm) and virtual (ρ_v = 74.184 fm) states are calculated for the first time.

  11. Approximate option pricing

    SciTech Connect

    Chalasani, P.; Saias, I.; Jha, S.

    1996-04-08

    As increasingly large volumes of sophisticated options (called derivative securities) are traded in world financial markets, determining a fair price for these options has become an important and difficult computational problem. Many valuation codes use the binomial pricing model, in which the stock price is driven by a random walk. In this model, the value of an n-period option on a stock is the expected time-discounted value of the future cash flow on an n-period stock price path. Path-dependent options are particularly difficult to value since the future cash flow depends on the entire stock price path rather than on just the final stock price. Currently such options are approximately priced by Monte Carlo methods with error bounds that hold only with high probability and which are reduced by increasing the number of simulation runs. In this paper the authors show that pricing an arbitrary path-dependent option is #P-hard. They show that certain types of path-dependent options can be valued exactly in polynomial time. Asian options are path-dependent options that are particularly hard to price, and for these they design deterministic polynomial-time approximate algorithms. They show that the value of a perpetual American put option (which can be computed in constant time) is in many cases a good approximation to the value of an otherwise identical n-period American put option. In contrast to Monte Carlo methods, the algorithms have guaranteed error bounds that are polynomially small (and in some cases exponentially small) in the maturity n. For the error analysis they derive large-deviation results for random walks that may be of independent interest.
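
    For reference, the binomial pricing model the abstract builds on can be sketched in a few lines. The Python below (parameter values are arbitrary) prices a plain path-independent European put under Cox-Ross-Rubinstein parameters; the paper itself concerns the much harder path-dependent and American cases.

        import math

        def binomial_put(S0, K, r, sigma, T, n):
            # n-period binomial model: the option value is the risk-neutral
            # expected, time-discounted payoff over stock price paths.  For a
            # European put only the final price matters, so the paths collapse
            # to the binomial distribution over the number of up-moves j.
            dt = T / n
            u = math.exp(sigma * math.sqrt(dt))
            d = 1.0 / u
            p = (math.exp(r * dt) - d) / (u - d)     # risk-neutral probability
            value = sum(
                math.comb(n, j) * p**j * (1 - p)**(n - j)
                * max(K - S0 * u**j * d**(n - j), 0.0)
                for j in range(n + 1)
            )
            return math.exp(-r * T) * value

        print(binomial_put(S0=100, K=100, r=0.05, sigma=0.2, T=1.0, n=500))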

  12. An Assessment of Density Functional Methods for Potential Energy Curves of Nonbonded Interactions: The XYG3 and B97-D Approximations.

    PubMed

    Vázquez-Mayagoitia, Álvaro; Sherrill, C David; Aprà, Edoardo; Sumpter, Bobby G

    2010-03-01

    A recently proposed double-hybrid functional called XYG3 and a semilocal GGA functional (B97-D) with a semiempirical correction for van der Waals interactions have been applied to study the potential energy curves along the dissociation coordinates of weakly bound pairs of molecules governed by London dispersion and induced dipole forces. Molecules treated in this work were the parallel sandwich, T-shaped, and parallel-displaced benzene dimer, (C6H6)2; hydrogen sulfide and benzene, H2S·C6H6; methane and benzene, CH4·C6H6; the methane dimer, (CH4)2; and the pyridine dimer, (C5H5N)2. We compared the potential energy curves of these functionals with previously published benchmarks at the coupled cluster singles, doubles, and perturbative triples [CCSD(T)] complete-basis-set limit. Both functionals, XYG3 and B97-D, exhibited very good performance, reproducing accurate energies for equilibrium distances and smooth behavior along the dissociation coordinate. Overall, we found agreement within a few tenths of one kcal mol⁻¹ with the CCSD(T) results across the potential energy curves.

  13. Location and energy of interstitial hydrogen in the 1/1 approximant W-TiZrNi of the icosahedral TiZrNi quasicrystal: Rietveld refinement of x-ray and neutron diffraction data and density-functional calculations

    SciTech Connect

    Hennig, R. G.; Majzoub, E. H.; Kelton, K. F.

    2006-05-01

    We present a determination of hydrogen sites in the 1/1 approximant structure of the icosahedral TiZrNi quasicrystal. A Rietveld refinement of neutron and x-ray diffraction data determines the locations of interstitial hydrogen atoms. Density-functional methods calculate the energy of hydrogen on all possible interstitial sites. The Rietveld refinement shows that the hydrogen atoms are preferentially located in the two lowest-energy sites. The filling of the remaining hydrogen sites is dominated by the repulsive hydrogen-hydrogen interaction at short distances.

  14. Effect Sizes, Confidence Intervals, and Confidence Intervals for Effect Sizes

    ERIC Educational Resources Information Center

    Thompson, Bruce

    2007-01-01

    The present article provides a primer on (a) effect sizes, (b) confidence intervals, and (c) confidence intervals for effect sizes. Additionally, various admonitions for reformed statistical practice are presented. For example, a very important implication of the realization that there are dozens of effect size statistics is that "authors must…

  15. Approximate recalculation of the α(Zα)⁵ contribution to the self-energy effect on hydrogenic states with a multipole expansion

    SciTech Connect

    Zamastil, J.

    2013-01-15

    A contribution of virtual electron states with large wave numbers to the self-energy of an electron bound in the weak Coulomb field is analyzed in the context of the evaluation method suggested in the previous paper. The contribution is of the order α(Zα)⁵. The same value for this contribution is found here as the one found in the previous calculations using different evaluation methods. When we add the remaining terms of the order α(Zα)⁵ to the calculation of the self-energy effect in hydrogen-like ions presented in the previous paper we find a very good agreement with numerical evaluations. The relative difference between present and numerical evaluations ranges from 2 parts in 10⁶ for Z = 1 up to 6 parts in 10⁴ for Z = 10. Highlights: • The complete terms of the order α(Zα)⁵ are identified. • The accuracy of the result for the ground state of hydrogen is 2 ppm. • The separation into the low and high energy regions and their matching is avoided.

  16. Exclusive experiment on nuclei with backward emitted particles by electron-nucleus collision in the ~10 GeV energy range

    SciTech Connect

    Saito, T.; Takagi, F.

    1994-04-01

    Since evidence of a strong cross section for proton-nucleus backward scattering was presented in the early 1970s, this phenomenon has attracted interest for its possible relation to short-range correlations between nucleons and to high-momentum components of the nuclear wave function. In the analysis of the first experiment on protons from the carbon target under bombardment by 1.5-5.7 GeV protons, indications were found of an effect analogous to scaling in high-energy interactions of elementary particles with protons. Moreover it was found that the function f(p²)/σ_tot, which describes the spectra of the protons and deuterons emitted backward from nuclei in the laboratory system, does not depend on the energy and the type of the incident particle or on the atomic number of the target nucleus. In the following experiments the spectra of the protons emitted from the nuclei C, Al, Ti, Cu, Cd and Pb were measured in inclusive reactions with incident negative pions (1.55-6.2 GeV/c) and protons (6.2-9.0 GeV/c). The cross section f is described by f = E/p² d²σ/dp dΩ = C exp(−Bp²), where p is the momentum of the hadron. The function f depends linearly on the atomic weight A of the target nuclei. The slope parameter B is independent of the target nucleus and of the sort and energy of the bombarding particles. The invariant cross section ρ = f/σ_tot is also described by an exponential A₀ exp(−A₁p²), where ρ becomes independent of energy at initial particle energies ≥ 1.5 GeV for the C nucleus and ≥ 5 GeV for the heaviest of the investigated nuclei, Pb.

  17. Exponential Approximations Using Fourier Series Partial Sums

    NASA Technical Reports Server (NTRS)

    Banerjee, Nana S.; Geer, James F.

    1997-01-01

    The problem of accurately reconstructing a piece-wise smooth, 2π-periodic function f and its first few derivatives, given only a truncated Fourier series representation of f, is studied and solved. The reconstruction process is divided into two steps. In the first step, the first 2N + 1 Fourier coefficients of f are used to approximate the locations and magnitudes of the discontinuities in f and its first M derivatives. This is accomplished by first finding initial estimates of these quantities based on certain properties of Gibbs phenomenon, and then refining these estimates by fitting the asymptotic form of the Fourier coefficients to the given coefficients using a least-squares approach. It is conjectured that the locations of the singularities are approximated to within O(N^(−M−2)), and the associated jump of the kth derivative of f is approximated to within O(N^(−M−1+k)), as N approaches infinity, and the method is robust. These estimates are then used with a class of singular basis functions, which have certain 'built-in' singularities, to construct a new sequence of approximations to f. Each of these new approximations is the sum of a piecewise smooth function and a new Fourier series partial sum. When N is proportional to M, it is shown that these new approximations, and their derivatives, converge exponentially in the maximum norm to f, and its corresponding derivatives, except in the union of a finite number of small open intervals containing the points of singularity of f. The total measure of these intervals decreases exponentially to zero as M approaches infinity. The technique is illustrated with several examples.

  18. Approximate strip exchanging.

    PubMed

    Roy, Swapnoneel; Thakur, Ashok Kumar

    2008-01-01

    Genome rearrangements have been modelled by a variety of primitives such as reversals, transpositions, block moves and block interchanges. We consider one such primitive, strip exchanges. A strip exchange interchanges the positions of two chosen strips of a permutation so that they merge with other strips, and the strip exchange problem is to sort a permutation using the minimum number of strip exchanges. We present here the first non-trivial 2-approximation algorithm for this problem. We also observe that sorting by strip exchanges is fixed-parameter tractable. Lastly we discuss the application of strip exchanges in a different area, Optical Character Recognition (OCR), with an example.

  19. Hierarchical Approximate Bayesian Computation

    PubMed Central

    Turner, Brandon M.; Van Zandt, Trisha

    2013-01-01

    Approximate Bayesian computation (ABC) is a powerful technique for estimating the posterior distribution of a model’s parameters. It is especially important when the model to be fit has no explicit likelihood function, which happens for computational (or simulation-based) models such as those that are popular in cognitive neuroscience and other areas in psychology. However, ABC is usually applied only to models with few parameters. Extending ABC to hierarchical models has been difficult because high-dimensional hierarchical models add computational complexity that conventional ABC cannot accommodate. In this paper we summarize some current approaches for performing hierarchical ABC and introduce a new algorithm called Gibbs ABC. This new algorithm incorporates well-known Bayesian techniques to improve the accuracy and efficiency of the ABC approach for estimation of hierarchical models. We then use the Gibbs ABC algorithm to estimate the parameters of two models of signal detection, one with and one without a tractable likelihood function. PMID:24297436
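
    The core ABC idea is easy to state in code. The sketch below (Python; a Gaussian simulator with a uniform prior stands in for a model whose likelihood is intractable) is plain rejection ABC, not the hierarchical Gibbs ABC algorithm the paper introduces: parameter draws are kept when simulated data match the observed summary statistic to within a tolerance.

        import numpy as np

        rng = np.random.default_rng(0)
        observed = rng.normal(loc=3.0, scale=1.0, size=100)   # the "data"

        def simulate(mu):
            # Simulator playing the role of a model with no explicit likelihood.
            return rng.normal(loc=mu, scale=1.0, size=100)

        eps, posterior = 0.1, []
        for _ in range(20_000):
            mu = rng.uniform(-10, 10)                 # draw from the prior
            # Keep mu when the simulated summary statistic (sample mean)
            # falls within eps of the observed one.
            if abs(simulate(mu).mean() - observed.mean()) < eps:
                posterior.append(mu)

        print(np.mean(posterior), np.std(posterior))  # approximate posterior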

  1. Explorations in Statistics: Confidence Intervals

    ERIC Educational Resources Information Center

    Curran-Everett, Douglas

    2009-01-01

    Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This third installment of "Explorations in Statistics" investigates confidence intervals. A confidence interval is a range that we expect, with some level of confidence, to include the true value of a population parameter…

  2. Teaching Confidence Intervals Using Simulation

    ERIC Educational Resources Information Center

    Hagtvedt, Reidar; Jones, Gregory Todd; Jones, Kari

    2008-01-01

    Confidence intervals are difficult to teach, in part because most students appear to believe they understand how to interpret them intuitively. They rarely do. To help them abandon their misconception and achieve understanding, we have developed a simulation tool that encourages experimentation with multiple confidence intervals derived from the…

  3. Automatic Error Analysis Using Intervals

    ERIC Educational Resources Information Center

    Rothwell, E. J.; Cloud, M. J.

    2012-01-01

    A technique for automatic error analysis using interval mathematics is introduced. A comparison to standard error propagation methods shows that in cases involving complicated formulas, the interval approach gives comparable error estimates with much less effort. Several examples are considered, and numerical errors are computed using the INTLAB…
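
    The flavor of the approach is captured by a minimal interval class (Python; INTLAB itself is a MATLAB toolbox, and a rigorous implementation would also need outward rounding, omitted here): each arithmetic operation returns bounds guaranteed to contain the true result, so measurement error propagates through a formula automatically.

        from dataclasses import dataclass

        @dataclass
        class Interval:
            lo: float
            hi: float
            def __add__(self, o): return Interval(self.lo + o.lo, self.hi + o.hi)
            def __sub__(self, o): return Interval(self.lo - o.hi, self.hi - o.lo)
            def __mul__(self, o):
                p = (self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi)
                return Interval(min(p), max(p))

        # Measured values with uncertainty propagate through P = I^2 * R;
        # the width of P is an automatic error bound on the result.
        R = Interval(99.0, 101.0)    # resistance, ohms (+/- 1 ohm)
        I = Interval(1.98, 2.02)     # current, amperes (+/- 0.02 A)
        P = I * I * R
        print(P.lo, P.hi)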

  4. VARIABLE TIME-INTERVAL GENERATOR

    DOEpatents

    Gross, J.E.

    1959-10-31

    This patent relates to a pulse generator and more particularly to a time interval generator wherein the time interval between pulses is precisely determined. The variable time generator comprises two oscillators with one having a variable frequency output and the other a fixed frequency output. A frequency divider is connected to the variable oscillator for dividing its frequency by a selected factor and a counter is used for counting the periods of the fixed oscillator occurring during a cycle of the divided frequency of the variable oscillator. This defines the period of the variable oscillator in terms of that of the fixed oscillator. A circuit is provided for selecting as a time interval a predetermined number of periods of the variable oscillator. The output of the generator consists of a first pulse produced by a trigger circuit at the start of the time interval and a second pulse marking the end of the time interval produced by the same trigger circuit.

  5. Approximate Bayesian multibody tracking.

    PubMed

    Lanz, Oswald

    2006-09-01

    Visual tracking of multiple targets is a challenging problem, especially when efficiency is an issue. Occlusions, if not properly handled, are a major source of failure. Solutions supporting principled occlusion reasoning have been proposed but are as yet impractical for online applications. This paper presents a new solution which effectively manages the trade-off between reliable modeling and computational efficiency. The Hybrid Joint-Separable (HJS) filter is derived from a joint Bayesian formulation of the problem, and shown to be efficient while optimal in terms of compact belief representation. Computational efficiency is achieved by employing a Markov random field approximation to joint dynamics and an incremental algorithm for posterior update with an appearance likelihood that implements a physically-based model of the occlusion process. A particle filter implementation is proposed which achieves accurate tracking during partial occlusions, while in cases of complete occlusion, tracking hypotheses are bound to estimated occlusion volumes. Experiments show that the proposed algorithm is efficient, robust, and able to resolve long-term occlusions between targets with identical appearance. PMID:16929730

  6. Subjective probability intervals: how to reduce overconfidence by interval evaluation.

    PubMed

    Winman, Anders; Hansson, Patrik; Juslin, Peter

    2004-11-01

    Format dependence implies that assessment of the same subjective probability distribution produces different conclusions about over- or underconfidence depending on the assessment format. In 2 experiments, the authors demonstrate that the overconfidence bias that occurs when participants produce intervals for an uncertain quantity is almost abolished when they evaluate the probability that the same intervals include the quantity. The authors successfully apply a method for adaptive adjustment of probability intervals as a debiasing tool and discuss a tentative explanation in terms of a naive sampling model. According to this view, people report their experiences accurately, but they are naive in that they treat both sample proportion and sample dispersion as unbiased estimators, yielding small bias in probability evaluation but strong bias in interval production. PMID:15521796

  7. Three Approximate Entropies

    NASA Astrophysics Data System (ADS)

    Lubkin, Elihu

    2002-04-01

    In 1993 (E. & T. Lubkin, Int. J. Theor. Phys. 32, 993 (1993)) we gave the exact mean trace of the squared density matrix P for 3 models of an n-dimensional part of an nK-dimensional pure state. Models named: random nK ket (Haar); pure-pure driven by random Hamiltonian (Gauss); Gauss with n,K coupling reset small (weak). Neglecting higher powers of P gives the approximation ln(n) − deficit, with deficit = (n⟨P⟩ − 1)/2, which yields deficits, Haar: (n(n+K)/(nK+1) − 1)/2 = (n − 1/n − 1/K + 1/n²K)/2K + O(f[n]/K³); Gauss: (n/2)((n+K)/(nK+1) + 2(nK+1−n−K)/[nK(nK+1)(nK+3)]) − 1/2 = (n − 1/n − 1/K + 2/nK − 1/n²K)/2K + O(f[n]/K³); weak: (n/2)(2(K+n)/((K+1)(n+1))) − 1/2 = (n/(n+1))(1 + (n−1)/K − (n−1)/K² + O(f[n]/K³)) − 1/2 [unreliable]. These would stay poor even as K → ∞ unless deficit << 1 bit. Haar and Gauss come out good, but weak has too large a deficit. Though many authors (beginning with Don Page (D. N. Page, PRL 71, 1291 (1993))) have found the exact result for Haar, I haven't yet seen exact results for Gauss or for weak.

  8. Approximation by hinge functions

    SciTech Connect

    Faber, V.

    1997-05-01

    Breiman has defined "hinge functions" for use as basis functions in least squares approximations to data. A hinge function is the max (or min) of two linear functions. In this paper, the author assumes the existence of a smooth function f(x) and a set of samples of the form (x, f(x)) drawn from a probability distribution ρ(x). The author hopes to find the best-fitting hinge function h(x) in the least squares sense. There are two problems with this plan. First, Breiman has suggested an algorithm to perform this fit. The author shows that this algorithm is not robust and also shows how to create examples on which the algorithm diverges. Second, if the author tries to use the data to minimize the fit in the usual discrete least squares sense, the functional that must be minimized is continuous in the variables but has a derivative which jumps at the data. This paper takes a different approach, an example of a method that the author has developed called "Monte Carlo Regression". (A paper on the general theory is in preparation.) The author shows that since the function f is continuous, the analytic form of the least squares equation is continuously differentiable. A local minimum is solved for by using Newton's method, where the entries of the Hessian are estimated directly from the data by Monte Carlo. The algorithm has the desirable properties that it is quadratically convergent from any starting guess sufficiently close to a solution and that each iteration requires only a linear system solve.
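
    A sketch of one plausible reading of the alternating fit idea (Python; the synthetic data and the exact update rule are illustrative, not necessarily Breiman's precise algorithm): each point is assigned to the hinge side whose line is currently active there, the two lines are refit by least squares, and the assignment is updated. As the paper emphasizes for Breiman's scheme, such iterations need not converge.

        import numpy as np

        def fit_hinge(x, y, iters=50):
            # Alternate between refitting the two lines of
            # h(x) = max(a1 + b1*x, a2 + b2*x) and reassigning points to
            # whichever line is active there.  A heuristic, not a robust method.
            left = x <= np.median(x)
            c1 = c2 = None
            for _ in range(iters):
                if left.sum() < 2 or (~left).sum() < 2:
                    break                      # degenerate split; give up
                c1 = np.polyfit(x[left], y[left], 1)
                c2 = np.polyfit(x[~left], y[~left], 1)
                new_left = np.polyval(c1, x) >= np.polyval(c2, x)
                if (new_left == left).all():
                    break                      # assignment has stabilized
                left = new_left
            return c1, c2

        rng = np.random.default_rng(2)
        x = rng.uniform(-2, 2, 200)
        y = np.maximum(1 + 0.5 * x, -1 + 2 * x) + rng.normal(scale=0.05, size=200)
        print(fit_hinge(x, y))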

  9. Valence excitation energies of alkenes, carbonyl compounds, and azabenzenes by time-dependent density functional theory: Linear response of the ground state compared to collinear and noncollinear spin-flip TDDFT with the Tamm-Dancoff approximation

    NASA Astrophysics Data System (ADS)

    Isegawa, Miho; Truhlar, Donald G.

    2013-04-01

    Time-dependent density functional theory (TDDFT) holds great promise for studying photochemistry because of its affordable cost for large systems and for repeated calculations as required for direct dynamics. The chief obstacle is uncertain accuracy. There have been many validation studies, but there are also many formulations, and there have been few studies where several formulations were applied systematically to the same problems. Another issue, when TDDFT is applied with only a single exchange-correlation functional, is that errors in the functional may mask successes or failures of the formulation. Here, to try to sort out some of the issues, we apply eight formulations of adiabatic TDDFT to the first valence excitations of ten molecules with 18 density functionals of diverse types. The formulations examined are linear response from the ground state (LR-TDDFT), linear response from the ground state with the Tamm-Dancoff approximation (TDDFT-TDA), the original collinear spin-flip approximation with the Tamm-Dancoff (TD) approximation (SF1-TDDFT-TDA), the original noncollinear spin-flip approximation with the TDA approximation (SF1-NC-TDDFT-TDA), combined self-consistent-field (SCF) and collinear spin-flip calculations in the original spin-projected form (SF2-TDDFT-TDA) or non-spin-projected (NSF2-TDDFT-TDA), and combined SCF and noncollinear spin-flip calculations (SF2-NC-TDDFT-TDA and NSF2-NC-TDDFT-TDA). Comparing LR-TDDFT to TDDFT-TDA, we observed that the excitation energy is raised by the TDA; this brings the excitation energies underestimated by full linear response closer to experiment, but sometimes it makes the results worse. For ethylene and butadiene, the excitation energies are underestimated by LR-TDDFT, and the error becomes smaller upon making the TDA. Neither SF1-TDDFT-TDA nor SF2-TDDFT-TDA provides a lower mean unsigned error than LR-TDDFT or TDDFT-TDA. The comparison between collinear and noncollinear kernels shows that the noncollinear kernel

  10. TIME-INTERVAL MEASURING DEVICE

    DOEpatents

    Gross, J.E.

    1958-04-15

    An electronic device for measuring the time interval between two control pulses is presented. The device incorporates part of a previous approach to time measurement, in that pulses from a constant-frequency oscillator are counted during the interval between the control pulses. To reduce the possible error in counting caused by the operation of the counter gating circuit at various points in the pulse cycle, the described device provides means for successively delaying the pulses by a fraction of the pulse period so that a final delay of one period is obtained, and means for counting the pulses before and after each stage of delay during the time interval, whereby a plurality of totals is obtained which may be averaged and multiplied by the pulse period to obtain an accurate time-interval measurement.
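
    The averaging trick can be illustrated numerically. In the Python sketch below (a numerical idealization; the clock period, delay count, and interval are invented), a single gated counter measures the interval only to within one clock period, but counting again after each of N fractional-period delays and averaging the totals recovers the interval to roughly period/N.

        import numpy as np

        PERIOD = 100.0     # ns, constant-frequency oscillator
        N_DELAYS = 8       # successive delays of PERIOD / N_DELAYS each

        def count_pulses(interval_ns, phase_ns):
            # Number of clock edges inside the gated window: the basic
            # measurement, quantized to +/- 1 count.
            return (np.floor((phase_ns + interval_ns) / PERIOD)
                    - np.floor(phase_ns / PERIOD))

        rng = np.random.default_rng(3)
        true_interval = 1234.0                 # ns, the quantity to measure
        phase = rng.uniform(0, PERIOD)         # unknown clock alignment

        totals = [count_pulses(true_interval, phase + k * PERIOD / N_DELAYS)
                  for k in range(N_DELAYS)]
        print(np.mean(totals) * PERIOD)        # resolution ~ PERIOD / N_DELAYS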

  11. Simple Interval Timers for Microcomputers.

    ERIC Educational Resources Information Center

    McInerney, M.; Burgess, G.

    1985-01-01

    Discusses simple interval timers for microcomputers, including (1) the Jiffy clock; (2) CPU count timers; (3) screen count timers; (4) light pen timers; and (5) chip timers. Also examines some of the general characteristics of all types of timers. (JN)

  12. Multiple interval mapping for quantitative trait loci.

    PubMed Central

    Kao, C H; Zeng, Z B; Teasdale, R D

    1999-01-01

    A new statistical method for mapping quantitative trait loci (QTL), called multiple interval mapping (MIM), is presented. It uses multiple marker intervals simultaneously to fit multiple putative QTL directly in the model for mapping QTL. The MIM model is based on Cockerham's model for interpreting genetic parameters and the method of maximum likelihood for estimating genetic parameters. With the MIM approach, the precision and power of QTL mapping could be improved. Also, epistasis between QTL, genotypic values of individuals, and heritabilities of quantitative traits can be readily estimated and analyzed. Using the MIM model, a stepwise selection procedure with likelihood ratio test statistic as a criterion is proposed to identify QTL. This MIM method was applied to a mapping data set of radiata pine on three traits: brown cone number, tree diameter, and branch quality scores. Based on the MIM result, seven, six, and five QTL were detected for the three traits, respectively. The detected QTL individually contributed from approximately 1 to 27% of the total genetic variation. Significant epistasis between four pairs of QTL in two traits was detected, and the four pairs of QTL contributed approximately 10.38 and 14.14% of the total genetic variation. The asymptotic variances of QTL positions and effects were also provided to construct the confidence intervals. The estimated heritabilities were 0.5606, 0.5226, and 0.3630 for the three traits, respectively. With the estimated QTL effects and positions, the best strategy of marker-assisted selection for trait improvement for a specific purpose and requirement can be explored. The MIM FORTRAN program is available on the World Wide Web (http://www.stat.sinica.edu.tw/chkao/). PMID:10388834

  13. van der Waals Interaction Energies of Small Fragments of P, As, Sb, S, Se, and Te: Comparison of Complete Basis Set Limit CCSD(T) and DFT with Approximate Dispersion.

    PubMed

    Brndiar, Ján; Štich, Ivan

    2012-07-10

    Interaction energies of small model van der Waals fragments of group VA (P, As, Sb) and group VIA (S, Se, Te) are calculated using the complete basis set CCSD(T) method and compared to density functional results with approximate treatment of dispersion interaction using vdW-DF- and DFT-D-type theories. These simple systems show a surprising diversity of electronic properties, ranging from more "metallic" to more "insulator"-like, a property which needs to be captured by the approximate methods. While none of the standard approximate DFT theories provides an entirely satisfactory description of all the systems, we identify the most reliable approaches of each type. In addition, we show that the results can be further tuned to chemical accuracy. In vdW-DF theory, guided by physical insights and the availability of quasi-exact CCSD(T) results, we supply the missing parts of correlation by matching an appropriate hybrid/semilocal exchange-correlation functional to describe short-/medium-range correlations accurately. In DFT-D-type theories, we reparametrize the empirical dispersion term. Since such an accurate treatment requires benchmark calculations, which are typically feasible only for a finite cluster, we argue that the cluster-based model of the exchange-correlation hole is transferable also to extended systems with vdW dispersion interactions.

  14. An approximation based global optimization strategy for structural synthesis

    NASA Technical Reports Server (NTRS)

    Sepulveda, A. E.; Schmit, L. A.

    1991-01-01

    A global optimization strategy for structural synthesis based on approximation concepts is presented. The methodology involves the solution of a sequence of highly accurate approximate problems using a global optimization algorithm. The global optimization algorithm implemented consists of a branch and bound strategy based on the interval evaluation of the objective function and constraint functions, combined with a local feasible directions algorithm. The approximate design optimization problems are constructed using first order approximations of selected intermediate response quantities in terms of intermediate design variables. Some numerical results for example problems are presented to illustrate the efficacy of the design procedure set forth.

  15. The Role of Higher Harmonics In Musical Interval Perception

    NASA Astrophysics Data System (ADS)

    Krantz, Richard; Douthett, Jack

    2011-10-01

    Using an alternative parameterization of the roughness curve, we make direct use of critical band results to investigate the role of higher harmonics in the perception of tonal consonance. We scale the spectral amplitudes in the complex home tone and complex interval tone to simulate acoustic signals of constant energy. Our analysis reveals that even with a relatively small addition of higher harmonics the perfect fifth emerges as a consonant interval, with more of the musically important just intervals emerging as consonant as more and more energy is shifted into the higher frequencies.

  16. Partitioned-Interval Quantum Optical Communications Receiver

    NASA Technical Reports Server (NTRS)

    Vilnrotter, Victor A.

    2013-01-01

    The proposed quantum receiver in this innovation partitions each binary signal interval into two unequal segments: a short "pre-measurement" segment in the beginning of the symbol interval used to make an initial guess with better probability than 50/50 guessing, and a much longer segment used to make the high-sensitivity signal detection via field-cancellation and photon-counting detection. It was found that by assigning as little as 10% of the total signal energy to the pre-measurement segment, the initial 50/50 guess can be improved to about 70/30, using the best available measurements such as classical coherent or "optimized Kennedy" detection.

  17. IONIS: Approximate atomic photoionization intensities

    NASA Astrophysics Data System (ADS)

    Heinäsmäki, Sami

    2012-02-01

    A program to compute relative atomic photoionization cross sections is presented. The code applies the output of the multiconfiguration Dirac-Fock method for atoms in the single active electron scheme, by computing the overlap of the bound electron states in the initial and final states. The contribution from the single-particle ionization matrix elements is assumed to be the same for each final state. This method gives rather accurate relative ionization probabilities provided the single-electron ionization matrix elements do not depend strongly on energy in the region considered. The method is especially suited for open shell atoms where electronic correlation in the ionic states is large.

    Program summary
    Program title: IONIS
    Catalogue identifier: AEKK_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEKK_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 1149
    No. of bytes in distributed program, including test data, etc.: 12 877
    Distribution format: tar.gz
    Programming language: Fortran 95
    Computer: Workstations
    Operating system: GNU/Linux, Unix
    Classification: 2.2, 2.5
    Nature of problem: Photoionization intensities for atoms.
    Solution method: The code applies the output of the multiconfiguration Dirac-Fock codes Grasp92 [1] or Grasp2K [2] to compute approximate photoionization intensities. The intensity is computed within the one-electron transition approximation and by assuming that the sum of the single-particle ionization probabilities is the same for all final ionic states.
    Restrictions: The program gives nonzero intensities for those transitions where only one electron is removed from the initial configuration(s). Shake-type many-electron transitions are not computed. The ionized shell must be closed in the initial state.
    Running time: Few seconds for a

  18. Subjective Probability Intervals: How to Reduce Overconfidence by Interval Evaluation

    ERIC Educational Resources Information Center

    Winman, Anders; Hansson, Patrik; Juslin, Peter

    2004-01-01

    Format dependence implies that assessment of the same subjective probability distribution produces different conclusions about over- or underconfidence depending on the assessment format. In 2 experiments, the authors demonstrate that the overconfidence bias that occurs when participants produce intervals for an uncertain quantity is almost…

  19. Generalized Confidence Intervals and Fiducial Intervals for Some Epidemiological Measures

    PubMed Central

    Bebu, Ionut; Luta, George; Mathew, Thomas; Agan, Brian K.

    2016-01-01

    For binary outcome data from epidemiological studies, this article investigates the interval estimation of several measures of interest in the absence or presence of categorical covariates. When covariates are present, the logistic regression model as well as the log-binomial model are investigated. The measures considered include the common odds ratio (OR) from several studies, the number needed to treat (NNT), and the prevalence ratio. For each parameter, confidence intervals are constructed using the concepts of generalized pivotal quantities and fiducial quantities. Numerical results show that the confidence intervals so obtained exhibit satisfactory performance in terms of maintaining the coverage probabilities even when the sample sizes are not large. An appealing feature of the proposed solutions is that they are not based on maximization of the likelihood, and hence are free from convergence issues associated with the numerical calculation of the maximum likelihood estimators, especially in the context of the log-binomial model. The results are illustrated with a number of examples. The overall conclusion is that the proposed methodologies based on generalized pivotal quantities and fiducial quantities provide an accurate and unified approach for the interval estimation of the various epidemiological measures in the context of binary outcome data with or without covariates. PMID:27322305

  20. High resolution time interval meter

    DOEpatents

    Martin, A.D.

    1986-05-09

    Method and apparatus are provided for measuring the time interval between two events to a higher resolution than is reliably available from conventional circuits and components. An internal clock pulse is provided at a frequency compatible with conventional component operating frequencies for reliable operation. Lumped-constant delay circuits are provided for generating outputs at delay intervals corresponding to the desired high resolution. An initiating START pulse is input to generate first high resolution data. A terminating STOP pulse is input to generate second high resolution data. Internal counters count at the low-frequency internal clock rate between the START and STOP pulses. The first and second high resolution data are logically combined to directly provide high resolution data to one counter and correct the count in the low resolution counter to obtain a high resolution time interval measurement.

  1. Updating representations of temporal intervals.

    PubMed

    Danckert, James; Anderson, Britt

    2015-12-01

    Effectively engaging with the world depends on accurate representations of the regularities that make up that world-what we call mental models. The success of any mental model depends on the ability to adapt to changes-to 'update' the model. In prior work, we have shown that damage to the right hemisphere of the brain impairs the ability to update mental models across a range of tasks. Given the disparate nature of the tasks we have employed in this prior work (i.e. statistical learning, language acquisition, position priming, perceptual ambiguity, strategic game play), we propose that a cognitive module important for updating mental representations should be generic, in the sense that it is invoked across multiple cognitive and perceptual domains. To date, the majority of our tasks have been visual in nature. Given the ubiquity and import of temporal information in sensory experience, we examined the ability to build and update mental models of time. We had healthy individuals complete a temporal prediction task in which intervals were initially drawn from one temporal range before an unannounced switch to a different range of intervals. Separate groups had the second range of intervals switch to one that contained either longer or shorter intervals than the first range. Both groups showed significant positive correlations between perceptual and prediction accuracy. While each group updated mental models of temporal intervals, those exposed to shorter intervals did so more efficiently. Our results support the notion of generic capacity to update regularities in the environment-in this instance based on temporal information. The task developed here is well suited to investigations in neurological patients and in neuroimaging settings.

  2. A new wavelet-based thin plate element using B-spline wavelet on the interval

    NASA Astrophysics Data System (ADS)

    Jiawei, Xiang; Xuefeng, Chen; Zhengjia, He; Yinghong, Zhang

    2008-01-01

    By combining wavelet theory from mathematics with the variational principle of the finite element method, a class of wavelet-based plate elements is constructed. In the construction of the wavelet-based plate element, the element displacement field, represented by the coefficients of wavelet expansions in wavelet space, is transformed into physical degrees of freedom in finite element space via the corresponding two-dimensional C1-type transformation matrix. Then, based on the generalized potential energy functional of thin plate bending and vibration problems, the scaling functions of the B-spline wavelet on the interval (BSWI) at different scales are employed directly to form the multi-scale finite element approximation basis so as to construct the BSWI plate element via the variational principle. The BSWI plate element combines the accuracy of B-spline function approximation with the advantages of various wavelet-based elements for structural analysis. Some static and dynamic numerical examples are studied to demonstrate the performance of the present element.
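
    The role of the transformation matrix can be seen in a one-dimensional sketch (Python/SciPy; a clamped cubic B-spline basis with an arbitrary knot count stands in for the actual BSWI scaling functions): collocating the basis at the Greville abscissae gives an invertible matrix T mapping coefficient space to physical nodal displacements.

        import numpy as np
        from scipy.interpolate import BSpline

        k = 3                                            # cubic scaling functions
        t = np.concatenate([np.zeros(k), np.linspace(0, 1, 8), np.ones(k)])
        n_basis = len(t) - k - 1

        # Greville abscissae: collocation points that keep T invertible.
        nodes = np.array([t[j + 1:j + k + 1].mean() for j in range(n_basis)])

        # Column j of T is basis function j evaluated at the physical nodes,
        # so nodal displacements are u = T @ c for basis coefficients c.
        T = np.column_stack([BSpline(t, np.eye(n_basis)[j], k)(nodes)
                             for j in range(n_basis)])

        u = np.sin(np.pi * nodes)        # example nodal displacement field
        c = np.linalg.solve(T, u)        # back to coefficient (wavelet) space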

  3. Frankenstein's glue: transition functions for approximate solutions

    NASA Astrophysics Data System (ADS)

    Yunes, Nicolás

    2007-09-01

    Approximations are commonly employed to find approximate solutions to the Einstein equations. These solutions, however, are usually only valid in some specific spacetime region. A global solution can be constructed by gluing approximate solutions together, but this procedure is difficult because discontinuities can arise, leading to large violations of the Einstein equations. In this paper, we provide an attempt to formalize this gluing scheme by studying transition functions that join approximate analytic solutions together. In particular, we propose certain sufficient conditions on these functions and prove that these conditions guarantee that the joined solution still satisfies the Einstein equations analytically to the same order as the approximate ones. An example is also provided for a binary system of non-spinning black holes, where the approximate solutions are taken to be given by a post-Newtonian expansion and a perturbed Schwarzschild solution. For this specific case, we show that if the transition functions satisfy the proposed conditions, then the joined solution does not contain any violations to the Einstein equations larger than those already inherent in the approximations. We further show that if these functions violate the proposed conditions, then the matter content of the spacetime is modified by the introduction of a matter shell, whose stress energy tensor depends on derivatives of these functions.
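
    A standard example of such a transition function can be written down explicitly. The sketch below (Python; the glued metrics themselves are omitted, and the bump construction is a textbook choice rather than the paper's specific proposal) builds a C-infinity function that is identically 0 before x0 and identically 1 after x1, so a blend such as g = (1 - f) g_near + f g_far inherits the smoothness of the two approximate solutions.

        import numpy as np

        def transition(x, x0, x1):
            # C-infinity step: 0 for x <= x0, 1 for x >= x1, with every
            # derivative vanishing at both ends.
            s = np.clip((x - x0) / (x1 - x0), 0.0, 1.0)
            def h(u):
                out = np.zeros_like(u)
                pos = u > 0
                out[pos] = np.exp(-1.0 / u[pos])
                return out
            a, b = h(s), h(1.0 - s)
            return a / (a + b)           # denominator is never zero on [0, 1]

        x = np.linspace(-0.5, 1.5, 9)
        print(transition(x, 0.0, 1.0))   # 0 ... smooth rise ... 1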

  5. Semiclassics beyond the diagonal approximation

    NASA Astrophysics Data System (ADS)

    Turek, Marko

    2004-05-01

    The statistical properties of the energy spectrum of classically chaotic closed quantum systems are the central subject of this thesis. It has been conjectured by O. Bohigas, M.-J. Giannoni and C. Schmit that the spectral statistics of chaotic systems is universal and can be described by random-matrix theory. This conjecture has been confirmed in many experiments and numerical studies but a formal proof is still lacking. In this thesis we present a semiclassical evaluation of the spectral form factor which goes beyond M. V. Berry's diagonal approximation. To this end we extend a method developed by M. Sieber and K. Richter for a specific system: the motion of a particle on a two-dimensional surface of constant negative curvature. In particular we prove that these semiclassical methods reproduce the random-matrix theory predictions for the next-to-leading-order correction also for a much wider class of systems, namely non-uniformly hyperbolic systems with f > 2 degrees of freedom. We achieve this result by extending the configuration-space approach of M. Sieber and K. Richter to a canonically invariant phase-space approach.

  6. Validity of the site-averaging approximation for modeling the dissociative chemisorption of H{sub 2} on Cu(111) surface: A quantum dynamics study on two potential energy surfaces

    SciTech Connect

    Liu, Tianhui; Fu, Bina E-mail: zhangdh@dicp.ac.cn; Zhang, Dong H. E-mail: zhangdh@dicp.ac.cn

    2014-11-21

    A new finding on the site-averaging approximation was recently reported for the dissociative chemisorption of HCl/DCl on the Au(111) surface [T. Liu, B. Fu, and D. H. Zhang, J. Chem. Phys. 139, 184705 (2013); T. Liu, B. Fu, and D. H. Zhang, J. Chem. Phys. 140, 144701 (2014)]. Here, in order to investigate the dependence of the new site-averaging approximation on the initial vibrational state of H2 as well as on the potential energy surface (PES) for the dissociative chemisorption of H2 on the Cu(111) surface at normal incidence, we carried out six-dimensional quantum dynamics calculations using the initial-state-selected time-dependent wave packet approach, with H2 initially in its ground vibrational state and its first vibrationally excited state. The corresponding four-dimensional site-specific dissociation probabilities are also calculated with H2 fixed at the bridge, center, and top sites. These calculations are all performed on two different PESs. It is found that the site-averaged dissociation probability over 15 fixed sites obtained from four-dimensional quantum dynamics calculations can accurately reproduce the six-dimensional dissociation probability for H2 (v = 0) and (v = 1) on both PESs.
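    The site-averaging relation being tested can be written schematically as follows (our notation; an unweighted average is assumed for illustration):

```latex
\[
  P_{6\mathrm{D}}(E) \;\approx\; \frac{1}{N}\sum_{i=1}^{N} P_{4\mathrm{D}}^{(i)}(E),
  \qquad N = 15,
\]
% where P_{4D}^{(i)} is the fixed-site dissociation probability with the
% molecular center of mass held over surface site i.
```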

  7. Chemical ordering of Co and Ni in a W-(AlCoNi) crystalline approximant related to Al-Co-Ni decagonal quasicrystals studied by atomic resolution energy-dispersive X-ray spectroscopy

    NASA Astrophysics Data System (ADS)

    Yasuhara, Akira; Hiraga, Kenji

    2015-01-01

    A W-(AlCoNi) crystalline approximant, which is closely related to Al-Co-Ni decagonal quasicrystals, in an Al72.5Co20Ni7.5 alloy has been studied by atomic resolution energy-dispersive X-ray spectroscopy (EDS), in an instrument attached to a spherical aberration (Cs)-corrected scanning transmission electron microscope. In high-resolution EDS maps of the Co and Ni elements, obtained by integrating many sets of EDS data taken from undamaged areas, chemical ordering of Co and Ni is clearly detected. In the structure of the W-(AlCoNi) phase, consisting of arrangements of transition-metal (TM) atoms located at vertices of pentagonal tilings and pentagonal arrangements of mixed sites (MSs) of TM and Al atoms, Co atoms occupy the TM atom positions of the pentagonal tiling and Ni is enriched in part of the pentagonal arrangements of MSs.

  8. Symmetric approximations of the Navier-Stokes equations

    SciTech Connect

    Kobel'kov, G M

    2002-08-31

    A new method for the symmetric approximation of the non-stationary Navier-Stokes equations by a Cauchy-Kovalevskaya-type system is proposed. Properties of the modified problem are studied. In particular, the convergence as ε → 0 of the solutions of the modified problem to the solutions of the original problem on an infinite interval is established.

  9. Symmetric approximations of the Navier-Stokes equations

    NASA Astrophysics Data System (ADS)

    Kobel'kov, G. M.

    2002-08-01

    A new method for the symmetric approximation of the non-stationary Navier-Stokes equations by a Cauchy-Kovalevskaya-type system is proposed. Properties of the modified problem are studied. In particular, the convergence as ε → 0 of the solutions of the modified problem to the solutions of the original problem on an infinite interval is established.

  10. Simultaneous confidence intervals for a steady-state leaky aquifer groundwater flow model

    USGS Publications Warehouse

    Christensen, S.; Cooley, R.L.

    1996-01-01

    Using the optimization method of Vecchia & Cooley (1987), nonlinear Scheffé-type confidence intervals were calculated for the parameters and the simulated heads of a steady-state groundwater flow model covering 450 km² of a leaky aquifer. The nonlinear confidence intervals are compared to corresponding linear intervals. As suggested by the significant nonlinearity of the regression model, linear confidence intervals are often not accurate. The commonly made assumption that widths of linear confidence intervals always underestimate the actual (nonlinear) widths was not correct for the head intervals. Results show that nonlinear effects can cause the nonlinear intervals to be offset from, and either larger or smaller than, the linear approximations. Prior information on some transmissivities helps reduce and stabilize the confidence intervals, with the most notable effects occurring for the parameters on which there is prior information and for head values in parameter zones for which there is prior information on the parameters.

  11. Successive intervals analysis of preference measures in a health status index.

    PubMed Central

    Blischke, W R; Bush, J W; Kaplan, R M

    1975-01-01

    The method of successive intervals, a procedure for obtaining equal intervals from category data, is applied to social preference data for a health status index. Several innovations are employed, including an approximate analysis of variance test for determining whether the intervals are of equal width, a regression model for estimating the width of the end intervals in finite scales, and a transformation to equalize interval widths and estimate item locations on the new scale. A computer program has been developed to process large data sets with a larger number of categories than previous programs. PMID:1219005

  12. Second derivatives for approximate spin projection methods

    SciTech Connect

    Thompson, Lee M.; Hratchian, Hrant P.

    2015-02-07

    The use of broken-symmetry electronic structure methods is required in order to obtain correct behavior of electronically strained open-shell systems, such as transition states, biradicals, and transition metals. This approach often has issues with spin contamination, which can lead to significant errors in predicted energies, geometries, and properties. Approximate projection schemes are able to correct for spin contamination and can often yield improved results. To fully make use of these methods and to carry out exploration of the potential energy surface, it is desirable to develop an efficient second energy derivative theory. In this paper, we formulate the analytical second derivatives for the Yamaguchi approximate projection scheme, building on recent work that has yielded an efficient implementation of the analytical first derivatives.

  13. An improved proximity force approximation for electrostatics

    SciTech Connect

    Fosco, Cesar D.; Lombardo, Fernando C.; Mazzitelli, Francisco D.

    2012-08-15

    A quite straightforward approximation for the electrostatic interaction between two perfectly conducting surfaces suggests itself when the distance between them is much smaller than the characteristic lengths associated with their shapes. Indeed, in the so-called 'proximity force approximation' the electrostatic force is evaluated by first dividing each surface into a set of small flat patches, and then adding up the forces between opposite pairs of patches, the contributions of which are approximated as those of pairs of parallel planes. This approximation has been widely and successfully applied in different contexts, ranging from nuclear physics to Casimir effect calculations. We present here an improvement on this approximation, based on a derivative expansion for the electrostatic energy contained between the surfaces. The results obtained could be useful for discussing the geometric dependence of the electrostatic force, and also as a convenient benchmark for numerical analyses of the tip-sample electrostatic interaction in atomic force microscopes. Highlights: The proximity force approximation (PFA) has been widely used in different areas. The PFA can be improved using a derivative expansion in the shape of the surfaces. We use the improved PFA to compute electrostatic forces between conductors. The results can be used as an analytic benchmark for numerical calculations in AFM. Insight is provided for people who use the PFA to compute nuclear and Casimir forces.
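    In schematic form (our notation), the PFA and its derivative-expansion improvement for two conductors at potential difference V separated by a local gap ψ(x) read:

```latex
\[
  E_{\rm PFA} = \frac{\epsilon_0 V^2}{2}\int d^2x\,\frac{1}{\psi(x)},
  \qquad
  E_{\rm DE} = \frac{\epsilon_0 V^2}{2}\int d^2x\,\frac{1}{\psi(x)}
  \left[\,1 + \beta\,\bigl(\nabla\psi\bigr)^2 + \cdots\right],
\]
% where \beta is a numerical coefficient fixed by the derivative expansion;
% its value is not quoted here and should be taken from the paper.
```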

  14. Aspects of three field approximations: Darwin, frozen, EMPULSE

    SciTech Connect

    Boyd, J.K.; Lee, E.P.; Yu, S.S.

    1985-05-25

    The traditional approach used to study high energy beam propagation relies on the frozen field approximation. A minor modification of the frozen field approximation yields the set of equations applied to the analysis of the hose instability. These models are contrasted with the Darwin field approximation. A statement is made of the Darwin model equations relevant to the analysis of the hose instability.

  15. Hubbard-U corrected Hamiltonians for non-self-consistent random-phase approximation total-energy calculations: A study of ZnS, TiO2, and NiO

    NASA Astrophysics Data System (ADS)

    Patrick, Christopher E.; Thygesen, Kristian S.

    2016-01-01

    In non-self-consistent calculations of the total energy within the random-phase approximation (RPA) for electronic correlation, it is necessary to choose a single-particle Hamiltonian whose solutions are used to construct the electronic density and noninteracting response function. Here we investigate the effect of including a Hubbard-U term in this single-particle Hamiltonian, to better describe the on-site correlation of 3d electrons in the transition metal compounds ZnS, TiO2, and NiO. We find that the RPA lattice constants are essentially independent of U, despite large changes in the underlying electronic structure. We further demonstrate that the non-self-consistent RPA total energies of these materials have minima at nonzero U. Our RPA calculations find the rutile phase of TiO2 to be more stable than anatase independent of U, a result which is consistent with experiments and qualitatively different from that found from calculations employing U-corrected (semi)local functionals. However we also find that the +U term cannot be used to correct the RPA's poor description of the heat of formation of NiO.

  16. Approximating Functions with Exponential Functions

    ERIC Educational Resources Information Center

    Gordon, Sheldon P.

    2005-01-01

    The possibility of approximating a function with a linear combination of exponential functions of the form e^x, e^(2x), ... is considered as a parallel development to the notion of Taylor polynomials, which approximate a function with a linear combination of power function terms. The sinusoidal functions sin x and cos x…

  17. Approximate circuits for increased reliability

    DOEpatents

    Hamlet, Jason R.; Mayo, Jackson R.

    2015-12-22

    Embodiments of the invention describe a Boolean circuit having a voter circuit and a plurality of approximate circuits each based, at least in part, on a reference circuit. The approximate circuits are each to generate one or more output signals based on values of received input signals. The voter circuit is to receive the one or more output signals generated by each of the approximate circuits, and is to output one or more signals corresponding to a majority value of the received signals. At least some of the approximate circuits are to generate an output value different than the reference circuit for one or more input signal values; however, for each possible input signal value, the majority values of the one or more output signals generated by the approximate circuits and received by the voter circuit correspond to output signal result values of the reference circuit.
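    As a minimal sketch of the voting idea (our toy example, not the patent's circuit): three approximate circuits, each allowed to deviate from the reference on a different input, feed a majority voter that reproduces the reference output on every input.

```python
# Toy model of the scheme described above: approximate circuits plus a
# majority voter. The reference function and the particular approximations
# are our assumptions, chosen so the majority is always correct.

def reference(x: int) -> int:
    """Reference Boolean circuit: parity of the two low bits of x."""
    return (x ^ (x >> 1)) & 1

# Each approximation deviates from the reference on exactly one (distinct)
# input, so for every input at least two of the three outputs are correct.
def approx_a(x: int) -> int: return reference(x) ^ (1 if x == 0 else 0)
def approx_b(x: int) -> int: return reference(x) ^ (1 if x == 1 else 0)
def approx_c(x: int) -> int: return reference(x) ^ (1 if x == 2 else 0)

def voter(bits) -> int:
    """Majority vote over an odd number of single-bit outputs."""
    return 1 if sum(bits) > len(bits) // 2 else 0

for x in range(4):
    assert voter([approx_a(x), approx_b(x), approx_c(x)]) == reference(x)
print("majority output matches the reference circuit on all inputs")
```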

  18. Approximate circuits for increased reliability

    DOEpatents

    Hamlet, Jason R.; Mayo, Jackson R.

    2015-08-18

    Embodiments of the invention describe a Boolean circuit having a voter circuit and a plurality of approximate circuits each based, at least in part, on a reference circuit. The approximate circuits are each to generate one or more output signals based on values of received input signals. The voter circuit is to receive the one or more output signals generated by each of the approximate circuits, and is to output one or more signals corresponding to a majority value of the received signals. At least some of the approximate circuits are to generate an output value different than the reference circuit for one or more input signal values; however, for each possible input signal value, the majority values of the one or more output signals generated by the approximate circuits and received by the voter circuit correspond to output signal result values of the reference circuit.

  19. An Event Restriction Interval Theory of Tense

    ERIC Educational Resources Information Center

    Beamer, Brandon Robert

    2012-01-01

    This dissertation presents a novel theory of tense and tense-like constructions. It is named after a key theoretical component of the theory, the event restriction interval. In Event Restriction Interval (ERI) Theory, sentences are semantically evaluated relative to an index which contains two key intervals, the evaluation interval and the event…

  20. Confidence Intervals for Error Rates Observed in Coded Communications Systems

    NASA Astrophysics Data System (ADS)

    Hamkins, J.

    2015-05-01

    We present methods to compute confidence intervals for the codeword error rate (CWER) and bit error rate (BER) of a coded communications link. We review several methods to compute exact and approximate confidence intervals for the CWER, and specifically consider the situation in which the true CWER is so low that only a handful, if any, codeword errors are able to be simulated. In doing so, we answer the question of how long an error-free simulation must be run in order to certify that a given CWER requirement is met with a given level of confidence, and discuss the bias introduced by aborting a simulation after observing the first codeword error. Next, we turn to the lesser studied problem of determining confidence intervals for the BER of coded systems. Since bit errors in systems that use coding or higher-order modulation do not occur independently, blind application of a method that assumes independence leads to inappropriately narrow confidence intervals. We present a new method to compute the confidence interval properly, using the first and second sample moments of the number of bit errors per codeword. This is the first method we know of to compute a confidence interval for the BER of a coded or higher-order modulation system.
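    The error-free-run question has a compact closed form under the standard binomial model (a hedged sketch of the textbook calculation, not necessarily the authors' exact treatment): if N codewords are simulated without error, the exact upper confidence bound on the CWER is 1 − α^(1/N), so certifying a requirement p_max at confidence 1 − α requires N ≥ ln(α)/ln(1 − p_max).

```python
import math

def codewords_needed(p_max: float, confidence: float = 0.95) -> int:
    """Smallest error-free run length N certifying CWER <= p_max,
    from the requirement (1 - p_max)**N <= 1 - confidence."""
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - p_max))

def cwer_upper_bound(n_codewords: int, confidence: float = 0.95) -> float:
    """Exact upper confidence bound on the CWER after an error-free run."""
    return 1.0 - (1.0 - confidence) ** (1.0 / n_codewords)

print(codewords_needed(1e-6))        # about 3 million error-free codewords
print(cwer_upper_bound(3_000_000))   # roughly 1e-6
```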

  1. Approximation methods in gravitational-radiation theory

    NASA Technical Reports Server (NTRS)

    Will, C. M.

    1986-01-01

    The observation of gravitational-radiation damping in the binary pulsar PSR 1913+16 and the ongoing experimental search for gravitational waves of extraterrestrial origin have made the theory of gravitational radiation an active branch of classical general relativity. In calculations of gravitational radiation, approximation methods play a crucial role. Recent developments are summarized in two areas in which approximations are important: (a) the quadrupole approximation, which determines the energy flux and the radiation reaction forces in weak-field, slow-motion, source-within-the-near-zone systems such as the binary pulsar; and (b) the normal modes of oscillation of black holes, where the Wentzel-Kramers-Brillouin approximation gives accurate estimates of the complex frequencies of the modes.
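    For reference, the quadrupole approximation mentioned in (a) gives the radiated energy flux in the standard textbook form (quoted from general relativity convention, not from the article itself):

```latex
\[
  \frac{dE}{dt} = \frac{G}{5c^5}
  \left\langle \dddot{I}_{ij}\,\dddot{I}_{ij} \right\rangle,
  \qquad
  I_{ij} = \int \rho\,\bigl(x_i x_j - \tfrac{1}{3}\delta_{ij} r^2\bigr)\,d^3x,
\]
% with the angle brackets denoting an average over several wave periods and
% I_{ij} the trace-free quadrupole moment of the source.
```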

  2. Chaotic dynamics from interspike intervals.

    PubMed

    Pavlov, A N; Sosnovtseva, O V; Mosekilde, E; Anishchenko, V S

    2001-03-01

    Considering two different mathematical models describing chaotic spiking phenomena, namely, an integrate-and-fire and a threshold-crossing model, we discuss the problem of extracting dynamics from interspike intervals (ISIs) and show that the possibilities of computing the largest Lyapunov exponent (LE) from point processes differ between the two models. We also consider the problem of estimating the second LE and the possibility to diagnose hyperchaotic behavior by processing spike trains. Since the second exponent is quite sensitive to the structure of the ISI series, we investigate the problem of its computation. PMID:11308739

  3. Chaotic dynamics from interspike intervals

    NASA Astrophysics Data System (ADS)

    Pavlov, Alexey N.; Sosnovtseva, Olga V.; Mosekilde, Erik; Anishchenko, Vadim S.

    2001-03-01

    Considering two different mathematical models describing chaotic spiking phenomena, namely, an integrate-and-fire and a threshold-crossing model, we discuss the problem of extracting dynamics from interspike intervals (ISIs) and show that the possibilities of computing the largest Lyapunov exponent (LE) from point processes differ between the two models. We also consider the problem of estimating the second LE and the possibility to diagnose hyperchaotic behavior by processing spike trains. Since the second exponent is quite sensitive to the structure of the ISI series, we investigate the problem of its computation.
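    A minimal sketch of the integrate-and-fire construction discussed in these papers (our toy example, with a logistic-map drive standing in for the authors' chaotic systems and parameters):

```python
import numpy as np

def chaotic_signal(n: int, x0: float = 0.3) -> np.ndarray:
    """Chaotic drive from the fully chaotic logistic map, shifted positive."""
    x = np.empty(n)
    x[0] = x0
    for i in range(1, n):
        x[i] = 4.0 * x[i - 1] * (1.0 - x[i - 1])
    return 0.5 + x  # keep the integrand strictly positive

def integrate_and_fire(signal: np.ndarray, threshold: float = 5.0) -> np.ndarray:
    """Accumulate the signal; emit a spike and reset at each threshold
    crossing. Returns the interspike intervals (in time steps)."""
    spikes, acc = [], 0.0
    for i, s in enumerate(signal):
        acc += s
        if acc >= threshold:
            spikes.append(i)
            acc = 0.0
    return np.diff(spikes)

isis = integrate_and_fire(chaotic_signal(100_000))
print(len(isis), "interspike intervals; first ten:", isis[:10])
# Dynamical quantities such as Lyapunov exponents are then estimated
# from the ISI series rather than from the underlying signal.
```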

  4. Mathematical algorithms for approximate reasoning

    NASA Technical Reports Server (NTRS)

    Murphy, John H.; Chay, Seung C.; Downs, Mary M.

    1988-01-01

    Most state-of-the-art expert system environments contain a single and often ad hoc strategy for approximate reasoning. Some environments provide facilities to program the approximate reasoning algorithms. However, the next generation of expert systems should have an environment which contains a choice of several mathematical algorithms for approximate reasoning. To meet the need for validatable and verifiable coding, the expert system environment must no longer depend upon ad hoc reasoning techniques but instead must include mathematically rigorous techniques for approximate reasoning. Popular approximate reasoning techniques are reviewed, including: certainty factors, belief measures, Bayesian probabilities, fuzzy logic, and Shafer-Dempster techniques for reasoning. The focus is on a group of mathematically rigorous algorithms for approximate reasoning that could form the basis of a next-generation expert system environment. These algorithms are based upon the axioms of set theory and probability theory. To separate these algorithms for approximate reasoning, various conditions of mutual exclusivity and independence are imposed upon the assertions. Approximate reasoning algorithms presented include: reasoning with statistically independent assertions, reasoning with mutually exclusive assertions, reasoning with assertions that exhibit minimum overlap within the state space, reasoning with assertions that exhibit maximum overlap within the state space (i.e. fuzzy logic), pessimistic reasoning (i.e. worst case analysis), optimistic reasoning (i.e. best case analysis), and reasoning with assertions with absolutely no knowledge of the possible dependency among the assertions. A robust environment for expert system construction should include the two modes of inference: modus ponens and modus tollens. Modus ponens inference is based upon reasoning towards the conclusion in a statement of logical implication, whereas modus tollens inference is based upon reasoning away from the conclusion.
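    Two of the listed regimes are easy to make concrete (our illustration, not the paper's algorithms): combining the probabilities of two assertions under statistical independence versus mutual exclusivity, alongside the fuzzy-logic (maximum overlap) rule.

```python
def or_independent(p_a: float, p_b: float) -> float:
    """P(A or B) for statistically independent assertions."""
    return p_a + p_b - p_a * p_b

def or_exclusive(p_a: float, p_b: float) -> float:
    """P(A or B) for mutually exclusive assertions."""
    return min(p_a + p_b, 1.0)

def or_fuzzy(p_a: float, p_b: float) -> float:
    """Disjunction under maximum overlap of the assertions (fuzzy logic)."""
    return max(p_a, p_b)

p_a, p_b = 0.6, 0.3
print(or_independent(p_a, p_b))  # 0.72
print(or_exclusive(p_a, p_b))    # 0.9
print(or_fuzzy(p_a, p_b))        # 0.6
```

The spread between these answers for the same inputs is exactly why the environment must let the knowledge engineer state which dependency condition the assertions satisfy.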

  5. Confidence Intervals for True Scores Using the Skew-Normal Distribution

    ERIC Educational Resources Information Center

    Garcia-Perez, Miguel A.

    2010-01-01

    A recent comparative analysis of alternative interval estimation approaches and procedures has shown that confidence intervals (CIs) for true raw scores determined with the Score method--which uses the normal approximation to the binomial distribution--have actual coverage probabilities that are closest to their nominal level. It has also recently…

  6. Interval Estimation for True Raw and Scale Scores under the Binomial Error Model

    ERIC Educational Resources Information Center

    Lee, Won-Chan; Brennan, Robert L.; Kolen, Michael J.

    2006-01-01

    Assuming errors of measurement are distributed binomially, this article reviews various procedures for constructing an interval for an individual's true number-correct score; presents two general interval estimation procedures for an individual's true scale score (i.e., normal approximation and endpoints conversion methods); compares various…
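    A hedged sketch of the normal-approximation step (our simplified version using the Wilson score form; the article's procedures add refinements and the scale-score conversion, which are not shown):

```python
import math

def score_interval(x: int, n: int, z: float = 1.96):
    """Wilson score interval for a binomial proportion -- a normal
    approximation to the binomial error model for an observed
    number-correct score x out of n items."""
    p = x / n
    denom = 1.0 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return max(0.0, center - half), min(1.0, center + half)

lo, hi = score_interval(x=32, n=40)
print(f"true proportion-correct in [{lo:.3f}, {hi:.3f}]")  # approx. 95% CI
# Multiplying the endpoints by n gives an interval for the true
# number-correct score itself.
```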

  7. Postexercise Hypotension After Continuous, Aerobic Interval, and Sprint Interval Exercise.

    PubMed

    Angadi, Siddhartha S; Bhammar, Dharini M; Gaesser, Glenn A

    2015-10-01

    We examined the effects of 3 exercise bouts, differing markedly in intensity, on postexercise hypotension (PEH). Eleven young adults (age: 24.6 ± 3.7 years) completed 4 randomly assigned experimental conditions: (a) control, (b) 30-minute steady-state exercise (SSE) at 75-80% maximum heart rate (HRmax), (c) aerobic interval exercise (AIE): four 4-minute bouts at 90-95% HRmax, separated by 3 minutes of active recovery, and (d) sprint interval exercise (SIE): six 30-second Wingate sprints, separated by 4 minutes of active recovery. Exercise was performed on a cycle ergometer. Blood pressure (BP) was measured before exercise and every 15 minutes postexercise for 3 hours. Linear mixed models were used to compare BP between trials. During the 3 hours postexercise, systolic BP (SBP) was lower (p < 0.001) after AIE (118 ± 10 mm Hg), SSE (121 ± 10 mm Hg), and SIE (121 ± 11 mm Hg) compared with control (124 ± 8 mm Hg). Diastolic BP (DBP) was also lower (p < 0.001) after AIE (66 ± 7 mm Hg), SSE (69 ± 6 mm Hg), and SIE (68 ± 8 mm Hg) compared with control (71 ± 7 mm Hg). Only AIE resulted in sustained (>2 hours) PEH, with SBP (120 ± 9 mm Hg) and DBP (68 ± 7 mm Hg) during the third hour postexercise being lower (p ≤ 0.05) than control (124 ± 8 and 70 ± 7 mm Hg). Although all exercise bouts produced similar reductions in BP at 1-hour postexercise, the duration of PEH was greatest after AIE.

  8. Spectrally Invariant Approximation within Atmospheric Radiative Transfer

    NASA Technical Reports Server (NTRS)

    Marshak, A.; Knyazikhin, Y.; Chiu, J. C.; Wiscombe, W. J.

    2011-01-01

    Certain algebraic combinations of single scattering albedo and solar radiation reflected from, or transmitted through, vegetation canopies do not vary with wavelength. These spectrally invariant relationships are the consequence of wavelength independence of the extinction coefficient and scattering phase function in vegetation. In general, this wavelength independence does not hold in the atmosphere, but in cloud-dominated atmospheres the total extinction and total scattering phase function vary only weakly with wavelength. This paper identifies the atmospheric conditions under which the spectrally invariant approximation can accurately describe the extinction and scattering properties of cloudy atmospheres. The validity of the assumptions and the accuracy of the approximation are tested with 1D radiative transfer calculations using publicly available radiative transfer models: Discrete Ordinate Radiative Transfer (DISORT) and Santa Barbara DISORT Atmospheric Radiative Transfer (SBDART). It is shown for cloudy atmospheres with cloud optical depth above 3, and for spectral intervals that exclude strong water vapor absorption, that the spectrally invariant relationships found in vegetation canopy radiative transfer are valid to better than 5%. The physics behind this phenomenon, its mathematical basis, and possible applications to remote sensing and climate are discussed.

  9. Approximation of Failure Probability Using Conditional Sampling

    NASA Technical Reports Server (NTRS)

    Giesy, Daniel P.; Crespo, Luis G.; Kenney, Sean P.

    2008-01-01

    In analyzing systems which depend on uncertain parameters, one technique is to partition the uncertain parameter domain into a failure set and its complement, and judge the quality of the system by estimating the probability of failure. If this is done by a sampling technique such as Monte Carlo and the probability of failure is small, accurate approximation can require so many sample points that the computational expense is prohibitive. Previous work of the authors has shown how to bound the failure event by sets of such simple geometry that their probabilities can be calculated analytically. In this paper, it is shown how to make use of these failure bounding sets and conditional sampling within them to substantially reduce the computational burden of approximating failure probability. It is also shown how the use of these sampling techniques improves the confidence intervals for the failure probability estimate for a given number of sample points and how they reduce the number of sample point analyses needed to achieve a given level of confidence.
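    To see the computational burden the paper addresses, consider a plain Monte Carlo estimate with a normal-approximation confidence interval (our toy failure set on the unit square, not the authors' bounding-set construction):

```python
import math
import random

def mc_failure_probability(n_samples: int, seed: int = 1):
    """Estimate P(failure) for a toy system that fails when two uncertain
    parameters land in a small corner of the unit square (true p = 0.001)."""
    rng = random.Random(seed)
    fails = sum(
        1 for _ in range(n_samples)
        if rng.random() > 0.99 and rng.random() > 0.9
    )
    p_hat = fails / n_samples
    half = 1.96 * math.sqrt(p_hat * (1.0 - p_hat) / n_samples)
    return p_hat, half

for n in (10_000, 1_000_000):
    p_hat, half = mc_failure_probability(n)
    print(f"n={n}: p_hat={p_hat:.5f} +/- {half:.5f}")
# Note how many samples are needed before the half-width is small relative
# to p_hat -- the motivation for conditional sampling within bounding sets.
```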

  10. Microscopic justification of the equal filling approximation

    SciTech Connect

    Perez-Martin, Sara; Robledo, L. M.

    2008-07-15

    The equal filling approximation, a procedure widely used in mean-field calculations to treat the dynamics of odd nuclei in a time-reversal invariant way, is justified as the consequence of a variational principle over an average energy functional. The ideas of statistical quantum mechanics are employed in the justification. As an illustration of the method, the ground and lowest-lying states of some octupole deformed radium isotopes are computed.

  11. Photoelectron spectroscopy and the dipole approximation

    SciTech Connect

    Hemmers, O.; Hansen, D.L.; Wang, H.

    1997-04-01

    Photoelectron spectroscopy is a powerful technique because it directly probes, via the measurement of photoelectron kinetic energies, orbital and band structure in valence and core levels in a wide variety of samples. The technique becomes even more powerful when it is performed in an angle-resolved mode, where photoelectrons are distinguished not only by their kinetic energy, but by their direction of emission as well. Determining the probability of electron ejection as a function of angle probes the different quantum-mechanical channels available to a photoemission process, because it is sensitive to phase differences among the channels. As a result, angle-resolved photoemission has been used successfully for many years to provide stringent tests of the understanding of basic physical processes underlying gas-phase and solid-state interactions with radiation. One mainstay in the application of angle-resolved photoelectron spectroscopy is the well-known electric-dipole approximation for photon interactions. In this simplification, all higher-order terms, such as those due to electric-quadrupole and magnetic-dipole interactions, are neglected. As the photon energy increases, however, effects beyond the dipole approximation become important. To best determine the range of validity of the dipole approximation, photoemission measurements on a simple atomic system, neon, where extra-atomic effects cannot play a role, were performed at BL 8.0. The measurements show that deviations from 'dipole' expectations in angle-resolved valence photoemission are observable for photon energies down to at least 0.25 keV, and are quite significant at energies around 1 keV. From these results, it is clear that non-dipole angular-distribution effects may need to be considered in any application of angle-resolved photoelectron spectroscopy that uses x-ray photons of energies as low as a few hundred eV.

  12. Virial expansion coefficients in the harmonic approximation.

    PubMed

    Armstrong, J R; Zinner, N T; Fedorov, D V; Jensen, A S

    2012-08-01

    The virial expansion method is applied within a harmonic approximation to an interacting N-body system of identical fermions. We compute the canonical partition functions for two and three particles to get the two lowest orders in the expansion. The energy spectrum is carefully interpolated to reproduce ground-state properties at low temperature and the noninteracting high-temperature limit of constant virial coefficients. This resembles the smearing of shell effects in finite systems with increasing temperature. Numerical results are discussed for the second and third virial coefficients as functions of dimension, temperature, interaction, and transition temperature between low- and high-energy limits. PMID:23005730

  13. Orders on Intervals Over Partially Ordered Sets: Extending Allen's Algebra and Interval Graph Results

    SciTech Connect

    Zapata, Francisco; Kreinovich, Vladik; Joslyn, Cliff A.; Hogan, Emilie A.

    2013-08-01

    To make a decision, we need to compare the values of quantities. In many practical situations, we know the values with interval uncertainty. In such situations, we need to compare intervals. Allen’s algebra describes all possible relations between intervals on the real line, and ordering relations between such intervals are well studied. In this paper, we extend this description to intervals in an arbitrary partially ordered set (poset). In particular, we explicitly describe ordering relations between intervals that generalize relation between points. As auxiliary results, we provide a logical interpretation of the relation between intervals, and extend the results about interval graphs to intervals over posets.

  14. Pigeons' Choices between Fixed-Interval and Random-Interval Schedules: Utility of Variability?

    ERIC Educational Resources Information Center

    Andrzejewski, Matthew E.; Cardinal, Claudia D.; Field, Douglas P.; Flannery, Barbara A.; Johnson, Michael; Bailey, Kathleen; Hineline, Philip N.

    2005-01-01

    Pigeons' choosing between fixed-interval and random-interval schedules of reinforcement was investigated in three experiments using a discrete-trial procedure. In all three experiments, the random-interval schedule was generated by sampling a probability distribution at an interval (and in multiples of the interval) equal to that of the…

  15. Wavelet Sparse Approximate Inverse Preconditioners

    NASA Technical Reports Server (NTRS)

    Chan, Tony F.; Tang, W.-P.; Wan, W. L.

    1996-01-01

    There is an increasing interest in using sparse approximate inverses as preconditioners for Krylov subspace iterative methods. Recent studies of Grote and Huckle and of Chow and Saad also show that sparse approximate inverse preconditioners can be effective for a variety of matrices, e.g. the Harwell-Boeing collections. Nonetheless a drawback is that they require rapid decay of the inverse entries so that a sparse approximate inverse is possible. However, for the class of matrices that come from elliptic PDE problems, this assumption may not necessarily hold. Our main idea is to look for a basis, other than the standard one, such that a sparse representation of the inverse is feasible. A crucial observation is that the kind of matrices we are interested in typically have a piecewise smooth inverse. We exploit this fact by applying wavelet techniques to construct a better sparse approximate inverse in the wavelet basis. We shall justify theoretically and numerically that our approach is effective for matrices with smooth inverse. We emphasize that in this paper we have only presented the idea of wavelet approximate inverses and demonstrated its potential but have not yet developed a highly refined and efficient algorithm.

  16. Adaptive approximation models in optimization

    SciTech Connect

    Voronin, A.N.

    1995-05-01

    The paper proposes a method for optimization of functions of several variables that substantially reduces the number of objective function evaluations compared to traditional methods. The method is based on the property of iterative refinement of approximation models of the optimand function in approximation domains that contract to the extremum point. It does not require subjective specification of the starting point, step length, or other parameters of the search procedure. The method is designed for efficient optimization of unimodal functions of several (not more than 10-15) variables and can be applied to find the global extremum of polymodal functions and also for optimization of scalarized forms of vector objective functions.

  17. Scaling of light and dark time intervals.

    PubMed

    Marinova, J

    1978-01-01

    Scaling of light and dark time intervals of 0.1 to 1.1 s is performed by the method of magnitude estimation with respect to a given standard. The standards differ in duration and type (light and dark). The light intervals are subjectively estimated as longer than the dark ones. The relation between the mean interval estimations and their magnitude is linear for both light and dark intervals.

  18. Permutations and topological entropy for interval maps

    NASA Astrophysics Data System (ADS)

    Misiurewicz, Michal

    2003-05-01

    Recently Bandt, Keller and Pompe (2002 Entropy of interval maps via permutations Nonlinearity 15 1595-602) introduced a method of computing the entropy of piecewise monotone interval maps by counting permutations exhibited by initial pieces of orbits. We show that for topological entropy this method does not work for arbitrary continuous interval maps. We also show that for piecewise monotone interval maps topological entropy can be computed by counting permutations exhibited by periodic orbits.
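    A minimal sketch of the permutation-counting estimator under discussion (our toy implementation for the logistic map, whose topological entropy is log 2; the paper's point concerns when this limit equals the topological entropy):

```python
import numpy as np

def permutation_entropy_estimate(orbit: np.ndarray, n: int) -> float:
    """(1/n) * log of the number of distinct order patterns of length n
    exhibited by consecutive points of the orbit."""
    patterns = {tuple(np.argsort(orbit[i:i + n]))
                for i in range(len(orbit) - n + 1)}
    return float(np.log(len(patterns))) / n

# Long orbit of the fully chaotic logistic map (piecewise monotone).
x = np.empty(200_000)
x[0] = 0.123
for i in range(1, len(x)):
    x[i] = 4.0 * x[i - 1] * (1.0 - x[i - 1])

for n in (3, 5, 7, 9):
    print(n, permutation_entropy_estimate(x, n))
# The estimates approach log 2 ~ 0.693 slowly as the pattern length n grows.
```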

  19. Digital redesign of uncertain interval systems based on time-response resemblance via particle swarm optimization.

    PubMed

    Hsu, Chen-Chien; Lin, Geng-Yu

    2009-07-01

    In this paper, a particle swarm optimization (PSO) based approach is proposed to derive an optimal digital controller for redesigned digital systems having an interval plant based on time-response resemblance of the closed-loop systems. Because of difficulties in obtaining time-response envelopes for interval systems, the design problem is formulated as an optimization problem of a cost function in terms of aggregated deviation between the step responses corresponding to extremal energies of the redesigned digital system and those of their continuous counterpart. A proposed evolutionary framework incorporating three PSOs is subsequently presented to minimize the cost function to derive an optimal set of parameters for the digital controller, so that step response sequences corresponding to the extremal sequence energy of the redesigned digital system suitably approximate those of their continuous counterpart under the perturbation of the uncertain plant parameters. Computer simulations have shown that redesigned digital systems incorporating the PSO-derived digital controllers have better system performance than those using conventional open-loop discretization methods.

  20. Pythagorean Approximations and Continued Fractions

    ERIC Educational Resources Information Center

    Peralta, Javier

    2008-01-01

    In this article, we will show that the Pythagorean approximations of [the square root of] 2 coincide with those achieved in the 16th century by means of continued fractions. Assuming this fact and the known relation that connects the Fibonacci sequence with the golden section, we shall establish a procedure to obtain sequences of rational numbers…
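    For concreteness (our sketch of the standard recurrence; the article relates these fractions to the 16th-century computations): the continued fraction √2 = [1; 2, 2, 2, ...] generates its convergents via p/q → (p + 2q)/(p + q).

```python
from fractions import Fraction

def sqrt2_convergents(k: int):
    """First k continued-fraction convergents of sqrt(2),
    via the recurrence p/q -> (p + 2q)/(p + q) from 1/1."""
    p, q, out = 1, 1, []
    for _ in range(k):
        out.append(Fraction(p, q))
        p, q = p + 2 * q, p + q
    return out

for c in sqrt2_convergents(6):
    print(c, float(c))
# 1, 3/2, 7/5, 17/12, 41/29, 99/70 -- alternately under- and
# over-approximating sqrt(2) ~ 1.41421356.
```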

  1. Error Bounds for Interpolative Approximations.

    ERIC Educational Resources Information Center

    Gal-Ezer, J.; Zwas, G.

    1990-01-01

    Elementary error estimation in the approximation of functions by polynomials as a computational assignment, error-bounding functions and error bounds, and the choice of interpolation points are discussed. Precalculus and computer instruction are used on some of the calculations. (KR)

  2. Electrocardiographic Abnormalities and QTc Interval in Patients Undergoing Hemodialysis

    PubMed Central

    Nie, Yuxin; Zou, Jianzhou; Liang, Yixiu; Shen, Bo; Liu, Zhonghua; Cao, Xuesen; Chen, Xiaohong; Ding, Xiaoqiang

    2016-01-01

    Background Sudden cardiac death is one of the primary causes of mortality in chronic hemodialysis (HD) patients. A prolonged QTc interval is associated with an increased rate of sudden cardiac death. The aim of this article is to assess the abnormalities found in electrocardiograms (ECGs), and to explore factors that can influence the QTc interval. Methods A total of 141 conventional HD patients were enrolled in this study. ECG tests were conducted on each patient before a single dialysis session and 15 minutes before the end of the dialysis session (at peak stress). Echocardiography tests were conducted before the dialysis session began. Blood samples were drawn by phlebotomy immediately before and after the dialysis session. Results Before dialysis, 93.62% of the patients were in sinus rhythm, and approximately 65% of the patients showed a prolonged QTc interval (i.e., a QTc interval above 440 ms in males and above 460 ms in females). A comparison of ECG parameters before dialysis and at peak stress showed increases in heart rate (77.45 ± 11.92 vs. 80.38 ± 14.65 bpm, p = 0.001) and QTc interval (460.05 ± 24.53 ms vs. 470.93 ± 24.92 ms, p < 0.001). After dividing patients into two groups according to the QTc interval, lower pre-dialysis serum concentrations of potassium (K+), calcium (Ca2+), phosphorus, and calcium-phosphorus product (Ca*P), and higher concentrations of plasma brain natriuretic peptide (BNP) were found in the group with prolonged QTc intervals. Patients in this group also had a larger left atrial diameter (LAD) and a thicker interventricular septum, and they tended to be older than patients in the other group. Patients were then divided into two groups according to ΔQTc (ΔQTc = QTc at peak stress − QTc pre-HD). When analyzing the patients whose QTc intervals were longer at peak stress than before HD, we found that they had higher concentrations of Ca2+ and P5+ and lower concentrations of K+, ferritin, UA, and BNP. They were also more likely to be female. In addition, more cardiac

  3. Restricted Interval Guelph permeameter: Theory and application

    SciTech Connect

    Freifeld, Barry M.; Oldenburg, Curtis M.

    2003-06-19

    A constant head permeameter system has been developed for use in small diameter boreholes with any orientation. It is based upon the original Guelph permeameter concept of using a Mariotte siphon reservoir to control the applied head. The new tool, called a Restricted Interval Guelph (RIG) permeameter, uses either a single pneumatic packer or a straddle packer to restrict the area through which water is allowed to flow, so that the borehole wetted area is independent of the applied head. The RIG permeameter has been used at Yucca Mountain, Nevada, in the nonwelded rhyolitic Paintbrush Tuff. Analysis of the acquired data is based upon saturated-unsaturated flow theory that relies upon the quasi-linear approximation to estimate field-saturated hydraulic conductivity (Kfs) and the α parameter (sorptive number) of the exponential relative hydraulic conductivity-pressure head relationship. These results are compared with a numerical model based upon the solution of the Richards equation using a van Genuchten capillary pressure-saturation formulation. The numerical model incorporates laboratory capillary pressure versus saturation functions measured from cores taken from nearby boreholes. Comparison between the analytical and numerical approaches shows that the simple analytic model is valid for analyzing the data collected. Sensitivity analysis performed with the numerical model shows that the RIG permeameter is an effective tool for estimating permeability and sorptive number for the nonwelded Paintbrush Tuff.

  4. Confidence intervals for ATR performance metrics

    NASA Astrophysics Data System (ADS)

    Ross, Timothy D.

    2001-08-01

    This paper describes confidence interval (CI) estimators (CIEs) for the metrics used to assess sensor exploitation algorithm (or ATR) performance. For the discrete distributions, small sample sizes and extreme outcomes encountered within ATR testing, the commonly used CIEs have limited accuracy. This paper makes available CIEs that are accurate over all conditions of interest to the ATR community. The approach is to search for CIs using an integration of the Bayesian posterior (IBP) to measure alpha (the chance of the CI not containing the true value). The CIEs provided include proportion estimates based on Binomial distributions and rate estimates based on Poisson distributions. One- or two-sided CIs may be selected. For two-sided CIEs, either minimal length, balanced tail probabilities, or balanced width may be selected. The CIEs' accuracies are reported based on a Monte Carlo validated integration of the posterior probability distribution and compared to the Normal approximation and 'exact' (Clopper-Pearson) methods. While the IBP methods are accurate throughout, the conventional methods may realize alphas with substantial error (up to 50%). This translates to 10 to 15% error in the CI widths or to requiring 10 to 15% more samples for a given confidence level.
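    For comparison with the IBP approach, the 'exact' interval mentioned above has a compact closed form via beta quantiles (a standard identity, shown as a hedged sketch; scipy is assumed available):

```python
from scipy.stats import beta

def clopper_pearson(k: int, n: int, conf: float = 0.95):
    """Exact (Clopper-Pearson) two-sided CI for a binomial proportion,
    e.g. an ATR detection rate of k successes in n trials."""
    alpha = 1.0 - conf
    lo = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lo, hi

print(clopper_pearson(k=2, n=50))   # wide interval: small sample, rare event
print(clopper_pearson(k=45, n=50))
```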

  5. Dissimilar Physiological and Perceptual Responses Between Sprint Interval Training and High-Intensity Interval Training.

    PubMed

    Wood, Kimberly M; Olive, Brittany; LaValle, Kaylyn; Thompson, Heather; Greer, Kevin; Astorino, Todd A

    2016-01-01

    High-intensity interval training (HIIT) and sprint interval training (SIT) elicit similar cardiovascular and metabolic adaptations vs. endurance training. No study, however, has investigated acute physiological changes during HIIT vs. SIT. This study compared acute changes in heart rate (HR), blood lactate concentration (BLa), oxygen uptake (VO2), affect, and rating of perceived exertion (RPE) during HIIT and SIT. Active adults (4 women and 8 men, age = 24.2 ± 6.2 years) initially performed a VO2max test to determine workload for both sessions on the cycle ergometer, whose order was randomized. Sprint interval training consisted of 8 bouts of 30 seconds of all-out cycling at 130% of maximum Watts (Wmax). High-intensity interval training consisted of eight 60-second bouts at 85% Wmax. Heart rate, VO2, BLa, affect, and RPE were continuously assessed throughout exercise. Repeated-measures analysis of variance revealed a significant difference between HIIT and SIT for VO2 (p < 0.001), HR (p < 0.001), RPE (p = 0.03), and BLa (p = 0.049). Conversely, there was no significant difference between regimens for affect (p = 0.12). Energy expenditure was significantly higher (p = 0.02) in HIIT (209.3 ± 40.3 kcal) vs. SIT (193.5 ± 39.6 kcal). During HIIT, subjects burned significantly more calories and reported lower perceived exertion than SIT. The higher VO2 and lower BLa in HIIT vs. SIT reflected dissimilar metabolic perturbation between regimens, which may elicit unique long-term adaptations. If an individual is seeking to burn slightly more calories, maintain a higher oxygen uptake, and perceive less exertion during exercise, HIIT is the recommended routine.

  6. Dissimilar Physiological and Perceptual Responses Between Sprint Interval Training and High-Intensity Interval Training.

    PubMed

    Wood, Kimberly M; Olive, Brittany; LaValle, Kaylyn; Thompson, Heather; Greer, Kevin; Astorino, Todd A

    2016-01-01

    High-intensity interval training (HIIT) and sprint interval training (SIT) elicit similar cardiovascular and metabolic adaptations vs. endurance training. No study, however, has investigated acute physiological changes during HIIT vs. SIT. This study compared acute changes in heart rate (HR), blood lactate concentration (BLa), oxygen uptake (VO2), affect, and rating of perceived exertion (RPE) during HIIT and SIT. Active adults (4 women and 8 men, age = 24.2 ± 6.2 years) initially performed a VO2max test to determine workload for both sessions on the cycle ergometer, whose order was randomized. Sprint interval training consisted of 8 bouts of 30 seconds of all-out cycling at 130% of maximum Watts (Wmax). High-intensity interval training consisted of eight 60-second bouts at 85% Wmax. Heart rate, VO2, BLa, affect, and RPE were continuously assessed throughout exercise. Repeated-measures analysis of variance revealed a significant difference between HIIT and SIT for VO2 (p < 0.001), HR (p < 0.001), RPE (p = 0.03), and BLa (p = 0.049). Conversely, there was no significant difference between regimens for affect (p = 0.12). Energy expenditure was significantly higher (p = 0.02) in HIIT (209.3 ± 40.3 kcal) vs. SIT (193.5 ± 39.6 kcal). During HIIT, subjects burned significantly more calories and reported lower perceived exertion than SIT. The higher VO2 and lower BLa in HIIT vs. SIT reflected dissimilar metabolic perturbation between regimens, which may elicit unique long-term adaptations. If an individual is seeking to burn slightly more calories, maintain a higher oxygen uptake, and perceive less exertion during exercise, HIIT is the recommended routine. PMID:26691413

  7. Chemical Laws, Idealization and Approximation

    NASA Astrophysics Data System (ADS)

    Tobin, Emma

    2013-07-01

    This paper examines the notion of laws in chemistry. Vihalemm (Found Chem 5(1):7-22, 2003) argues that the laws of chemistry are fundamentally the same as the laws of physics: they are all ceteris paribus laws which are true "in ideal conditions". In contrast, Scerri (2000) contends that the laws of chemistry are fundamentally different to the laws of physics, because they involve approximations. Christie (Stud Hist Philos Sci 25:613-629, 1994) and Christie and Christie (Of minds and molecules. Oxford University Press, New York, pp. 34-50, 2000) agree that the laws of chemistry are operationally different to the laws of physics, but claim that the distinction between exact and approximate laws is too simplistic to taxonomise them. Approximations in chemistry involve diverse kinds of activity and often what counts as a scientific law in chemistry is dictated by the context of its use in scientific practice. This paper addresses the question of what makes chemical laws distinctive independently of the separate question as to how they are related to the laws of physics. From an analysis of some candidate ceteris paribus laws in chemistry, this paper argues that there are two distinct kinds of ceteris paribus laws in chemistry: idealized and approximate chemical laws. Thus, while Christie (Stud Hist Philos Sci 25:613-629, 1994) and Christie and Christie (Of minds and molecules. Oxford University Press, New York, pp. 34-50, 2000) are correct to point out that the candidate generalisations in chemistry are diverse and heterogeneous, a distinction between idealizations and approximations can nevertheless be used to successfully taxonomise them.

  8. Interval approach to braneworld gravity

    NASA Astrophysics Data System (ADS)

    Carena, Marcela; Lykken, Joseph; Park, Minjoon

    2005-10-01

    Gravity in five-dimensional braneworld backgrounds may exhibit extra scalar degrees of freedom with problematic features, including kinetic ghosts and strong coupling behavior. Analysis of such effects is hampered by the standard heuristic approaches to braneworld gravity, which use the equations of motion as the starting point, supplemented by orbifold projections and junction conditions. Here we develop the interval approach to braneworld gravity, which begins with an action principle. This shows how to implement general covariance, despite allowing metric fluctuations that do not vanish on the boundaries. We reproduce simple Z2 orbifolds of gravity, even though in this approach we never perform a Z2 projection. We introduce a family of “straight gauges”, which are bulk coordinate systems in which both branes appear as straight slices in a single coordinate patch. Straight gauges are extremely useful for analyzing metric fluctuations in braneworld models. By explicit gauge-fixing, we show that a general AdS5/AdS4 setup with two branes has at most a radion, but no physical “brane-bending” modes.

  9. Collective pairing Hamiltonian in the GCM approximation

    NASA Astrophysics Data System (ADS)

    Góźdź, A.; Pomorski, K.; Brack, M.; Werner, E.

    1985-08-01

    Using the generator coordinate method and the Gaussian overlap approximation we derived the collective Schrödinger-type equation starting from a microscopic single-particle plus pairing Hamiltonian for one kind of particle. The BCS wave function was used as the generator function. The pairing energy-gap parameter Δ and the gauge transformation angle were taken as the generator coordinates. Numerical results have been obtained for the full and the mean-field pairing Hamiltonians and compared with the cranking estimates. A significant role played by the zero-point energy correction in the collective pairing potential is found. The ground-state energy dependence on the pairing strength agrees very well with the exact solution of the Richardson model for a set of equidistant doubly-degenerate single-particle levels.

  10. Analysing organic transistors based on interface approximation

    SciTech Connect

    Akiyama, Yuto; Mori, Takehiko

    2014-01-15

    Temperature-dependent characteristics of organic transistors are analysed thoroughly using interface approximation. In contrast to amorphous silicon transistors, it is characteristic of organic transistors that the accumulation layer is concentrated on the first monolayer, and it is appropriate to consider interface charge rather than band bending. On the basis of this model, observed characteristics of hexamethylenetetrathiafulvalene (HMTTF) and dibenzotetrathiafulvalene (DBTTF) transistors with various surface treatments are analysed, and the trap distribution is extracted. In turn, starting from a simple exponential distribution, we can reproduce the temperature-dependent transistor characteristics as well as the gate voltage dependence of the activation energy, so we can investigate various aspects of organic transistors self-consistently under the interface approximation. Small deviation from such an ideal transistor operation is discussed assuming the presence of an energetically discrete trap level, which leads to a hump in the transfer characteristics. The contact resistance is estimated by measuring the transfer characteristics up to the linear region.

  11. Physiological responses to an acute bout of sprint interval cycling.

    PubMed

    Freese, Eric C; Gist, Nicholas H; Cureton, Kirk J

    2013-10-01

    Sprint interval training has been shown to improve skeletal muscle oxidative capacity, VO2max, and health outcomes. However, the acute physiological responses to 4-7 maximal effort intervals have not been determined. To determine the VO2, cardiorespiratory responses, and energy expenditure during an acute bout of sprint interval cycling (SIC), healthy college-aged subjects, 6 men and 6 women, completed 2 SIC sessions with at least 7 days between trials. Sprint interval cycling was performed on a cycle ergometer and involved a 5-minute warm-up followed by four 30-second all-out sprints with 4-minute active recovery. Peak oxygen uptake (ml·kg⁻¹·min⁻¹) during the 4 sprints was 35.3 ± 8.2, 38.8 ± 10.1, 38.8 ± 10.6, and 36.8 ± 9.3, and peak heart rate (beats·min⁻¹) was 164 ± 17, 172 ± 10, 177 ± 12, and 175 ± 22. We conclude that an acute bout of SIC elicits submaximal VO2 and cardiorespiratory responses during each interval that are above 80% of estimated maximal values. Although the duration of exercise in SIC is very short, the high level of VO2 and cardiorespiratory responses are sufficient to potentially elicit adaptations to training associated with elevated aerobic energy demand.

  12. Testing the frozen flow approximation

    NASA Technical Reports Server (NTRS)

    Lucchin, Francesco; Matarrese, Sabino; Melott, Adrian L.; Moscardini, Lauro

    1993-01-01

    We investigate the accuracy of the frozen-flow approximation (FFA), recently proposed by Matarrese et al. (1992), for following the nonlinear evolution of cosmological density fluctuations under gravitational instability. We compare a number of statistics between results of the FFA and n-body simulations, including those used by Melott, Pellman & Shandarin (1993) to test the Zel'dovich approximation. The FFA performs reasonably well in a statistical sense, e.g. in reproducing the counts-in-cell distribution, at small scales, but it does poorly in the cross-correlation with the n-body results, which means it is generally not moving mass to the right place, especially in models with high small-scale power.

  13. Approximate line shapes for hydrogen

    NASA Technical Reports Server (NTRS)

    Sutton, K.

    1978-01-01

    Two independent methods are presented for calculating radiative transport within hydrogen lines. In Method 1, a simple equation is proposed for calculating the line shape. In Method 2, the line shape is assumed to be a dispersion profile and an equation is presented for calculating the half-width. The results obtained for the line shapes and curves of growth by the two approximate methods are compared with similar results using the detailed line shapes by Vidal et al.

  14. Approximate reasoning using terminological models

    NASA Technical Reports Server (NTRS)

    Yen, John; Vaidya, Nitin

    1992-01-01

    Term Subsumption Systems (TSS) form a knowledge-representation scheme in AI that can express the defining characteristics of concepts through a formal language that has a well-defined semantics and incorporates a reasoning mechanism that can deduce whether one concept subsumes another. However, TSS's have very limited ability to deal with the issue of uncertainty in knowledge bases. The objective of this research is to address issues in combining approximate reasoning with term subsumption systems. To do this, we have extended an existing AI architecture (CLASP) that is built on the top of a term subsumption system (LOOM). First, the assertional component of LOOM has been extended for asserting and representing uncertain propositions. Second, we have extended the pattern matcher of CLASP for plausible rule-based inferences. Third, an approximate reasoning model has been added to facilitate various kinds of approximate reasoning. And finally, the issue of inconsistency in truth values due to inheritance is addressed using justification of those values. This architecture enhances the reasoning capabilities of expert systems by providing support for reasoning under uncertainty using knowledge captured in TSS. Also, as definitional knowledge is explicit and separate from heuristic knowledge for plausible inferences, the maintainability of expert systems could be improved.

  15. Computer Experiments for Function Approximations

    SciTech Connect

    Chang, A; Izmailov, I; Rizzo, S; Wynter, S; Alexandrov, O; Tong, C

    2007-10-15

    This research project falls in the domain of response surface methodology, which seeks cost-effective ways to accurately fit an approximate function to experimental data. Modeling and computer simulation are essential tools in modern science and engineering. A computer simulation can be viewed as a function that receives input from a given parameter space and produces an output. Running the simulation repeatedly amounts to an equivalent number of function evaluations, and for complex models, such function evaluations can be very time-consuming. It is then of paramount importance to intelligently choose a relatively small set of sample points in the parameter space at which to evaluate the given function, and then use this information to construct a surrogate function that is close to the original function and takes little time to evaluate. This study was divided into two parts. The first part consisted of comparing four sampling methods and two function approximation methods in terms of efficiency and accuracy for simple test functions. The sampling methods used were Monte Carlo, Quasi-Random LPτ, Maximin Latin Hypercubes, and Orthogonal-Array-Based Latin Hypercubes. The function approximation methods utilized were Multivariate Adaptive Regression Splines (MARS) and Support Vector Machines (SVM). The second part of the study concerned adaptive sampling methods with a focus on creating useful sets of sample points specifically for monotonic functions, functions with a single minimum and functions with a bounded first derivative.

  16. Ultrafast approximation for phylogenetic bootstrap.

    PubMed

    Minh, Bui Quang; Nguyen, Minh Anh Thi; von Haeseler, Arndt

    2013-05-01

    Nonparametric bootstrap has been a widely used tool in phylogenetic analysis to assess the clade support of phylogenetic trees. However, with the rapidly growing amount of data, this task remains a computational bottleneck. Recently, approximation methods such as the RAxML rapid bootstrap (RBS) and the Shimodaira-Hasegawa-like approximate likelihood ratio test have been introduced to speed up the bootstrap. Here, we suggest an ultrafast bootstrap approximation approach (UFBoot) to compute the support of phylogenetic groups in maximum likelihood (ML)-based trees. To achieve this, we combine the resampling estimated log-likelihood method with a simple but effective collection scheme of candidate trees. We also propose a stopping rule that assesses the convergence of branch support values to automatically determine when to stop collecting candidate trees. UFBoot achieves a median speedup of 3.1 (range: 0.66-33.3) to 10.2 (range: 1.32-41.4) compared with RAxML RBS for real DNA and amino acid alignments, respectively. Moreover, our extensive simulations show that UFBoot is robust against moderate model violations and the support values obtained appear to be relatively unbiased compared with the conservative standard bootstrap. This provides a more direct interpretation of the bootstrap support. We offer an efficient and easy-to-use software (available at http://www.cibiv.at/software/iqtree) to perform the UFBoot analysis with ML tree inference.
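
    The core of the resampling-estimated-log-likelihood (RELL) trick is cheap to sketch. In the toy Python code below, the per-site log-likelihood matrix and the clade membership vector are fabricated stand-ins (the real quantities come from an ML tree search): each bootstrap replicate reweights sites with a multinomial draw instead of re-optimizing trees, and clade support is the fraction of replicates in which the best-scoring candidate tree contains the clade.

      import numpy as np

      rng = np.random.default_rng(1)
      n_trees, n_sites, n_boot = 50, 1000, 1000

      # Stand-ins: per-site log-likelihoods of each candidate tree, and a
      # 0/1 flag saying whether each tree contains the clade of interest.
      site_loglik = rng.normal(-3.0, 0.5, size=(n_trees, n_sites))
      has_clade = rng.random(n_trees) < 0.5

      support = 0
      for _ in range(n_boot):
          # RELL: resample site weights rather than re-running the search.
          weights = rng.multinomial(n_sites, np.full(n_sites, 1.0 / n_sites))
          best_tree = np.argmax(site_loglik @ weights)
          support += has_clade[best_tree]

      print(f"UFBoot-style support for the clade: {100 * support / n_boot:.1f}%")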

  17. Approximate Counting of Graphical Realizations

    PubMed Central

    2015-01-01

    In 1999 Kannan, Tetali and Vempala proposed an MCMC method to uniformly sample all possible realizations of a given graphical degree sequence and conjectured its rapidly mixing nature. Recently their conjecture was proved for regular graphs (by Cooper, Dyer and Greenhill, 2007), for regular directed graphs (by Greenhill, 2011) and for half-regular bipartite graphs (by Miklós, Erdős and Soukup, 2013). Several heuristics for counting the number of possible realizations exist (via sampling processes), and while they work well in practice, so far no approximation guarantees exist for such an approach. This paper is the first to develop a method for counting realizations with a provable approximation guarantee. In fact, we solve a slightly more general problem: besides the graphical degree sequence, a small set of forbidden edges is also given. We show that for the general problem (which contains the Greenhill problem and the Miklós, Erdős and Soukup problem as special cases) the derived MCMC process is rapidly mixing. Further, we show that this new problem is self-reducible and therefore provides a fully polynomial randomized approximation scheme (a.k.a. FPRAS) for counting all realizations. PMID:26161994

  18. Counting independent sets using the Bethe approximation

    SciTech Connect

    Chertkov, Michael; Chandrasekaran, V; Gamarnik, D; Shah, D; Shin, J

    2009-01-01

    The authors consider the problem of counting the number of independent sets, or the partition function of a hard-core model, in a graph. The problem in general is computationally hard (#P-hard). They study the quality of the approximation provided by the Bethe free energy. Belief propagation (BP) is a message-passing algorithm that can be used to compute fixed points of the Bethe approximation; however, BP is not always guaranteed to converge. As the first result, they propose a simple message-passing algorithm that converges to a BP fixed point for any graph. They find that their algorithm converges within a multiplicative error 1 + ε of a fixed point in O(n²ε⁻⁴ log³(nε⁻¹)) iterations for any bounded-degree graph of n nodes. In a nutshell, the algorithm can be thought of as a modification of BP with 'time-varying' message passing. Next, they analyze the resulting error in the number of independent sets provided by such a fixed point of the Bethe approximation. Using the loop calculus approach recently developed by Chertkov and Chernyak, they establish that for any bounded-degree graph with large enough girth, the error is O(n^(−γ)) for some γ > 0. As an application, they find that for a random 3-regular graph, the Bethe approximation of the log-partition function (the log of the number of independent sets) is within o(1) of the correct log-partition function; this is quite surprising, as previous physics-based predictions expected an error of o(n). In sum, their results provide a systematic way to find Bethe fixed points for any graph quickly and allow for estimating the error in the Bethe approximation using novel combinatorial techniques.
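
    To make the objects in this abstract concrete, here is a small Python sketch that runs ordinary damped BP (not the authors' provably convergent variant) for the hard-core model with activity λ = 1 on a 5-cycle, evaluates the Bethe free energy at the fixed point, and compares exp(−F_Bethe) with the exact count of independent sets obtained by enumeration.

      import itertools
      import numpy as np

      edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]   # a 5-cycle
      n = 5
      nbrs = {i: [] for i in range(n)}
      for u, v in edges:
          nbrs[u].append(v)
          nbrs[v].append(u)

      psi_node = np.array([1.0, 1.0])                 # lambda**x with lambda = 1
      psi_edge = np.array([[1.0, 1.0], [1.0, 0.0]])   # adjacent occupied forbidden

      # Damped BP iterations; m[(i, j)] is the message from i to j over x_j.
      m = {(u, v): np.array([0.5, 0.5]) for u in nbrs for v in nbrs[u]}
      for _ in range(500):
          new = {}
          for (i, j) in m:
              prod = psi_node.copy()
              for k in nbrs[i]:
                  if k != j:
                      prod = prod * m[(k, i)]
              msg = psi_edge.T @ prod                 # sum over x_i
              new[(i, j)] = msg / msg.sum()
          m = {key: 0.5 * m[key] + 0.5 * new[key] for key in m}

      # Bethe free energy from node and edge beliefs at the fixed point.
      logZ = 0.0
      for i in nbrs:
          b = psi_node.copy()
          for k in nbrs[i]:
              b = b * m[(k, i)]
          b = b / b.sum()
          logZ -= (1 - len(nbrs[i])) * np.sum(b * np.log(b))
      for u, v in edges:
          pu, pv = psi_node.copy(), psi_node.copy()
          for k in nbrs[u]:
              if k != v:
                  pu = pu * m[(k, u)]
          for k in nbrs[v]:
              if k != u:
                  pv = pv * m[(k, v)]
          b = psi_edge * np.outer(pu, pv)
          b = b / b.sum()
          ok = b > 1e-12                              # skip 0*log(0) terms
          logZ -= np.sum(b[ok] * (np.log(b[ok]) - np.log(psi_edge[ok])))

      exact = sum(all(not (s[u] and s[v]) for u, v in edges)
                  for s in itertools.product([0, 1], repeat=n))
      print(f"Bethe estimate: {np.exp(logZ):.2f}, exact count: {exact}")

    For this small cycle the Bethe value comes out near the true count of 11, consistent with the girth-based error behavior quoted above.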

  19. Estimation of confidence intervals of global horizontal irradiance obtained from a weather prediction model

    NASA Astrophysics Data System (ADS)

    Ohtake, Hideaki; Gari da Silva Fonseca, Joao, Jr.; Takashima, Takumi; Oozeki, Takashi; Yamada, Yoshinori

    2014-05-01

    Many photovoltaic (PV) systems have been installed in Japan after the introduction of the Feed-in Tariff. For the energy management of electric power systems that include many PV systems, forecasts of PV power production are a useful technology. Recently, numerical weather predictions have been applied to forecast PV power production, but the forecasted values invariably carry forecast errors specific to each modeling system, so the forecast data must be used with its error taken into account. In this study, we attempted to estimate confidence intervals for hourly forecasts of global horizontal irradiance (GHI) values obtained from a mesoscale model (MSM) developed by the Japan Meteorological Agency. In a recent study, we found that the forecasted GHI values of the MSM have two systematic forecast errors: the first is that the GHI forecast values depend on the clearness indices, which are defined as the GHI values divided by the extraterrestrial solar irradiance; the second is that the forecast errors have seasonal variations, with overestimation of the GHI forecasts found in winter and underestimation found in summer. Information on the errors of the hourly GHI forecasts, that is, confidence intervals of the forecasts, is of great significance for an electric company planning the energy management of a system that includes many PV installations. Confidence intervals of the GHI forecasts are required both for a pinpoint area and for a relatively large area controlling the power system. For the relatively large area, a spatial-smoothing method is applied to the GHI values of both the observations and the forecasts. The spatial-smoothing method narrowed the confidence intervals of the hourly GHI forecasts for an extreme event of the GHI forecast (a case of large forecast error) over the relatively large area of the Tokyo electric company (approximately 68% relative to a pinpoint forecast). For more credible estimation of the confidence
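
    A simple way to operationalize clearness-index-dependent confidence intervals is to bin the forecast errors by clearness index and take empirical percentiles per bin. The Python sketch below does this with fabricated stand-in data; real inputs would be the MSM forecasts, pyranometer observations, and the extraterrestrial irradiance.

      import numpy as np

      rng = np.random.default_rng(2)
      n = 5000

      # Stand-ins for hourly observed and forecast GHI (W/m2) and for the
      # extraterrestrial irradiance used to form the clearness index.
      ghi_obs = rng.uniform(0.0, 900.0, n)
      ghi_fc = np.clip(ghi_obs + rng.normal(0.0, 60.0, n), 0.0, None)
      extraterrestrial = 1000.0

      clearness = ghi_fc / extraterrestrial
      error = ghi_fc - ghi_obs

      # Empirical 95% confidence interval of the forecast error per bin.
      bin_edges = np.linspace(0.0, 1.0, 11)
      for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
          in_bin = (clearness >= lo) & (clearness < hi)
          if in_bin.sum() < 30:          # skip sparsely populated bins
              continue
          lo_q, hi_q = np.percentile(error[in_bin], [2.5, 97.5])
          print(f"K in [{lo:.1f}, {hi:.1f}): error CI = [{lo_q:+.0f}, {hi_q:+.0f}] W/m2")

    Spatial smoothing, as in the paper, would average the observations and forecasts over many sites before binning, which narrows the resulting intervals.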

  20. Analytic approximate radiation effects due to Bremsstrahlung

    SciTech Connect

    Ben-Zvi I.

    2012-02-01

    The purpose of this note is to provide analytic approximate expressions that can give quick estimates of the various effects of the Bremsstrahlung radiation produced by relatively low-energy electrons, such as the dumping of the beam into the beam stop at the ERL or field emission in superconducting cavities. The purpose of this work is not to replace a dependable calculation or, better yet, a measurement under real conditions, but to provide a quick, approximate estimate for guidance purposes only. These effects include dose to personnel, ozone generation in the air volume exposed to the radiation, hydrogen generation in the beam dump water cooling system, and radiation damage to nearby magnets. These expressions can be used for other purposes, but one should note that the electron beam energy range is limited: in these calculations the valid range is from about 0.5 MeV to 10 MeV. To help in the application of this note, the calculations are presented as a worked-out example for the beam dump of the R&D Energy Recovery Linac.

  1. Discrete extrinsic curvatures and approximation of surfaces by polar polyhedra

    NASA Astrophysics Data System (ADS)

    Garanzha, V. A.

    2010-01-01

    The duality principle for the approximation of geometrical objects (also known as the Eudoxus exhaustion method) was extended and perfected by Archimedes in his famous treatise “Measurement of a Circle”. The main idea of Archimedes' approximation method is to construct a sequence of pairs of inscribed and circumscribed polygons (polyhedra) which approximate a curvilinear convex body. This sequence allows one to approximate the length of a curve, as well as the area and volume of bodies, and to obtain error estimates for the approximation. In this work it is shown that a sequence of pairs of locally polar polyhedra allows one to construct a piecewise-affine approximation to the spherical Gauss map, to construct convergent pointwise approximations to the mean and Gauss curvature, and to obtain natural discretizations of bending energies. The suggested approach can be applied to nonconvex surfaces and in higher dimensions.

  2. Intervals in evolutionary algorithms for global optimization

    SciTech Connect

    Patil, R.B.

    1995-05-01

    Optimization is of central concern to a number of disciplines. Interval arithmetic methods for global optimization provide us with (guaranteed) verified results. These methods are mainly restricted to classes of objective functions that are twice differentiable and use a simple strategy of eliminating and splitting larger regions of the search space in the global optimization process. An approach is proposed that combines the efficient strategy of interval global optimization methods with the robustness of evolutionary algorithms. In the proposed approach, the search begins with randomly created interval vectors with interval widths equal to the whole domain. Before the beginning of the evolutionary process, the fitness of these interval parameter vectors is defined by evaluating the objective function at the center of the initial interval vectors. In the subsequent evolutionary process, a local optimization process returns an estimate of the bounds of the objective function over the interval vectors. Though these bounds may not be correct at the beginning, due to large interval widths and complicated function properties, the process of reducing interval widths over time and a selection approach similar to simulated annealing help in estimating reasonably correct bounds as the population evolves. The interval parameter vectors at these estimated bounds (local optima) are then subjected to crossover and mutation operators. This evolutionary process continues for a predetermined number of generations in search of the global optimum.
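
    The following loose Python sketch conveys the flavor of evolving interval vectors whose widths shrink over generations; it uses midpoint evaluation for fitness and a random-half split as the mutation step, and deliberately omits the bound-estimation, crossover, and annealing-style selection of the proposed method.

      import numpy as np

      rng = np.random.default_rng(3)

      def f(x):                       # objective to minimize (2-D, multimodal)
          return np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0)

      domain = np.array([[-5.0, 5.0], [-5.0, 5.0]])    # one interval per variable
      pop = [domain.copy() for _ in range(20)]         # widths start at full domain

      for gen in range(80):
          # Fitness of an interval vector: objective at the box midpoint.
          fit = [f(box.mean(axis=1)) for box in pop]
          order = np.argsort(fit)
          survivors = [pop[i] for i in order[:10]]
          children = []
          for box in survivors:
              child = box.copy()
              i = rng.integers(child.shape[0])
              lo, hi = child[i]
              mid = 0.5 * (lo + hi)     # split the interval, keep a random half
              child[i] = (lo, mid) if rng.random() < 0.5 else (mid, hi)
              children.append(child)    # interval widths shrink over time
          pop = survivors + children

      best = min(pop, key=lambda box: f(box.mean(axis=1)))
      print("best midpoint:", best.mean(axis=1), "f =", round(f(best.mean(axis=1)), 4))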

  3. Relativistic Random Phase Approximation At Finite Temperature

    SciTech Connect

    Niu, Y. F.; Paar, N.; Vretenar, D.; Meng, J.

    2009-08-26

    The fully self-consistent finite-temperature relativistic random phase approximation (FTRRPA) has been established in the single-nucleon basis of the temperature-dependent Dirac-Hartree model (FTDH), based on an effective Lagrangian with density-dependent meson-nucleon couplings. Illustrative calculations in the FTRRPA framework show the evolution of the multipole responses of ¹³²Sn with temperature. With increased temperature, additional transitions appear in the low-energy region of both the monopole and dipole strength distributions, due to newly opened particle-particle and hole-hole transition channels.

  4. Pseudoscalar transition form factors from rational approximants

    NASA Astrophysics Data System (ADS)

    Masjuan, Pere

    2014-06-01

    The π0, η, and η' transition form factors in the space-like region are analyzed at low and intermediate energies in a model-independent way through the use of rational approximants. Slope and curvature parameters as well as their values at infinity are extracted from experimental data. These results are suited for constraining hadronic models such as the ones used for the hadronic light-by-light scattering part of the anomalous magnetic moment of the muon, and for the mixing parameters of the η - η' system.

  5. Approximately Independent Features of Languages

    NASA Astrophysics Data System (ADS)

    Holman, Eric W.

    To facilitate the testing of models for the evolution of languages, the present paper offers a set of linguistic features that are approximately independent of each other. To find these features, the adjusted Rand index (R′) is used to estimate the degree of pairwise relationship among 130 linguistic features in a large published database. Many of the R′ values prove to be near zero, as predicted for independent features, and a subset of 47 features is found with an average R′ of -0.0001. These 47 features are recommended for use in statistical tests that require independent units of analysis.

  6. The structural physical approximation conjecture

    NASA Astrophysics Data System (ADS)

    Shultz, Fred

    2016-01-01

    It was conjectured that the structural physical approximation (SPA) of an optimal entanglement witness is separable (or, equivalently, that the SPA of an optimal positive map is entanglement breaking). This conjecture was disproved, first for indecomposable maps and more recently for decomposable maps. The arguments in both cases are sketched, along with important related results. This review includes background material on topics including entanglement witnesses, optimality, duality of cones, decomposability, and the statement and motivation of the SPA conjecture, making it accessible to a broad audience.

  7. Quantum tunneling beyond semiclassical approximation

    NASA Astrophysics Data System (ADS)

    Banerjee, Rabin; Ranjan Majhi, Bibhas

    2008-06-01

    Hawking radiation as tunneling by Hamilton-Jacobi method beyond semiclassical approximation is analysed. We compute all quantum corrections in the single particle action revealing that these are proportional to the usual semiclassical contribution. We show that a simple choice of the proportionality constants reproduces the one loop back reaction effect in the spacetime, found by conformal field theory methods, which modifies the Hawking temperature of the black hole. Using the law of black hole mechanics we give the corrections to the Bekenstein-Hawking area law following from the modified Hawking temperature. Some examples are explicitly worked out.

  8. Daily meal anticipation: interaction of circadian and interval timing.

    PubMed

    Terman, M; Gibbon, J; Fairhurst, S; Waring, A

    1984-01-01

    Both short-interval and circadian timing systems support anticipatory response accelerations prior to food reinforcement. In the first case, the behavior pattern is determined by a scalar timing process with an arbitrary-reset property. In contrast, under daily cycles of food availability, behavior reflects a self-sustaining oscillation. With rats as subjects, the concurrent operation of timing of both kinds was studied by the addition of premeal auditory cues on the circadian baseline, in the absence of a day-night illumination cycle. Cues within both the minute and hour ranges served to lower the level of premeal anticipatory responding, although the exponential accelerations were similar to the uncued case. Cues within the minutes range yielded interval-timing functions that reflected approximate superposition. Cues within the hours range suppressed responding at their outset, in proportion to cue duration. When one of the shorter cues was suddenly lengthened, short-interval accelerations appeared at inappropriate circadian phases. When a premeal cue was extended through mealtime, anticipation rates increased markedly, suggesting that cue termination at the start of mealtime is a potent anchor for premeal anticipation regardless of cue duration. By use of meal-omission probes without external cues, peak rates were located after the onset of expected mealtime, often near its termination. The results suggest interactions between the scalar interval timer and the circadian anticipation timer, as modulated by the circadian free-run timer.

  9. A model of interval timing by neural integration.

    PubMed

    Simen, Patrick; Balci, Fuat; de Souza, Laura; Cohen, Jonathan D; Holmes, Philip

    2011-06-22

    We show that simple assumptions about neural processing lead to a model of interval timing as a temporal integration process, in which a noisy firing-rate representation of time rises linearly on average toward a response threshold over the course of an interval. Our assumptions include: that neural spike trains are approximately independent Poisson processes, that correlations among them can be largely cancelled by balancing excitation and inhibition, that neural populations can act as integrators, and that the objective of timed behavior is maximal accuracy and minimal variance. The model accounts for a variety of physiological and behavioral findings in rodents, monkeys, and humans, including ramping firing rates between the onset of reward-predicting cues and the receipt of delayed rewards, and universally scale-invariant response time distributions in interval timing tasks. It furthermore makes specific, well-supported predictions about the skewness of these distributions, a feature of timing data that is usually ignored. The model also incorporates a rapid (potentially one-shot) duration-learning procedure. Human behavioral data support the learning rule's predictions regarding learning speed in sequences of timed responses. These results suggest that simple, integration-based models should play as prominent a role in interval timing theory as they do in theories of perceptual decision making, and that a common neural mechanism may underlie both types of behavior.
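
    The scale-invariance claim is easy to reproduce in simulation. The sketch below (Python; all parameters invented) ramps a noisy accumulator to a fixed threshold, with the key model ingredient that the Poisson-like noise variance grows with the drift, so the coefficient of variation of the threshold-crossing times stays constant while the mean timed duration changes.

      import numpy as np

      rng = np.random.default_rng(4)
      dt, thresh, t_max = 0.002, 1.0, 10.0

      def first_crossing_times(drift, noise, n_trials=2000):
          n_steps = int(t_max / dt)
          x = np.zeros(n_trials)
          times = np.full(n_trials, np.nan)
          alive = np.ones(n_trials, dtype=bool)
          for step in range(n_steps):
              # Noisy firing-rate accumulation toward the response threshold.
              x[alive] += drift * dt + noise * np.sqrt(dt) * rng.standard_normal(alive.sum())
              hit = alive & (x >= thresh)
              times[hit] = step * dt
              alive &= ~hit
          return times[~np.isnan(times)]

      for drift in (0.5, 1.0, 2.0):      # the learned drift sets the duration
          noise = 0.3 * np.sqrt(drift)   # Poisson-like: variance scales with rate
          t = first_crossing_times(drift, noise)
          print(f"drift {drift:.1f}: mean {t.mean():5.2f} s, CV {t.std() / t.mean():.3f}")

    The printed CVs should hover near 0.3 for all three durations, which is the scalar-timing signature described above.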

  10. A model of interval timing by neural integration

    PubMed Central

    Simen, Patrick; Balci, Fuat; deSouza, Laura; Cohen, Jonathan D.; Holmes, Philip

    2011-01-01

    We show that simple assumptions about neural processing lead to a model of interval timing as a temporal integration process, in which a noisy firing-rate representation of time rises linearly on average toward a response threshold over the course of an interval. Our assumptions include: that neural spike trains are approximately independent Poisson processes; that correlations among them can be largely cancelled by balancing excitation and inhibition; that neural populations can act as integrators; and that the objective of timed behavior is maximal accuracy and minimal variance. The model accounts for a variety of physiological and behavioral findings in rodents, monkeys and humans, including ramping firing rates between the onset of reward-predicting cues and the receipt of delayed rewards, and universally scale-invariant response time distributions in interval timing tasks. It furthermore makes specific, well-supported predictions about the skewness of these distributions, a feature of timing data that is usually ignored. The model also incorporates a rapid (potentially one-shot) duration-learning procedure. Human behavioral data support the learning rule’s predictions regarding learning speed in sequences of timed responses. These results suggest that simple, integration-based models should play as prominent a role in interval timing theory as they do in theories of perceptual decision making, and that a common neural mechanism may underlie both types of behavior. PMID:21697374

  11. The interval order polytope of a digraph

    SciTech Connect

    Mueller, R.; Schulz, A.

    1994-12-31

    Interval orders and their cocomparability graphs, the interval graphs, are of significant importance as structures of solutions for several combinatorial optimization problems. This is due to the fact that each element is associated with an interval, which may be interpreted as a time interval, for example in a schedule, or as a substring in a string of items, for example a substring of a DNA string in molecular biology. In the talk we show that the interval order polytope of a digraph may serve as a basis for a polyhedral combinatorial approach to this class of problems. We present results on odd-cycle and clique based valid inequalities and discuss the complexity of their separation problem. We show that well-known valid inequalities of the linear ordering polytope, such as Möbius ladder inequalities and fence inequalities, obtain a natural interpretation in terms of these inequalities of the interval order polytope.

  12. Generalized Quasilinear Approximation: Application to Zonal Jets.

    PubMed

    Marston, J B; Chini, G P; Tobias, S M

    2016-05-27

    Quasilinear theory is often utilized to approximate the dynamics of fluids exhibiting significant interactions between mean flows and eddies. We present a generalization of quasilinear theory to include dynamic mode interactions on the large scales. This generalized quasilinear (GQL) approximation is achieved by separating the state variables into large and small zonal scales via a spectral filter rather than by a decomposition into a formal mean and fluctuations. Nonlinear interactions involving only small zonal scales are then removed. The approximation is conservative and allows for scattering of energy between small-scale modes via the large scale (through nonlocal spectral interactions). We evaluate GQL for the paradigmatic problems of the driving of large-scale jets on a spherical surface and on the beta plane and show that it is accurate even for a small number of large-scale modes. As GQL is formally linear in the small zonal scales, it allows for the closure of the system and can be utilized in direct statistical simulation schemes that have proved an attractive alternative to direct numerical simulation for many geophysical and astrophysical problems. PMID:27284660

  13. Plasma Physics Approximations in Ares

    SciTech Connect

    Managan, R. A.

    2015-01-08

    Lee & More derived analytic forms for the transport properties of a plasma. Many hydro-codes use their formulae for electrical and thermal conductivity. The coefficients are complex functions of the Fermi-Dirac integrals Fn(μ/θ), the chemical potential μ or ζ = ln(1 + e^(μ/θ)), and the temperature θ = kT. Since these formulae are expensive to compute, rational function approximations were fit to them. Approximations are also used to find the chemical potential, either μ or ζ. The fits use ζ as the independent variable instead of μ/θ. New fits are provided for Aα(ζ), Aβ(ζ), ζ, f(ζ) = (1 + e^(−μ/θ))F1/2(μ/θ), F1/2′/F1/2, Fcα, and Fcβ. In each case the relative error of the fit is minimized, since the functions can vary by many orders of magnitude. The new fits are designed to exactly preserve the limiting values in the non-degenerate and highly degenerate limits, i.e., as ζ → 0 or ∞. The original fits due to Lee & More and George Zimmerman are presented for comparison.

  14. Wavelet Approximation in Data Assimilation

    NASA Technical Reports Server (NTRS)

    Tangborn, Andrew; Atlas, Robert (Technical Monitor)

    2002-01-01

    Estimation of the state of the atmosphere with the Kalman filter remains a distant goal because of the high computational cost of evolving the error covariance for both linear and nonlinear systems. Wavelet approximation is presented here as a possible solution that efficiently compresses both global and local covariance information. We demonstrate the compression characteristics on the error correlation field from a global two-dimensional chemical constituent assimilation, and implement an adaptive wavelet approximation scheme on the assimilation of the one-dimensional Burgers' equation. In the former problem, we show that 99% of the error correlation can be represented by just 3% of the wavelet coefficients, with good representation of localized features. In the Burgers' equation assimilation, the discrete linearized equations (tangent linear model) and the analysis covariance are projected onto a wavelet basis and truncated to just 6% of the coefficients. A nearly optimal forecast is achieved, and we show that errors due to truncation of the dynamics are no greater than the errors due to covariance truncation.
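
    The keep-3%-of-coefficients result has a simple analogue that can be reproduced with a wavelet library. The Python sketch below (assuming the PyWavelets package; the smooth correlation field is a synthetic stand-in for the assimilation's error correlations) truncates a 2-D wavelet decomposition to its largest 3% of coefficients and reports the reconstruction error.

      import numpy as np
      import pywt

      # Synthetic stand-in for an error-correlation field.
      n = 128
      x = np.linspace(0.0, 1.0, n)
      corr = np.exp(-np.abs(x[:, None] - x[None, :]) / 0.1)

      # 2-D wavelet decomposition flattened into one coefficient array.
      coeffs = pywt.wavedec2(corr, "db4", level=4)
      arr, slices = pywt.coeffs_to_array(coeffs)

      keep = 0.03                              # retain the largest 3% in magnitude
      cutoff = np.quantile(np.abs(arr), 1.0 - keep)
      arr_small = np.where(np.abs(arr) >= cutoff, arr, 0.0)

      trunc = pywt.array_to_coeffs(arr_small, slices, output_format="wavedec2")
      recon = pywt.waverec2(trunc, "db4")[:n, :n]
      rel_err = np.linalg.norm(recon - corr) / np.linalg.norm(corr)
      print(f"relative error keeping {keep:.0%} of coefficients: {rel_err:.4f}")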

  15. Heat flow in the postquasistatic approximation

    SciTech Connect

    Rodriguez-Mueller, B.; Peralta, C.; Barreto, W.; Rosales, L.

    2010-08-15

    We apply the postquasistatic approximation to study the evolution of spherically symmetric fluid distributions undergoing dissipation in the form of radial heat flow. For a model that corresponds to an incompressible fluid departing from static equilibrium, it is not possible to go far from the initial state after the emission of a small amount of energy. Initially collapsing distributions of matter are not permitted. Emission of energy can be considered a mechanism to avoid collapse. If the distribution collapses initially and emits one hundredth of the initial mass, only the outermost layers evolve. For a model that corresponds to a highly compressed Fermi gas, only the outermost shell can evolve, with a shorter hydrodynamic time scale.

  16. Interval Management Display Design Study

    NASA Technical Reports Server (NTRS)

    Baxley, Brian T.; Beyer, Timothy M.; Cooke, Stuart D.; Grant, Karlus A.

    2014-01-01

    In 2012, the Federal Aviation Administration (FAA) estimated that U.S. commercial air carriers moved 736.7 million passengers over 822.3 billion revenue-passenger miles. The FAA also forecasts, in that same report, an average annual increase in passenger traffic of 2.2 percent per year for the next 20 years, which amounts to roughly one-and-a-half times today's number of aircraft operations and passengers by the year 2033. If airspace capacity and throughput remain unchanged, then flight delays will increase, particularly at those airports already operating near or at capacity. It is therefore critical to create new and improved technologies, communications, and procedures to be used by air traffic controllers and pilots. The National Aeronautics and Space Administration (NASA), the FAA, and the aviation industry are working together in several ways to improve the efficiency of the National Airspace System and reduce the cost of operating in it, one of which is the creation of the Next Generation Air Transportation System (NextGen). NextGen is intended to provide airspace users with more precise information about traffic, routing, and weather, as well as improve the control mechanisms within the air traffic system. NASA's Air Traffic Management Technology Demonstration-1 (ATD-1) Project is designed to contribute to the goals of NextGen, and accomplishes this by integrating three NASA technologies to enable fuel-efficient arrival operations into high-density airports. The three NASA technologies and procedures combined in the ATD-1 concept are advanced arrival scheduling, controller decision support tools, and aircraft avionics that together enable multiple time-deconflicted and fuel-efficient arrival streams in high-density terminal airspace.

  17. Stimulus properties of fixed-interval responses.

    PubMed

    Buchman, I B; Zeiler, M D

    1975-11-01

    Responses in the first component of a chained schedule produced a change to the terminal component according to a fixed-interval schedule. The number of responses emitted in the fixed interval determined whether a variable-interval schedule of food presentation or extinction prevailed in the terminal component. In one condition, the variable-interval schedule was in effect only if the number of responses during the fixed interval was less than that specified; in another condition, the number of responses had to exceed that specified. The number of responses emitted in the fixed interval did not shift markedly in the direction required for food presentation. Instead, responding often tended to change in the opposite direction. Such an effect indicated that differential food presentation did not modify the reference behavior in accord with the requirement, but it was consistent with other data on fixed-interval schedule performance. Behavior in the terminal component, however, did reveal sensitivity to the relation between total responses emitted in the fixed interval and the availability of food. Response rate in the terminal component was a function of the proximity of the response number emitted in the fixed interval to that required for food presentation. Thus, response number served as a discriminative stimulus controlling subsequent performance.

  18. A note on the path interval distance.

    PubMed

    Coons, Jane Ivy; Rusinko, Joseph

    2016-06-01

    The path interval distance accounts for global congruence between locally incongruent trees. We show that the path interval distance provides a lower bound for the nearest neighbor interchange distance. In contrast to the Robinson-Foulds distance, random pairs of trees are unlikely to be maximally distant from one another under the path interval distance. These features indicate that the path interval distance should play a role in phylogenomics where the comparison of trees on a fixed set of taxa is becoming increasingly important. PMID:27040521

  19. A note on the path interval distance.

    PubMed

    Coons, Jane Ivy; Rusinko, Joseph

    2016-06-01

    The path interval distance accounts for global congruence between locally incongruent trees. We show that the path interval distance provides a lower bound for the nearest neighbor interchange distance. In contrast to the Robinson-Foulds distance, random pairs of trees are unlikely to be maximally distant from one another under the path interval distance. These features indicate that the path interval distance should play a role in phylogenomics where the comparison of trees on a fixed set of taxa is becoming increasingly important.

  20. High-Intensity Interval Exercise and Postprandial Triacylglycerol.

    PubMed

    Burns, Stephen F; Miyashita, Masashi; Stensel, David J

    2015-07-01

    This review examined if high-intensity interval exercise (HIIE) reduces postprandial triacylglycerol (TAG) levels. Fifteen studies were identified, in which the effect of interval exercise conducted at an intensity of >65% of maximal oxygen uptake was evaluated on postprandial TAG levels. Analysis was divided between studies that included supramaximal exercise and those that included submaximal interval exercise. Ten studies examined the effect of a single session of low-volume HIIE including supramaximal sprints on postprandial TAG. Seven of these studies noted reductions in the postprandial total TAG area under the curve the morning after exercise of between ~10 and 21% compared with rest, but three investigations found no significant difference in TAG levels. Variations in the HIIE protocol used, inter-individual variation or insufficient time post-exercise for an increase in lipoprotein lipase activity are proposed reasons for the divergent results among studies. Five studies examined the effect of high-volume submaximal interval exercise on postprandial TAG. Four of these studies were characterised by high exercise energy expenditure and effectively attenuated total postprandial TAG levels by ~15-30%, but one study with a lower energy expenditure found no effect on TAG. The evidence suggests that supramaximal HIIE can induce large reductions in postprandial TAG levels but findings are inconsistent. Submaximal interval exercise offers no TAG metabolic or time advantage over continuous aerobic exercise but could be appealing in nature to some individuals. Future research should examine if submaximal interval exercise can reduce TAG levels in line with more realistic and achievable exercise durations of 30 min per day.

  1. Application of Interval Predictor Models to Space Radiation Shielding

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.; Norman, Ryan B.; Blattnig, Steve R.

    2016-01-01

    This paper develops techniques for predicting the uncertainty range of an output variable given input-output data. These models are called Interval Predictor Models (IPM) because they yield an interval-valued function of the input. This paper develops IPMs having a radial basis structure. This structure enables the formal description of (i) the uncertainty in the model's parameters, (ii) the predicted output interval, and (iii) the probability that a future observation will fall in such an interval. In contrast to other metamodeling techniques, this probabilistic certificate of correctness does not require making any assumptions about the structure of the mechanism from which the data are drawn. Optimization-based strategies for calculating IPMs having minimal spread while containing all the data are developed. Constraints for bounding the minimum interval spread over the continuum of inputs, regulating the IPM's variation/oscillation, and centering its spread about a target point are used to prevent data overfitting. Furthermore, we develop an approach for using expert opinion during extrapolation. This metamodeling technique is illustrated using a radiation shielding application for space exploration. In this application, we use IPMs to describe the error incurred in predicting the flux of particles resulting from the interaction between a high-energy incident beam and a target.
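
    A bare-bones version of the minimal-spread optimization can be posed as a linear program. The sketch below (Python with SciPy; the data, radial-basis centers, and kernel width are invented, and none of the paper's variation or centering constraints are included) fits upper and lower radial basis expansions that contain every observation while minimizing the average interval width.

      import numpy as np
      from scipy.optimize import linprog

      rng = np.random.default_rng(5)
      x = np.sort(rng.uniform(-3.0, 3.0, 60))
      y = np.sin(x) + 0.3 * rng.standard_normal(x.size)   # stand-in data

      # Radial basis features plus a constant term.
      centers = np.linspace(-3.0, 3.0, 8)
      Phi = np.exp(-(x[:, None] - centers[None, :]) ** 2)
      Phi = np.hstack([Phi, np.ones((x.size, 1))])
      p = Phi.shape[1]

      # Variables: upper coefficients a_u, then lower coefficients a_l.
      # Minimize the total spread sum_i Phi_i @ (a_u - a_l) subject to
      # Phi_i @ a_u >= y_i and Phi_i @ a_l <= y_i for every data point.
      c = np.concatenate([Phi.sum(axis=0), -Phi.sum(axis=0)])
      A_ub = np.block([[-Phi, np.zeros_like(Phi)],
                       [np.zeros_like(Phi), Phi]])
      b_ub = np.concatenate([-y, y])
      res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * (2 * p))

      a_u, a_l = res.x[:p], res.x[p:]
      width = Phi @ a_u - Phi @ a_l
      print(f"mean predicted interval width: {width.mean():.3f} (LP success: {res.success})")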

  2. Variational extensions of the mean spherical approximation

    NASA Astrophysics Data System (ADS)

    Blum, L.; Ubriaco, M.

    2000-04-01

    In a previous work we proposed a method to study complex systems with objects of arbitrary size. For certain specific forms of the atomic and molecular interactions, surprisingly simple and accurate theories (the variational mean spherical scaling approximation, VMSSA) [Velazquez, Blum, J. Chem. Phys. 110 (1999) 10931; Blum, Velazquez, J. Quantum Chem. (Theochem), in press] can be obtained. The basic idea is that if the interactions can be expressed as a rapidly converging sum of (complex) exponentials, then the Ornstein-Zernike equation (OZ) has an analytical solution. This analytical solution is used to construct a robust interpolation scheme, the variational mean spherical scaling approximation (VMSSA). The Helmholtz excess free energy ΔA = ΔE − TΔS is then written as a function of a scaling matrix Γ. Both the excess energy ΔE(Γ) and the excess entropy ΔS(Γ) are functionals of Γ. In previous work of this series the form of this functional was found for the two-exponential [Blum, Herrera, Mol. Phys. 96 (1999) 821] and three-exponential closures of the OZ equation [Blum, J. Stat. Phys., submitted for publication]. In this paper we extend this to M Yukawas, a complete basis set: we obtain a solution for the one-component case and give a closed-form expression for the MSA excess entropy, which is also the VMSSA entropy.

  3. Communication: Improved pair approximations in local coupled-cluster methods

    SciTech Connect

    Schwilk, Max; Werner, Hans-Joachim; Usvyat, Denis

    2015-03-28

    In local coupled cluster treatments the electron pairs can be classified according to the magnitude of their energy contributions or distances into strong, close, weak, and distant pairs. Different approximations are introduced for the latter three classes. In this communication, an improved simplified treatment of close and weak pairs is proposed, which is based on long-range cancellations of individually slowly decaying contributions in the amplitude equations. Benchmark calculations for correlation, reaction, and activation energies demonstrate that these approximations work extremely well, while pair approximations based on local second-order Møller-Plesset theory can lead to errors that are 1-2 orders of magnitude larger.

  4. Approximating metal-insulator transitions

    NASA Astrophysics Data System (ADS)

    Danieli, Carlo; Rayanov, Kristian; Pavlov, Boris; Martin, Gaven; Flach, Sergej

    2015-12-01

    We consider quantum wave propagation in one-dimensional quasiperiodic lattices. We propose an iterative construction of quasiperiodic potentials from sequences of potentials with increasing spatial period. At each finite iteration step, the eigenstates reflect the properties of the limiting quasiperiodic potential up to a controlled maximum system size. We then observe approximate metal-insulator transitions (MIT) at the finite iteration steps. We also report evidence of mobility edges, which are at variance with the celebrated Aubry-André model. The dynamics near the MIT shows a critical slowing down of the ballistic group velocity in the metallic phase, similar to the divergence of the localization length in the insulating phase.

  5. New generalized gradient approximation functionals

    NASA Astrophysics Data System (ADS)

    Boese, A. Daniel; Doltsinis, Nikos L.; Handy, Nicholas C.; Sprik, Michiel

    2000-01-01

    New generalized gradient approximation (GGA) functionals are reported, using the expansion form of A. D. Becke, J. Chem. Phys. 107, 8554 (1997), with 15 linear parameters. Our original such GGA functional, called HCTH, was determined through a least squares refinement to data of 93 systems. Here, the data are extended to 120 systems and 147 systems, introducing electron and proton affinities, and weakly bound dimers to give the new functionals HCTH/120 and HCTH/147. HCTH/120 has already been shown to give high quality predictions for weakly bound systems. The functionals are applied in a comparative study of the addition reaction of water to formaldehyde and sulfur trioxide, respectively. Furthermore, the performance of the HCTH/120 functional in Car-Parrinello molecular dynamics simulations of liquid water is encouraging.

  6. Interplay of approximate planning strategies.

    PubMed

    Huys, Quentin J M; Lally, Níall; Faulkner, Paul; Eshel, Neir; Seifritz, Erich; Gershman, Samuel J; Dayan, Peter; Roiser, Jonathan P

    2015-03-10

    Humans routinely formulate plans in domains so complex that even the most powerful computers are taxed. To do so, they seem to avail themselves of many strategies and heuristics that efficiently simplify, approximate, and hierarchically decompose hard tasks into simpler subtasks. Theoretical and cognitive research has revealed several such strategies; however, little is known about their establishment, interaction, and efficiency. Here, we use model-based behavioral analysis to provide a detailed examination of the performance of human subjects in a moderately deep planning task. We find that subjects exploit the structure of the domain to establish subgoals in a way that achieves a nearly maximal reduction in the cost of computing values of choices, but then combine partial searches with greedy local steps to solve subtasks, and maladaptively prune the decision trees of subtasks in a reflexive manner upon encountering salient losses. Subjects come idiosyncratically to favor particular sequences of actions to achieve subgoals, creating novel complex actions or "options." PMID:25675480

  7. Indexing the approximate number system.

    PubMed

    Inglis, Matthew; Gilmore, Camilla

    2014-01-01

    Much recent research attention has focused on understanding individual differences in the approximate number system (ANS), a cognitive system believed to underlie human mathematical competence. To date researchers have used four main indices of ANS acuity, and have typically assumed that they measure similar properties. Here we report a study which questions this assumption. We demonstrate that the numerical ratio effect has poor test-retest reliability and that it does not relate to either Weber fractions or accuracy on nonsymbolic comparison tasks. Furthermore, we show that Weber fractions follow a strongly skewed distribution and that they have lower test-retest reliability than a simple accuracy measure. We conclude by arguing that in the future researchers interested in indexing individual differences in ANS acuity should use accuracy figures, not Weber fractions or numerical ratio effects. PMID:24361686

  8. Approximate penetration factors for nuclear reactions of astrophysical interest

    NASA Technical Reports Server (NTRS)

    Humblet, J.; Fowler, W. A.; Zimmerman, B. A.

    1987-01-01

    The ranges of validity of approximations of P(l), the penetration factor which appears in the parameterization of nuclear-reaction cross sections at low energies and is employed in the extrapolation of laboratory data to even lower energies of astrophysical interest, are investigated analytically. Consideration is given to the WKB approximation, P(l) at the energy of the total barrier, approximations derived from the asymptotic expansion of G(l) for large η, approximations for small values of the parameter x, applications of P(l) to nuclear reactions, and the dependence of P(l) on channel radius. Numerical results are presented in tables and graphs, and parameter ranges where the danger of serious errors is high are identified.

  9. Interval and Contour Processing in Autism

    ERIC Educational Resources Information Center

    Heaton, Pamela

    2005-01-01

    High functioning children with autism and age and intelligence matched controls participated in experiments testing perception of pitch intervals and musical contours. The finding from the interval study showed superior detection of pitch direction over small pitch distances in the autism group. On the test of contour discrimination no group…

  10. SINGLE-INTERVAL GAS PERMEABILITY ESTIMATION

    EPA Science Inventory

    Single-interval, steady-state gas permeability testing requires estimation of the pressure at a screened interval, which in turn requires measurement of friction factors as a function of mass flow rate. Friction factors can be obtained by injecting air through a length of pipe...

  11. Low rank approximation in G₀W₀ calculations

    NASA Astrophysics Data System (ADS)

    Shao, MeiYue; Lin, Lin; Yang, Chao; Liu, Fang; Da Jornada, Felipe H.; Deslippe, Jack; Louie, Steven G.

    2016-08-01

    The single-particle energies obtained in a Kohn-Sham density functional theory (DFT) calculation are generally known to be poor approximations to the electron excitation energies that are measured in transport, tunneling and spectroscopic experiments such as photoemission spectroscopy. The correction to these energies can be obtained from the poles of a single-particle Green's function derived from many-body perturbation theory. From a computational perspective, the accuracy and efficiency of such an approach depend on how the self-energy term that properly accounts for the dynamic screening of electrons is approximated. The G₀W₀ approximation is a widely used technique in which the self energy is expressed as the convolution of a non-interacting Green's function (G₀) and a screened Coulomb interaction (W₀) in the frequency domain. The computational cost associated with such a convolution is high due to the high complexity of evaluating W₀ at multiple frequencies. In this paper, we discuss how the cost of a G₀W₀ calculation can be reduced by constructing a low rank approximation to the frequency-dependent part of W₀. In particular, we examine the effect of such a low rank approximation on the accuracy of the G₀W₀ approximation. We also discuss how the numerical convolution of G₀ and W₀ can be evaluated efficiently and accurately by using a contour deformation technique with an appropriate choice of the contour.

  12. Facies and reservoir characterization of an upper Smackover interval, East Barnett Field, Conecuh County, Alabama

    SciTech Connect

    Bergan, G.R.; Hearne, J.H.

    1990-09-01

    Excellent production from an upper Smackover (Jurassic) ooid grainstone was established in April 1988 by Coastal Oil and Gas Corporation with the discovery of the East Barnett field in Conecuh County, Alabama. A structure map on the top of the Smackover Formation and a net porosity isopach map of the producing intervals show that the trapping mechanism at the field has both structural and stratigraphic components. Two diamond cores were cut from 13,580 to 13,701 ft, beginning approximately 20 ft below the top of the Smackover. Two shallowing-upward sequences are identified in the cores. The first sequence starts at the base of the cored interval and is characterized by thick, subtidal algal boundstones capped by a collapse breccia facies. This entire sequence was deposited in the shallow subtidal to lower intertidal zone. Subsequent lowering of sea level exposed the top portion of the boundstones to meteoric or mixing-zone waters, creating the diagenetic collapse breccia facies. The anhydrite associated with the breccia also indicates surface exposure. The second sequence begins with algal boundstones that sharply overlie the collapse breccia facies of the previous sequence. These boundstones grade upward into high-energy, cross-bedded ooid beach and oncoidal, peloidal beach-shoreface deposits. The proximity of the overlying Buckner anhydrite, representing a probable sabkha system, favors a beach or very nearshore shoal interpretation for the ooid grainstones. The ooid grainstone facies, which is the primary producing interval, has measured porosity values ranging from 5.3% to 17.8% and averaging 11.0%. Measured permeability values range from 0.04 md to 701 md and average 161.63 md. These high porosity and permeability values result from abundant primary intergranular pore space, as well as secondary pore space created by dolomitization and dissolution of framework grains.

  13. Exercise-induced hypoalgesia - interval versus continuous mode.

    PubMed

    Kodesh, Einat; Weissman-Fogel, Irit

    2014-07-01

    Aerobic exercise at approximately 70% of maximal aerobic capacity moderately reduces pain sensitivity and attenuates pain, even after a single session. If the analgesic effects depend on exercise intensity, then high-intensity interval exercise at 85% of maximal aerobic capacity should further reduce pain. The aim of this study was to explore the exercise-induced analgesic effects of high-intensity interval aerobic exercise and to compare them with the analgesic effects of moderate continuous aerobic exercise. Twenty-nine young untrained healthy males were randomly assigned to aerobic-continuous (70% heart rate reserve (HRR)) and interval (4 × 4 min at 85% HRR and 2 min at 60% HRR between cycles) exercise modes, each lasting 30 min. Psychophysical pain tests, pressure and heat pain thresholds (HPT), and tonic heat pain (THP) were conducted before and after the exercise sessions. Repeated measures ANOVA was used for data analysis. HPT increased (p = 0.056) and THP decreased (p = 0.013) following exercise, independent of exercise type. However, the main time effect (pre-/postexercise) was a trend of increased HPT (45.6 ± 1.9 °C to 46.2 ± 1.8 °C; p = 0.082) and a significant reduction in THP (from 50.7 ± 25 to 45.9 ± 25.4 on a numeric pain scale; p = 0.043) following interval exercise. No significant change was found for the pressure pain threshold following either exercise type. In conclusion, interval exercise (85% HRR) has analgesic effects on experimental pain perception. This, in addition to its cardiovascular, muscular, and metabolic advantages, may promote its inclusion in pain management programs. PMID:24773287

  14. Interval colorectal carcinoma: An unsolved debate

    PubMed Central

    Benedict, Mark; Neto, Antonio Galvao; Zhang, Xuchen

    2015-01-01

    Colorectal carcinoma (CRC), as the third most common new cancer diagnosis, poses a significant health risk to the population. Interval CRCs are those that appear after a negative screening test or examination. The development of interval CRCs has been shown to be multifactorial: location of exam-academic institution versus community hospital, experience of the endoscopist, quality of the procedure, age of the patient, flat versus polypoid neoplasia, genetics, hereditary gastrointestinal neoplasia, and most significantly missed or incompletely excised lesions. The rate of interval CRCs has decreased in the last decade, which has been ascribed to an increased understanding of interval disease and technological advances in the screening of high risk individuals. In this article, we aim to review the literature with regard to the multifactorial nature of interval CRCs and provide the most recent developments regarding this important gastrointestinal entity. PMID:26668498

  15. Constructing Confidence Intervals for QTL Location

    PubMed Central

    Mangin, B.; Goffinet, B.; Rebai, A.

    1994-01-01

    We describe a method for constructing the confidence interval of the QTL location parameter. This method is developed in the local asymptotic framework, leading to a linear model at each position of the putative QTL. The idea is to construct a likelihood ratio test using statistics whose asymptotic distribution does not depend on the nuisance parameters, and in particular on the effect of the QTL. We show theoretical properties of the confidence interval built with this test, and compare it with the classical confidence interval using simulations. We show, in particular, that our confidence interval has the correct probability of containing the true map location of the QTL for almost all QTLs, whereas the classical confidence interval can be very biased for QTLs having small effect. PMID:7896108

  16. Interspike interval statistics of neurons driven by colored noise.

    PubMed

    Lindner, Benjamin

    2004-02-01

    A perfect integrate-and-fire model driven by colored noise is studied by means of the interspike interval (ISI) density and the serial correlation coefficient. Exact and approximate expressions for these functions are derived for weak dichotomous or Gaussian noise, respectively. It is shown that correlations in the input result in positive correlations in the ISI sequence and in a reduction of ISI variability. The results also indicate that for weak noise, the noise distribution only shapes the ISI density but not the ISI correlations which are determined by the noise's correlation function.
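
    The reported effect is easy to see numerically. The Python sketch below (parameters invented) drives a perfect integrate-and-fire neuron with Ornstein-Uhlenbeck noise whose correlation time is comparable to the mean interspike interval; the lag-1 serial correlation of the ISIs should come out positive.

      import numpy as np

      rng = np.random.default_rng(6)
      dt, mu, tau, sigma = 0.001, 1.0, 0.5, 0.3   # drift, noise time constant/std
      n_spikes = 2000

      v, eta = 0.0, 0.0
      t, t_last, isis = 0.0, 0.0, []
      while len(isis) < n_spikes:
          # Ornstein-Uhlenbeck (colored) input noise.
          eta += (-eta / tau) * dt + sigma * np.sqrt(2.0 * dt / tau) * rng.standard_normal()
          v += (mu + eta) * dt                    # perfect (leakless) integration
          t += dt
          if v >= 1.0:                            # threshold crossing: spike, reset
              isis.append(t - t_last)
              t_last, v = t, 0.0

      isis = np.array(isis)
      cv = isis.std() / isis.mean()
      rho1 = np.corrcoef(isis[:-1], isis[1:])[0, 1]
      print(f"CV = {cv:.3f}, lag-1 ISI correlation = {rho1:.3f}")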

  17. Multidimensional stochastic approximation Monte Carlo.

    PubMed

    Zablotskiy, Sergey V; Ivanov, Victor A; Paul, Wolfgang

    2016-06-01

    Stochastic Approximation Monte Carlo (SAMC) has been established as a mathematically founded, powerful flat-histogram Monte Carlo method, used to determine the density of states, g(E), of a model system. We show here how it can be generalized for the determination of multidimensional probability distributions (or equivalently densities of states) of macroscopic or mesoscopic variables defined on the space of microstates of a statistical mechanical system. This establishes this method as a systematic way of coarse graining a model system or, in other words, of performing a renormalization group step on a model. We discuss the formulation of the Kadanoff block spin transformation and the coarse-graining procedure for polymer models in this language. We also apply it to a standard case in the literature of two-dimensional densities of states, where two competing energetic effects are present, g(E₁,E₂). We show when and why care has to be exercised when obtaining the microcanonical density of states g(E₁+E₂) from g(E₁,E₂). PMID:27415383
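
    For orientation, the one-dimensional special case is compact enough to sketch. The Python code below (a minimal SAMC run with invented gain and sweep parameters) estimates the density of states g(E) of a small open Ising chain, where the exact answer 2·C(N−1, k) for k domain walls is available for comparison.

      import numpy as np
      from math import comb

      rng = np.random.default_rng(7)
      N, t0, n_steps = 12, 5000, 200000
      spins = rng.choice([-1, 1], N)

      def energy(s):                         # open chain, J = 1
          return -int(np.sum(s[:-1] * s[1:]))

      levels = np.arange(-(N - 1), N, 2)     # attainable energies
      idx = {E: i for i, E in enumerate(levels)}
      lng = np.zeros(levels.size)            # running estimate of ln g(E)

      E = energy(spins)
      for t in range(1, n_steps + 1):
          i = rng.integers(N)
          left = spins[i - 1] if i > 0 else 0
          right = spins[i + 1] if i < N - 1 else 0
          dE = 2 * spins[i] * (left + right)
          # Accept with probability min(1, g(E)/g(E_new)) to flatten visits.
          if rng.random() < np.exp(lng[idx[E]] - lng[idx[E + dE]]):
              spins[i] *= -1
              E += dE
          lng[idx[E]] += t0 / max(t0, t)     # SAMC gain, decaying like 1/t

      g = np.exp(lng - lng.max())
      g *= 2**N / g.sum()                    # normalize to the 2^N states
      exact = np.array([2 * comb(N - 1, k) for k in range(N)])
      print(np.round(g / exact, 2))          # ratios should approach 1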

  18. Interplay of approximate planning strategies

    PubMed Central

    Huys, Quentin J. M.; Lally, Níall; Faulkner, Paul; Eshel, Neir; Seifritz, Erich; Gershman, Samuel J.; Dayan, Peter; Roiser, Jonathan P.

    2015-01-01

    Humans routinely formulate plans in domains so complex that even the most powerful computers are taxed. To do so, they seem to avail themselves of many strategies and heuristics that efficiently simplify, approximate, and hierarchically decompose hard tasks into simpler subtasks. Theoretical and cognitive research has revealed several such strategies; however, little is known about their establishment, interaction, and efficiency. Here, we use model-based behavioral analysis to provide a detailed examination of the performance of human subjects in a moderately deep planning task. We find that subjects exploit the structure of the domain to establish subgoals in a way that achieves a nearly maximal reduction in the cost of computing values of choices, but then combine partial searches with greedy local steps to solve subtasks, and maladaptively prune the decision trees of subtasks in a reflexive manner upon encountering salient losses. Subjects come idiosyncratically to favor particular sequences of actions to achieve subgoals, creating novel complex actions or “options.” PMID:25675480

  19. Multidimensional stochastic approximation Monte Carlo

    NASA Astrophysics Data System (ADS)

    Zablotskiy, Sergey V.; Ivanov, Victor A.; Paul, Wolfgang

    2016-06-01

    Stochastic Approximation Monte Carlo (SAMC) has been established as a mathematically founded, powerful flat-histogram Monte Carlo method, used to determine the density of states, g(E), of a model system. We show here how it can be generalized for the determination of multidimensional probability distributions (or equivalently densities of states) of macroscopic or mesoscopic variables defined on the space of microstates of a statistical mechanical system. This establishes this method as a systematic way of coarse graining a model system or, in other words, of performing a renormalization group step on a model. We discuss the formulation of the Kadanoff block spin transformation and the coarse-graining procedure for polymer models in this language. We also apply it to a standard case in the literature of two-dimensional densities of states, where two competing energetic effects are present, g(E₁,E₂). We show when and why care has to be exercised when obtaining the microcanonical density of states g(E₁+E₂) from g(E₁,E₂).

  20. Randomized approximate nearest neighbors algorithm.

    PubMed

    Jones, Peter Wilcox; Osipov, Andrei; Rokhlin, Vladimir

    2011-09-20

    We present a randomized algorithm for the approximate nearest neighbor problem in d-dimensional Euclidean space. Given N points {x_j} in R^d, the algorithm attempts to find k nearest neighbors for each x_j, where k is a user-specified integer parameter. The algorithm is iterative, and its running time requirements are proportional to T·N·(d·log d + k·(d + log k)·log N) + N·k²·(d + log k), with T the number of iterations performed. The memory requirements of the procedure are of the order N·(d + k). A by-product of the scheme is a data structure, permitting a rapid search for the k nearest neighbors among {x_j} for an arbitrary point x ∈ R^d. The cost of each such query is proportional to T·(d·log d + log(N/k)·k·(d + log k)), and the memory requirements for the requisite data structure are of the order N·(d + k) + T·(d + N). The algorithm utilizes random rotations and a basic divide-and-conquer scheme, followed by a local graph search. We analyze the scheme's behavior for certain types of distributions of {x_j} and illustrate its performance via several numerical examples.
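
    A stripped-down version of the random-rotation idea is sketched below in Python; the window size, iteration count, and data are arbitrary choices, and the real algorithm's divide-and-conquer recursion and graph search are replaced here by a simple sorted-window scan. It illustrates how repeated random rotations accumulate good neighbor candidates.

      import numpy as np

      rng = np.random.default_rng(8)
      N, d, k, T, window = 2000, 8, 5, 8, 25
      pts = rng.standard_normal((N, d))

      best_d = np.full((N, k), np.inf)       # current best distances per point
      best_i = np.zeros((N, k), dtype=int)   # and the corresponding indices

      for _ in range(T):
          Q, _ = np.linalg.qr(rng.standard_normal((d, d)))   # random rotation
          order = np.argsort((pts @ Q)[:, 0])                # sort one coordinate
          rank_of = np.empty(N, dtype=int)
          rank_of[order] = np.arange(N)
          for i in range(N):
              r = rank_of[i]
              cand = order[max(0, r - window):r + window + 1]
              cand = cand[cand != i]
              dist = np.linalg.norm(pts[cand] - pts[i], axis=1)
              # Merge with previous candidates, deduplicate, keep the best k.
              md = np.concatenate([best_d[i], dist])
              mi = np.concatenate([best_i[i], cand])
              srt = np.argsort(md)
              md, mi = md[srt], mi[srt]
              _, first = np.unique(mi, return_index=True)
              first = np.sort(first)[:k]
              best_d[i], best_i[i] = md[first], mi[first]

      # Spot-check one point against brute force.
      q = 0
      true_nn = np.argsort(np.linalg.norm(pts - pts[q], axis=1))[1:k + 1]
      print("approx:", np.sort(best_i[q]), "exact:", np.sort(true_nn))

    The overlap between the approximate and exact lists typically grows with T, mirroring the role of the iteration count in the running-time bound quoted above.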

  1. Microsatellite Instability Status of Interval Colorectal Cancers in a Korean Population

    PubMed Central

    Lee, Kil Woo; Park, Soo-Kyung; Yang, Hyo-Joon; Jung, Yoon Suk; Choi, Kyu Yong; Kim, Kyung Eun; Jung, Kyung Uk; Kim, Hyung Ook; Kim, Hungdai; Chun, Ho-Kyung; Park, Dong Il

    2016-01-01

    Background/Aims A subset of patients may develop colorectal cancer after a colonoscopy that is negative for malignancy. These missed or de novo lesions are referred to as interval cancers. The aim of this study was to determine whether interval colon cancers are more likely to result from the loss of function of mismatch repair genes than sporadic cancers and to demonstrate microsatellite instability (MSI). Methods Interval cancer was defined as a cancer that was diagnosed within 5 years of a negative colonoscopy. Among the patients who underwent an operation for colorectal cancer from January 2013 to December 2014, archived cancer specimens were evaluated for MSI by sequencing microsatellite loci. Results Of the 286 colon cancers diagnosed during the study period, 25 (8.7%) represented interval cancer. MSI was found in eight of the 25 patients (32%) that presented interval cancers compared with 22 of the 261 patients (8.4%) that presented sporadic cancers (p=0.002). In the multivariable logistic regression model, MSI was associated with interval cancer (OR, 3.91; 95% confidence interval, 1.38 to 11.05). Conclusions Interval cancers were approximately four times more likely to show high MSI than sporadic cancers. Our findings indicate that certain interval cancers may occur because of distinct biological features. PMID:27114419

  2. Magnus approximation in neutrino oscillations

    NASA Astrophysics Data System (ADS)

    Acero, Mario A.; Aguilar-Arevalo, Alexis A.; D'Olivo, J. C.

    2011-04-01

    Oscillations between active and sterile neutrinos remain an open possibility to explain some anomalous experimental observations. In a four-neutrino (three active plus one sterile) mixing scheme, we use the Magnus expansion of the evolution operator to study the evolution of neutrino flavor amplitudes within the Earth. We apply this formalism to calculate the transition probabilities from active to sterile neutrinos with energies of the order of a few GeV, taking into account the matter effect for a varying terrestrial density.
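
    The first-order Magnus approximation referred to here replaces the time-ordered evolution operator by the exponential of the time-averaged Hamiltonian, U ≈ exp(−i ∫ H dt). A toy two-level sketch (arbitrary units, illustrative matter potential; a stand-in for the four-neutrino system, not the paper's Hamiltonian):

```python
import numpy as np
from scipy.linalg import expm

# Toy two-flavor Hamiltonian in the flavor basis (units arbitrary, values assumed):
# H(t) = [[V(t), w], [w, 0]] with a slowly varying matter potential V(t).
w = 0.5
V = lambda t: 1.0 + 0.3 * np.sin(t)

def H(t):
    return np.array([[V(t), w], [w, 0.0]], dtype=complex)

T, n = 10.0, 2000
ts = np.linspace(0.0, T, n + 1)

# First-order Magnus: U = exp(-i * integral of H), integral via the trapezoid rule
Hbar = np.zeros((2, 2), dtype=complex)
for a, b in zip(ts[:-1], ts[1:]):
    Hbar += 0.5 * (H(a) + H(b)) * (b - a)
U_magnus = expm(-1j * Hbar)

# Reference: time-ordered product of many short exact steps
U_ref = np.eye(2, dtype=complex)
for a, b in zip(ts[:-1], ts[1:]):
    U_ref = expm(-1j * H(0.5 * (a + b)) * (b - a)) @ U_ref

psi0 = np.array([1.0, 0.0], dtype=complex)    # start in the first flavor
p_magnus = abs((U_magnus @ psi0)[1]) ** 2     # transition probability
p_ref = abs((U_ref @ psi0)[1]) ** 2
print(p_magnus, p_ref)                        # they differ where [H(t1), H(t2)] != 0
```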

  3. Born approximation, scattering, and algorithm

    NASA Astrophysics Data System (ADS)

    Martinez, Alex; Hu, Mengqi; Gu, Haicheng; Qiao, Zhijun

    2015-05-01

    In the past few decades, many imaging algorithms were designed under the assumption that multiple scattering is absent. Recently, we discussed an algorithm for removing high-order scattering components from collected data. This paper is a continuation of our previous work. First, we investigate the current state of multiple scattering in SAR. Then, we revise our method and test it. Given an estimate of our target reflectivity, we compute the multiple-scattering effects in the target region for various frequencies. We then propagate this energy through free space towards our antenna and remove it from the collected data.

  4. Sampling Theory and Confidence Intervals for Effect Sizes: Using ESCI To Illustrate "Bouncing" Confidence Intervals.

    ERIC Educational Resources Information Center

    Du, Yunfei

    This paper discusses the impact of sampling error on the construction of confidence intervals around effect sizes. Sampling error affects the location and precision of confidence intervals. Meta-analytic resampling demonstrates that confidence intervals can haphazardly bounce around the true population parameter. Special software with graphical…

  5. Assessing uncertainty in reference intervals via tolerance intervals: application to a mixed model describing HIV infection.

    PubMed

    Katki, Hormuzd A; Engels, Eric A; Rosenberg, Philip S

    2005-10-30

    We define the reference interval as the range between the 2.5th and 97.5th percentiles of a random variable. We use reference intervals to compare characteristics of a marker of disease progression between affected populations. We use a tolerance interval to assess uncertainty in the reference interval. Unlike the tolerance interval, the estimated reference interval does not contain the true reference interval with specified confidence (or credibility). The tolerance interval is easy to understand, communicate, and visualize. We derive estimates of the reference interval and its tolerance interval for markers defined by features of a linear mixed model. Examples considered are reference intervals for time trends in HIV viral load, and CD4 per cent, in HIV-infected haemophiliac children and homosexual men. We estimate the intervals with likelihood methods and also develop a Bayesian model in which the parameters are estimated via Markov chain Monte Carlo. The Bayesian formulation naturally overcomes some important limitations of the likelihood model. PMID:16189804

  6. Acceleration-induced electrocardiographic interval changes.

    PubMed

    Whinnery, C C; Whinnery, J E

    1988-02-01

    The electrocardiographic intervals (PR, QRS, QT, and RR) before, during, and post +Gz stress were measured in 24 healthy male subjects undergoing +Gz centrifuge exposure. The PR and QRS intervals responded in a predictable manner, shortening during stress and returning to baseline resting values post-stress. The QT interval, however, was not observed to be dependent solely on heart rate. Bazett's formula, which was developed to correct for heart rate variability, did not adequately result in a homogeneous correction of the QT interval for each stress-related period. During +Gz stress, the QT was shortened, and the QTc prolonged. The QT interval remained shortened even though the heart rate returned to baseline (with the QTc undercorrected) in the post-stress period. The QT (QTc) interval variations probably reflect the effects of both heart rate and autonomic balance during and after +Gz stress, and may provide a measure of the prevailing autonomic (sympathetic or parasympathetic) tone existing at a given point associated with +Gz stress. These electrocardiographic interval changes define the normal response for healthy individuals. Individuals with exaggerated autonomic responses could be identified by comparing their responses to these normal responses resulting from +Gz stress. PMID:3345170
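
    Bazett's correction mentioned above is a one-line computation, QTc = QT/√RR with RR in seconds; the numbers below are illustrative, not the study's data:

```python
import math

def qtc_bazett(qt_ms: float, rr_s: float) -> float:
    """Bazett-corrected QT: QTc = QT / sqrt(RR), with QT in ms and RR in seconds."""
    return qt_ms / math.sqrt(rr_s)

# Illustrative numbers (not from the study): QT = 320 ms at RR = 0.6 s (100 bpm)
print(qtc_bazett(320.0, 0.6))   # ~413 ms: a shortened QT can still yield a prolonged QTc
```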

  7. Interval modeling of dynamics for multibody systems

    NASA Astrophysics Data System (ADS)

    Auer, Ekaterina

    2007-02-01

    Modeling of multibody systems is an important though demanding field of application for interval arithmetic. Interval modeling of dynamics is particularly challenging, not least because of the differential equations which have to be solved in the process. Most modeling tools transform these equations into a (non-autonomous) initial value problem, for which interval solution algorithms are known. The challenge then consists in finding interfaces between these algorithms and the modeling tools. This includes choosing between "symbolic" and "numerical" modeling environments, transforming the usually non-autonomous resulting system into an autonomous one, ensuring conformity of the new interval version to the old numerical one, etc. In this paper, we focus on modeling multibody systems' dynamics with the interval extension of the "numerical" environment MOBILE, discuss the techniques which make the uniform treatment of interval and non-interval modeling easier, comment on the wrapping effect, and give reasons for our choice of MOBILE by comparing the results achieved with its help with those obtained by analogous symbolic tools.
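
    The wrapping effect mentioned above can be demonstrated in a few lines of naive interval arithmetic: for the phase-space rotation generated by x'' = −x, true solutions stay on circles, but iterated axis-aligned interval boxes balloon exponentially. A minimal sketch (naive interval Euler on our own toy example, not MOBILE's algorithms):

```python
# Naive interval arithmetic demo of the wrapping effect for x'' = -x,
# i.e. the phase-space map (x, v) -> (x + h*v, v - h*x).
# True solutions stay on circles; axis-aligned interval boxes balloon.

def iadd(a, b):                           # interval addition
    return (a[0] + b[0], a[1] + b[1])

def iscale(c, a):                         # interval times a real constant
    lo, hi = c * a[0], c * a[1]
    return (min(lo, hi), max(lo, hi))

h = 0.01
X, V = (0.99, 1.01), (-0.01, 0.01)        # small box around (1, 0)
for _ in range(int(6.2832 / h)):          # roughly one full revolution
    X, V = iadd(X, iscale(h, V)), iadd(V, iscale(-h, X))

print(X[1] - X[0], V[1] - V[0])           # widths grow ~e^(2*pi): pure wrapping
```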

  8. The microanalysis of fixed-interval responding

    PubMed Central

    Gentry, G. David; Weiss, Bernard; Laties, Victor G.

    1983-01-01

    The fixed-interval schedule of reinforcement is one of the more widely studied schedules in the experimental analysis of behavior and is also a common baseline for behavior pharmacology. Despite many intensive studies, the controlling variables and the pattern of behavior engendered are not well understood. The present study examined the microstructure and superstructure of the behavior engendered by a fixed-interval 5- and a fixed-interval 15-minute schedule of food reinforcement in the pigeon. Analysis of performance typical of fixed-interval responding indicated that the scalloped pattern does not result from smooth acceleration in responding, but, rather, from renewed pausing early in the interval. Individual interresponse-time (IRT) analyses provided no evidence of acceleration. There was a strong indication of alternation in shorter-longer IRTs, but these shorter-longer IRTs did not occur at random, reflecting instead a sequential dependency in successive IRTs. Furthermore, early in the interval there was a high relative frequency of short IRTs. Such a pattern of early pauses and short IRTs does not suggest behavior typical of reinforced responding as exemplified by the pattern found near the end of the interval. Thus, behavior from clearly scalloped performance can be classified into three states: postreinforcement pause, interim behavior, and terminal behavior. PMID:16812324

  9. Microanalysis of fixed-interval responding

    SciTech Connect

    Gentry, G.D.; Weiss, B.; Laties, V.G.

    1983-03-01

    The fixed-interval schedule of reinforcement is one of the more widely studied schedules in the experimental analysis of behavior and is also a common baseline for behavior pharmacology. Despite many intensive studies, the controlling variables and the pattern of behavior engendered are not well understood. The present study examined the microstructure and superstructure of the behavior engendered by a fixed-interval 5- and a fixed-interval 15-minute schedule of food reinforcement in the pigeon. Analysis of performance typical of fixed-interval responding indicated that the scalloped pattern does not result from smooth acceleration in responding, but, rather, from renewed pausing early in the interval. Individual interresponse-time (IRT) analyses provided no evidence of acceleration. There was a strong indication of alternation in shorter-longer IRTs, but these shorter-longer IRTs did not occur at random, reflecting instead a sequential dependency in successive IRTs. Furthermore, early in the interval there was a high relative frequency of short IRTs. Such a pattern of early pauses and short IRTs does not suggest behavior typical of reinforced responding as exemplified by the pattern found near the end of the interval. Thus, behavior from clearly scalloped performance can be classified into three states: postreinforcement pause, interim behavior, and terminal behavior. 31 references, 11 figures, 4 tables.

  10. Inexact rough-interval two-stage stochastic programming for conjunctive water allocation problems.

    PubMed

    Lu, Hongwei; Huang, Guohe; He, Li

    2009-10-01

    An inexact rough-interval two-stage stochastic programming (IRTSP) method is developed for conjunctive water allocation problems. Rough intervals (RIs), as a particular case of rough sets, are introduced into the modeling framework to tackle dual-layer information provided by decision makers. Through embedding upper and lower approximation intervals, rough intervals are capable of reflecting complex parameters with the most reliable and possible variation ranges being identified. An interactive solution method is also derived. A conjunctive water-allocation system is then structured for characterizing the proposed model. Solutions indicate a detailed optimal allocation scheme with a rough-interval form; a total of [[1048.83, 2078.29]:[1482.26, 2020.60

  11. Publication Bias in Meta-Analysis: Confidence Intervals for Rosenthal's Fail-Safe Number

    PubMed Central

    Fragkos, Konstantinos C.; Tsagris, Michail; Frangos, Christos C.

    2014-01-01

    The purpose of the present paper is to assess the efficacy of confidence intervals for Rosenthal's fail-safe number. Although Rosenthal's estimator is highly used by researchers, its statistical properties are largely unexplored. First of all, we developed statistical theory which allowed us to produce confidence intervals for Rosenthal's fail-safe number. This was produced by discerning whether the number of studies analysed in a meta-analysis is fixed or random. Each case produces different variance estimators. For a given number of studies and a given distribution, we provided five variance estimators. Confidence intervals are examined with a normal approximation and a nonparametric bootstrap. The accuracy of the different confidence interval estimates was then tested by methods of simulation under different distributional assumptions. The half normal distribution variance estimator has the best probability coverage. Finally, we provide a table of lower confidence intervals for Rosenthal's estimator. PMID:27437470
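
    For orientation, Rosenthal's fail-safe number itself has a simple closed form under Stouffer combination: the number N of unpublished null (Z = 0) studies needed to pull the combined Z-score below the significance cutoff, N = (ΣZ_i)²/z_α² − k. A sketch with illustrative data:

```python
import math

def fail_safe_n(z_scores, z_alpha=1.645):
    """Rosenthal's fail-safe N: how many unpublished null studies would make
    the Stouffer combined Z-score just non-significant (one-tailed alpha)."""
    s = sum(z_scores)
    return s * s / (z_alpha * z_alpha) - len(z_scores)

# Illustrative data (not from the paper): 10 studies, each with Z = 2.0
print(fail_safe_n([2.0] * 10))   # ~137.8 -> report as 138 hidden null studies
```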

  12. Estimation of postmortem interval based on colony development time for Anoplolepsis longipes (Hymenoptera: Formicidae).

    PubMed

    Goff, M L; Win, B H

    1997-11-01

    The postmortem interval for a set of human remains discovered inside a metal tool box was estimated using the development time required for a stratiomyid fly (Diptera: Stratiomyidae), Hermetia illucens, in combination with the time required to establish a colony of the ant Anoplolepsis longipes (Hymenoptera: Formicidae) capable of producing alate (winged) reproductives. This analysis resulted in a postmortem interval estimate of 14+ months, with a period of 14-18 months being the most probable time interval. The victim had been missing for approximately 18 months. PMID:9397565

  13. Estimation of postmortem interval based on colony development time for Anoplolepsis longipes (Hymenoptera: Formicidae).

    PubMed

    Goff, M L; Win, B H

    1997-11-01

    The postmortem interval for a set of human remains discovered inside a metal tool box was estimated using the development time required for a stratiomyid fly (Diptera: Stratiomyidae), Hermetia illucens, in combination with the time required to establish a colony of the ant Anoplolepsis longipes (Hymenoptera: Formicidae) capable of producing alate (winged) reproductives. This analysis resulted in a postmortem interval estimate of 14+ months, with a period of 14-18 months being the most probable time interval. The victim had been missing for approximately 18 months.

  14. Advanced Interval Management: A Benefit Analysis

    NASA Technical Reports Server (NTRS)

    Timer, Sebastian; Peters, Mark

    2016-01-01

    This document is the final report for the NASA Langley Research Center (LaRC)-sponsored task order 'Possible Benefits for Advanced Interval Management Operations.' Under this research project, Architecture Technology Corporation performed an analysis to determine the maximum potential benefit to be gained if specific Advanced Interval Management (AIM) operations were implemented in the National Airspace System (NAS). The motivation for this research is to guide NASA decision-making on which Interval Management (IM) applications offer the most potential benefit and warrant further research.

  15. Importance of QT interval in clinical practice.

    PubMed

    Ambhore, Anand; Teo, Swee-Guan; Bin Omar, Abdul Razakjr; Poh, Kian-Keong

    2014-12-01

    Long QT interval is an important finding that is often missed by electrocardiogram interpreters. Long QT syndrome (inherited and acquired) is a potentially lethal cardiac channelopathy that is frequently mistaken for epilepsy. We present a case of long QT syndrome with multiple cardiac arrests presenting as syncope and seizures. The long QTc interval was aggravated by hypomagnesaemia and drugs, including clarithromycin and levofloxacin. Multiple drugs can cause prolongation of the QT interval, and all physicians should bear this in mind when prescribing these drugs.

  16. Determination of short-term error caused by the reference clock in precision time-interval measurement and generation

    NASA Astrophysics Data System (ADS)

    Kalisz, Jozef

    1988-06-01

    A simple analysis based on the randomized clock cycle T_0 yields a useful formula for its variance in terms of the Allan variance. The short-term uncertainty of the measured or generated time interval t is expressed, in an approximate form, by the standard deviation as a function of the Allan variance. The estimates obtained are useful for determining the measurement uncertainty of time intervals within the approximate range of 10 ms to 100 s.
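
    The Allan variance invoked here is computed from block averages of fractional-frequency data, σ_y²(mτ0) = ½⟨(ȳ_{k+1} − ȳ_k)²⟩. A minimal non-overlapping estimator on synthetic white frequency noise (all numbers illustrative):

```python
import numpy as np

def allan_variance(y, m=1):
    """Non-overlapping Allan variance of fractional-frequency data y at
    averaging factor m: 0.5 * mean of squared differences of block averages."""
    n = len(y) // m
    ybar = y[:n * m].reshape(n, m).mean(axis=1)   # averages over blocks of m samples
    d = np.diff(ybar)
    return 0.5 * np.mean(d * d)

rng = np.random.default_rng(1)
y = 1e-11 * rng.standard_normal(100000)           # synthetic white frequency noise
for m in (1, 10, 100):
    print(m, allan_variance(y, m))                # falls off like 1/m for white FM
```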

  17. Improved effective vector boson approximation revisited

    NASA Astrophysics Data System (ADS)

    Bernreuther, Werner; Chen, Long

    2016-03-01

    We reexamine the improved effective vector boson approximation, which is based on two-vector-boson luminosities L_pol, for the computation of weak gauge-boson hard scattering subprocesses V1 V2 → W in high-energy hadron-hadron or e⁻e⁺ collisions. We calculate these luminosities for the nine combinations of the transverse and longitudinal polarizations of V1 and V2 in the unitary and axial gauge. For these two gauge choices the quality of this approach is investigated for the reactions e⁻e⁺ → W⁻W⁺ νe ν̄e and e⁻e⁺ → t t̄ νe ν̄e using appropriate phase-space cuts.

  18. Robust inter-beat interval estimation in cardiac vibration signals.

    PubMed

    Brüser, C; Winter, S; Leonhardt, S

    2013-02-01

    Reliable and accurate estimation of instantaneous frequencies of physiological rhythms, such as heart rate, is critical for many healthcare applications. Robust estimation is especially challenging when novel unobtrusive sensors are used for continuous health monitoring in uncontrolled environments, because these sensors can create significant amounts of potentially unreliable data. We propose a new flexible algorithm for the robust estimation of local (beat-to-beat) intervals from cardiac vibration signals, specifically ballistocardiograms (BCGs), recorded by an unobtrusive bed-mounted sensor. This sensor allows the measurement of motions of the body which are caused by cardiac activity. Our method requires neither a training phase nor any prior knowledge about the morphology of the heart beats in the analyzed waveforms. Instead, three short-time estimators are combined using a Bayesian approach to continuously estimate the inter-beat intervals. We have validated our method on over-night BCG recordings from 33 subjects (8 normal, 25 insomniacs). On this dataset, containing approximately one million heart beats, our method achieved a mean beat-to-beat interval error of 0.78% with a coverage of 72.69%.

  19. Visual feedback for retuning to just intonation intervals

    NASA Astrophysics Data System (ADS)

    Ayers, R. Dean; Nordquist, Peter R.; Corn, Justin S.

    2005-04-01

    Musicians become used to equal temperament pitch intervals due to their widespread use in tuning pianos and other fixed-pitch instruments. For unaccompanied singing and some other performance situations, a more harmonious blending of sounds can be achieved by shifting to just intonation intervals. Lissajous figures provide immediate and striking visual feedback that emphasizes the frequency ratios and pitch intervals found among the first few members of a single harmonic series. Spirograph patterns (hypotrochoids) are also especially simple for ratios of small whole numbers, and their use for providing feedback to singers has been suggested previously [G. W. Barton, Jr., Am. J. Phys. 44(6), 593-594 (1976)]. A hybrid mixture of these methods for comparing two frequencies generates what appears to be a three-dimensional Lissajous figure: a cylindrical wire mesh that rotates about its tilted vertical axis, with zero tilt yielding the familiar Lissajous figure. Sine wave inputs work best, but the sounds of flute, recorder, whistling, and a sung "oo" are good enough approximations to work well. This initial study compares the three modes of presentation in terms of the ease with which a singer can obtain a desired pattern and recognize its shape.
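
    The underlying Lissajous data for a just interval are easy to generate: for a just perfect fifth the two frequencies sit in a 3:2 ratio, so the curve closes after two cycles of the lower tone. A short sketch (plotting omitted; all values illustrative):

```python
import numpy as np

f1, ratio = 220.0, 3 / 2              # lower tone and a just perfect fifth (3:2)
f2 = ratio * f1
t = np.linspace(0.0, 2 / f1, 2000)    # two periods of f1 close the 3:2 figure

x = np.sin(2 * np.pi * f1 * t)
y = np.sin(2 * np.pi * f2 * t)

# The (x, y) pairs trace a closed Lissajous figure only when f2/f1 is a ratio
# of small whole numbers; detuning f2 by a few cents makes the figure precess,
# which is the kind of drift the feedback methods above make visible.
print(x[:3], y[:3])
```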

  20. Globally convergent autocalibration using interval analysis.

    PubMed

    Fusiello, Andrea; Benedetti, Arrigo; Farenzena, Michela; Busti, Alessandro

    2004-12-01

    We address the problem of autocalibration of a moving camera with unknown constant intrinsic parameters. Existing autocalibration techniques use numerical optimization algorithms whose convergence to the correct result cannot be guaranteed, in general. To address this problem, we have developed a method where an interval branch-and-bound method is employed for numerical minimization. Thanks to the properties of Interval Analysis, this method converges to the global solution with mathematical certainty and arbitrary accuracy, and the only input information it requires from the user is a set of point correspondences and a search interval. The cost function is based on the Huang-Faugeras constraint of the essential matrix. A recently proposed interval extension based on Bernstein polynomial forms has been investigated to speed up the search for the solution. Finally, experimental results are presented. PMID:15573823
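
    The interval branch-and-bound idea is compact enough to sketch for a one-dimensional cost: bound the function over each box with a natural interval extension, discard boxes whose lower bound exceeds the best upper bound found so far, and bisect the survivors. The toy cost below is illustrative, not the Huang-Faugeras criterion:

```python
import heapq

def f_interval(lo, hi):
    # Natural interval extension of f(x) = x**4 - 3*x**2 + 1 on [lo, hi].
    # Even powers attain their minimum 0 whenever the box straddles zero.
    def even_pow(p):
        a, b = lo ** p, hi ** p
        return (0.0 if lo <= 0.0 <= hi else min(a, b)), max(a, b)
    q4, q2 = even_pow(4), even_pow(2)
    return q4[0] - 3 * q2[1] + 1, q4[1] - 3 * q2[0] + 1

def branch_and_bound(lo, hi, tol=1e-9):
    best_hi = f_interval(lo, hi)[1]           # certified upper bound on the minimum
    heap = [(f_interval(lo, hi)[0], lo, hi)]
    while heap:
        flo, a, b = heapq.heappop(heap)
        if flo > best_hi or b - a < tol:
            continue                          # box pruned or small enough
        m = 0.5 * (a + b)
        for c, d in ((a, m), (m, b)):         # bisect and re-bound
            lo_cd, hi_cd = f_interval(c, d)
            best_hi = min(best_hi, hi_cd)     # any box's sup bounds the minimum
            if lo_cd <= best_hi:
                heapq.heappush(heap, (lo_cd, c, d))
    return best_hi

print(branch_and_bound(-2.0, 2.0))   # ~ -1.25, attained at x = +/- sqrt(3/2)
```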

  1. Efficient Computation Of Confidence Intervals Of Parameters

    NASA Technical Reports Server (NTRS)

    Murphy, Patrick C.

    1992-01-01

    Study focuses on obtaining efficient algorithm for estimation of confidence intervals of ML estimates. Four algorithms selected to solve associated constrained optimization problem. Hybrid algorithms, following search and gradient approaches, prove best.

  2. Predictive intervals for age-specific fertility.

    PubMed

    Keilman, N; Pham, D Q

    2000-03-01

    A multivariate ARIMA model is combined with a Gamma curve to predict confidence intervals for age-specific birth rates by 1-year age groups. The method is applied to observed age-specific births in Norway between 1900 and 1995, and predictive intervals are computed for each year up to 2050. The predicted two-thirds confidence intervals for Total Fertility (TF) around 2010 agree well with TF errors in old population forecasts made by Statistics Norway. The method gives useful predictions for age-specific fertility up to the years 2020-30. For later years, the intervals become too wide. Methods that do not take into account estimation errors in the ARIMA model coefficients underestimate the uncertainty for future TF values. The findings suggest that the margins between high and low fertility variants in official population forecasts for many Western countries are too narrow. PMID:12158991

  3. Intact Interval Timing in Circadian CLOCK Mutants

    PubMed Central

    Cordes, Sara; Gallistel, C. R.

    2008-01-01

    While progress has been made in determining the molecular basis for the circadian clock, the mechanism by which mammalian brains time intervals measured in seconds to minutes remains a mystery. An obvious question is whether the interval timing mechanism shares molecular machinery with the circadian timing mechanism. In the current study, we trained circadian CLOCK +/− and −/− mutant male mice in a peak-interval procedure with 10 and 20-s criteria. The mutant mice were more active than their wild-type littermates, but there were no reliable deficits in the accuracy or precision of their timing as compared with wild-type littermates. This suggests that expression of the CLOCK protein is not necessary for normal interval timing. PMID:18602902

  4. Almost primes in almost all short intervals

    NASA Astrophysics Data System (ADS)

    TERÄVÄINEN, JONI

    2016-09-01

    Let $E_k$ be the set of positive integers having exactly $k$ prime factors. We show that almost all intervals $[x, x+\log^{1+\varepsilon} x]$ contain $E_3$ numbers, and almost all intervals $[x, x+\log^{3.51} x]$ contain $E_2$ numbers. By this we mean that there are only $o(X)$ integers $1 \leq x \leq X$ for which the mentioned intervals do not contain such numbers. The result for $E_3$ numbers is optimal up to the $\varepsilon$ in the exponent. The theorem on $E_2$ numbers improves a result of Harman, which had the exponent $7+\varepsilon$ in place of $3.51$. We will also consider general $E_k$ numbers, and find them on intervals whose lengths approach $\log x$ as $k \to \infty$.

  5. Calibration intervals at Bendix Kansas City

    SciTech Connect

    James, R.T.

    1980-01-01

    The calibration interval evaluation methods and control in each calibrating department of the Bendix Corp., Kansas City Division is described, and a more detailed description of those employed in metrology is provided.

  6. SOPIE: Sequential Off-Pulse Interval Estimation

    NASA Astrophysics Data System (ADS)

    Schutte, Willem D.

    2016-07-01

    SOPIE (Sequential Off-Pulse Interval Estimation) provides functions to non-parametrically estimate the off-pulse interval of a source function originating from a pulsar. The technique is based on a sequential application of P-values obtained from goodness-of-fit tests for the uniform distribution, such as the Kolmogorov-Smirnov, Cramér-von Mises, Anderson-Darling and Rayleigh goodness-of-fit tests.

  7. Probability Distribution for Flowing Interval Spacing

    SciTech Connect

    S. Kuzio

    2004-09-22

    Fracture spacing is a key hydrologic parameter in analyses of matrix diffusion. Although the individual fractures that transmit flow in the saturated zone (SZ) cannot be identified directly, it is possible to determine the fractured zones that transmit flow from flow meter survey observations. The fractured zones that transmit flow as identified through borehole flow meter surveys have been defined in this report as flowing intervals. The flowing interval spacing is measured between the midpoints of each flowing interval. The determination of flowing interval spacing is important because the flowing interval spacing parameter is a key hydrologic parameter in SZ transport modeling, which impacts the extent of matrix diffusion in the SZ volcanic matrix. The output of this report is input to the "Saturated Zone Flow and Transport Model Abstraction" (BSC 2004 [DIRS 170042]). Specifically, the analysis of data and development of a data distribution reported herein is used to develop the uncertainty distribution for the flowing interval spacing parameter for the SZ transport abstraction model. Figure 1-1 shows the relationship of this report to other model reports that also pertain to flow and transport in the SZ. Figure 1-1 also shows the flow of key information among the SZ reports. It should be noted that Figure 1-1 does not contain a complete representation of the data and parameter inputs and outputs of all SZ reports, nor does it show inputs external to this suite of SZ reports. Use of the developed flowing interval spacing probability distribution is subject to the limitations of the assumptions discussed in Sections 5 and 6 of this analysis report. The number of fractures in a flowing interval is not known. Therefore, the flowing intervals are assumed to be composed of one flowing zone in the transport simulations. This analysis may overestimate the flowing interval spacing because the number of fractures that contribute to a flowing interval cannot be determined.

  8. Automatic detection of atrial fibrillation using the coefficient of variation and density histograms of RR and deltaRR intervals.

    PubMed

    Tateno, K; Glass, L

    2001-11-01

    The paper describes a method for the automatic detection of atrial fibrillation, an abnormal heart rhythm, based on the sequence of intervals between heartbeats. The RR interval is the interbeat interval, and deltaRR is the difference between two successive RR intervals. Standard density histograms of the RR and deltaRR intervals were prepared as templates for atrial fibrillation detection. As the coefficients of variation of the RR and deltaRR intervals were approximately constant during atrial fibrillation, the coefficients of variation in the test data could be compared with the standard coefficients of variation (CV test). Further, the similarities between the density histograms of the test data and the standard density histograms were estimated using the Kolmogorov-Smirnov test. The CV test based on the RR intervals showed a sensitivity of 86.6% and a specificity of 84.3%. The CV test based on the deltaRR intervals showed that the sensitivity and the specificity are both approximately 84%. The Kolmogorov-Smirnov test based on the RR intervals did not improve on the result of the CV test. In contrast, the Kolmogorov-Smirnov test based on the deltaRR intervals showed a sensitivity of 94.4% and a specificity of 97.2%.
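
    The CV test described above reduces to comparing a scalar statistic against a standard value; a minimal sketch (the standard AF value and tolerance are assumptions for illustration, not the paper's tuned constants):

```python
import numpy as np

def cv(x):
    """Coefficient of variation: standard deviation divided by the mean."""
    x = np.asarray(x, dtype=float)
    return x.std() / x.mean()

def af_suspected(rr_ms, cv_af=0.24, tol=0.08):
    """Flag atrial fibrillation if the RR coefficient of variation is close to
    an assumed standard AF value; cv_af and tol are illustrative, not tuned."""
    return abs(cv(rr_ms) - cv_af) < tol

rr_sinus = 800 + 20 * np.random.default_rng(2).standard_normal(300)  # regular rhythm
print(cv(rr_sinus), af_suspected(rr_sinus))   # low CV -> not flagged
```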

  9. Multifactor analysis of multiscaling in volatility return intervals

    NASA Astrophysics Data System (ADS)

    Wang, Fengzhong; Yamasaki, Kazuko; Havlin, Shlomo; Stanley, H. Eugene

    2009-01-01

    We study the volatility time series of the 1137 most traded stocks in the U.S. stock markets for the two-year period 2001-2002 and analyze their return intervals τ, which are time intervals between volatilities above a given threshold q. We explore the probability density function of τ, P_q(τ), assuming a stretched exponential function, P_q(τ) ~ exp(-τ^γ). We find that the exponent γ depends on the threshold in the range between q=1 and 6 standard deviations of the volatility. This finding supports the multiscaling nature of the return interval distribution. To better understand the multiscaling origin, we study how γ depends on four essential factors: capitalization, risk, number of trades, and return. We show that γ depends on the capitalization, risk, and return but almost does not depend on the number of trades. This suggests that γ relates to portfolio selection but not to market activity. To further characterize the multiscaling of individual stocks, we fit the moments of τ, μ_m ≡ ⟨(τ/⟨τ⟩)^m⟩^{1/m}, in the range 10 < ⟨τ⟩ ≤ 100 by a power law, μ_m ~ ⟨τ⟩^δ. The exponent δ is found also to depend on the capitalization, risk, and return but not on the number of trades, and its tendency is opposite to that of γ. Moreover, we show that δ decreases with increasing γ approximately by a linear relation. The return intervals demonstrate the temporal structure of volatilities, and our findings suggest that their multiscaling features may be helpful for portfolio optimization.

  10. Kernel polynomial approximations for densities of states and spectral functions

    SciTech Connect

    Silver, R.N.; Voter, A.F.; Kress, J.D.; Roeder, H.

    1996-03-01

    Chebyshev polynomial approximations are an efficient and numerically stable way to calculate properties of the very large Hamiltonians important in computational condensed matter physics. The present paper derives an optimal kernel polynomial which enforces positivity of density of states and spectral estimates, achieves the best energy resolution, and preserves normalization. This kernel polynomial method (KPM) is demonstrated for electronic structure and dynamic magnetic susceptibility calculations. For tight binding Hamiltonians of Si, we show how to achieve high precision and rapid convergence of the cohesive energy and vacancy formation energy by careful attention to the order of approximation. For disordered XXZ-magnets, we show that the KPM provides a simpler and more reliable procedure for calculating spectral functions than Lanczos recursion methods. Polynomial approximations to Fermi projection operators are also proposed. 26 refs., 10 figs.
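
    The KPM recipe is short in outline: rescale the spectrum into (−1, 1), form Chebyshev moments μ_n, damp them with a kernel, and resum. The sketch below uses the common Jackson kernel and, for clarity, computes the moments exactly from dense eigenvalues of a small random matrix; in practice one uses stochastic trace estimation with matrix-vector products, and the optimal kernel is the paper's subject, not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((200, 200))
H = (A + A.T) / 2                        # small dense symmetric "Hamiltonian"

# Rescale the spectrum into (-1, 1)
ev = np.linalg.eigvalsh(H)
a = (ev.max() - ev.min()) / (2 * 0.98)
b = (ev.max() + ev.min()) / 2
x = (ev - b) / a                         # rescaled eigenvalues

M = 100                                  # number of Chebyshev moments
n = np.arange(M)
# Moments mu_n = (1/N) sum_k T_n(x_k) = (1/N) sum_k cos(n * arccos(x_k))
mu = np.cos(np.outer(n, np.arccos(x))).mean(axis=1)

# Jackson kernel: damps Gibbs oscillations and keeps the estimate positive
g = ((M - n + 1) * np.cos(np.pi * n / (M + 1))
     + np.sin(np.pi * n / (M + 1)) / np.tan(np.pi / (M + 1))) / (M + 1)

# Reconstruct the density of states on a grid
xs = np.linspace(-0.99, 0.99, 400)
T = np.cos(np.outer(n, np.arccos(xs)))   # T_n evaluated on the grid
dos = g[0] * mu[0] + 2 * (g[1:, None] * mu[1:, None] * T[1:]).sum(axis=0)
dos /= np.pi * np.sqrt(1 - xs ** 2)
print(xs[np.argmax(dos)], dos.max())     # should resemble the semicircle law
```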

  11. Performances on ratio and interval schedules of reinforcement: Data and theory.

    PubMed

    Baum, W M

    1993-03-01

    Two differences between ratio and interval performance are well known: (a) higher rates occur on ratio schedules, and (b) ratio schedules are unable to maintain responding at low rates of reinforcement (ratio "strain"). A third phenomenon, a downturn in response rate at the highest rates of reinforcement, is well documented for ratio schedules and is predicted for interval schedules. Pigeons were exposed to multiple variable-ratio variable-interval schedules in which the intervals generated in the variable-ratio component were programmed in the variable-interval component, thereby "yoking" or approximately matching reinforcement in the two components. The full range of ratio performances was studied, from strained to continuous reinforcement. In addition to the expected phenomena, a new phenomenon was observed: an upturn in variable-interval response rate in the midrange of rates of reinforcement that brought response rates on the two schedules to equality before the downturn at the highest rates of reinforcement. When the average response rate was corrected by eliminating pausing after reinforcement, the downturn in response rate vanished, leaving a strictly monotonic performance curve. This apparent functional independence of the postreinforcement pause and the qualitative shift in response implied by the upturn in variable-interval response rate suggest that theoretical accounts will require thinking of behavior as partitioned among at least three categories, and probably four: postreinforcement activity, other unprogrammed activity, ratio-typical operant behavior, and interval-typical operant behavior.

  12. A consistent collinear triad approximation for operational wave models

    NASA Astrophysics Data System (ADS)

    Salmon, J. E.; Smit, P. B.; Janssen, T. T.; Holthuijsen, L. H.

    2016-08-01

    In shallow water, the spectral evolution associated with energy transfers due to three-wave (or triad) interactions is important for the prediction of nearshore wave propagation and wave-driven dynamics. The numerical evaluation of these nonlinear interactions involves the evaluation of a weighted convolution integral in both frequency and directional space for each frequency-direction component in the wave field. For reasons of efficiency, operational wave models often rely on a so-called collinear approximation that assumes that energy is only exchanged between wave components travelling in the same direction (collinear propagation) to eliminate the directional convolution. In this work, we show that the collinear approximation as presently implemented in operational models is inconsistent. This causes energy transfers to become unbounded in the limit of unidirectional waves (narrow aperture), and results in the underestimation of energy transfers in short-crested wave conditions. We propose a modification to the collinear approximation to remove this inconsistency and to make it physically more realistic. Through comparison with laboratory observations and results from Monte Carlo simulations, we demonstrate that the proposed modified collinear model is consistent, remains bounded, smoothly converges to the unidirectional limit, and is numerically more robust. Our results show that the modifications proposed here result in a consistent collinear approximation, which remains bounded and can provide an efficient approximation to model nonlinear triad effects in operational wave models.

  13. Bond selective chemistry beyond the adiabatic approximation

    SciTech Connect

    Butler, L.J.

    1993-12-01

    One of the most important challenges in chemistry is to develop predictive ability for the branching between energetically allowed chemical reaction pathways. Such predictive capability, coupled with a fundamental understanding of the important molecular interactions, is essential to the development and utilization of new fuels and the design of efficient combustion processes. Existing transition state and exact quantum theories successfully predict the branching between available product channels for systems in which each reaction coordinate can be adequately described by different paths along a single adiabatic potential energy surface. In particular, unimolecular dissociation following thermal, infrared multiphoton, or overtone excitation in the ground state yields a branching between energetically allowed product channels which can be successfully predicted by the application of statistical theories, i.e., the weakest bond breaks. (The predictions are particularly good for competing reactions in which there is no saddle point along the reaction coordinates, as in simple bond fission reactions.) The predicted lack of bond selectivity results from the assumption of rapid internal vibrational energy redistribution and the implicit use of a single adiabatic Born-Oppenheimer potential energy surface for the reaction. However, the adiabatic approximation is not valid for the reaction of a wide variety of energetic materials and organic fuels; coupling between the electronic states of the reacting species plays a key role in determining the selectivity of the chemical reactions induced. The work described below investigated the central role played by coupling between electronic states in polyatomic molecules in determining the selective branching between energetically allowed fragmentation pathways in two key systems.

  14. Generating exact solutions to Einstein's equation using linearized approximations

    NASA Astrophysics Data System (ADS)

    Harte, Abraham I.; Vines, Justin

    2016-10-01

    We show that certain solutions to the linearized Einstein equation can—by the application of a particular type of linearized gauge transformation—be straightforwardly transformed into solutions of the exact Einstein equation. In cases with nontrivial matter content, the exact stress-energy tensor of the transformed metric has the same eigenvalues and eigenvectors as the linearized stress-energy tensor of the initial approximation. When our gauge exists, the tensorial structure of transformed metric perturbations identically eliminates all nonlinearities in Einstein's equation. As examples, we derive the exact Kerr and gravitational plane wave metrics from standard harmonic-gauge approximations.

  15. Musical intervals and relative pitch: Frequency resolution, not interval resolution, is special

    PubMed Central

    McDermott, Josh H.; Keebler, Michael V.; Micheyl, Christophe; Oxenham, Andrew J.

    2010-01-01

    Pitch intervals are central to most musical systems, which utilize pitch at the expense of other acoustic dimensions. It seemed plausible that pitch might uniquely permit precise perception of the interval separating two sounds, as this could help explain its importance in music. To explore this notion, a simple discrimination task was used to measure the precision of interval perception for the auditory dimensions of pitch, brightness, and loudness. Interval thresholds were then expressed in units of just-noticeable differences for each dimension, to enable comparison across dimensions. Contrary to expectation, when expressed in these common units, interval acuity was actually worse for pitch than for loudness or brightness. This likely indicates that the perceptual dimension of pitch is unusual not for interval perception per se, but rather for the basic frequency resolution it supports. The ubiquity of pitch in music may be due in part to this fine-grained basic resolution. PMID:20968366

  16. Sunspot Time Series: Passive and Active Intervals

    NASA Astrophysics Data System (ADS)

    Zięba, S.; Nieckarz, Z.

    2014-07-01

    Solar activity slowly and irregularly decreases from the first spotless day (FSD) in the declining phase of the old sunspot cycle and systematically, but also in an irregular way, increases to the new cycle maximum after the last spotless day (LSD). The time interval between the first and the last spotless day can be called the passive interval (PI), while the time interval from the last spotless day to the first one after the new cycle maximum is the related active interval (AI). Minima of solar cycles are inside PIs, while maxima are inside AIs. In this article, we study the properties of passive and active intervals to determine the relation between them. We have found that some properties of PIs, and related AIs, differ significantly between two group of solar cycles; this has allowed us to classify Cycles 8 - 15 as passive cycles, and Cycles 17 - 23 as active ones. We conclude that the solar activity in the PI declining phase (a descending phase of the previous cycle) determines the strength of the approaching maximum in the case of active cycles, while the activity of the PI rising phase (a phase of the ongoing cycle early growth) determines the strength of passive cycles. This can have implications for solar dynamo models. Our approach indicates the important role of solar activity during the declining and the rising phases of the solar-cycle minimum.

  17. Dietary reference intervals for vitamin D.

    PubMed

    Cashman, Kevin D

    2012-01-01

    Dietary reference intervals relate to the distribution of dietary requirement for a particular nutrient as defined by the distribution of physiological requirement for that nutrient. These have more commonly been called Dietary Reference Values (DRV) or Dietary Reference Intakes (DRI), amongst other names. The North American DRI for vitamin D are the most current dietary reference intervals and arguably arising from the most comprehensive evaluation and report on vitamin D nutrition to date. These are a family of nutrient reference values, including the Estimated Average Requirement (EAR), the Recommended Dietary Allowance (RDA), the Adequate Intake, and Tolerable Upper Intake Level. In particular, the EAR is used for planning and assessing diets of populations; it also serves as the basis for calculating the RDA, a value intended to meet the needs of nearly all people. The DRVs for vitamin D in the UK and the European Community have been in existence for almost two decades, and both are currently under review. The present paper briefly overviews these three sets of dietary reference intervals as case studies to highlight both the similarities as well as possible differences that may exist between reference intervals for vitamin D in different countries/regions. In addition, it highlights the scientific basis upon which these are based, which may explain some of the differences. Finally, it also overviews how the dietary reference intervals for vitamin D may be applied, and especially in terms of assessing the adequacy of vitamin D intake in populations. PMID:22536775

  18. An approximation technique for jet impingement flow

    SciTech Connect

    Najafi, Mahmoud; Fincher, Donald; Rahni, Taeibi; Javadi, KH.; Massah, H.

    2015-03-10

    The analytical approximate solution of a non-linear jet impingement flow model will be demonstrated. We will show that this is an improvement over the series approximation obtained via the Adomian decomposition method, which is itself a powerful method for analysing non-linear differential equations. The results of these approximations will be compared to the Runge-Kutta approximation in order to demonstrate their validity.

  19. A Direct Method for Obtaining Approximate Standard Error and Confidence Interval of Maximal Reliability for Composites with Congeneric Measures

    ERIC Educational Resources Information Center

    Raykov, Tenko; Penev, Spiridon

    2006-01-01

    Unlike a substantial part of reliability literature in the past, this article is concerned with weighted combinations of a given set of congeneric measures with uncorrelated errors. The relationship between maximal coefficient alpha and maximal reliability for such composites is initially dealt with, and it is shown that the former is a lower…

  20. Approximate Confidence Intervals for Moment-Based Estimators of the Between-Study Variance in Random Effects Meta-Analysis

    ERIC Educational Resources Information Center

    Jackson, Dan; Bowden, Jack; Baker, Rose

    2015-01-01

    Moment-based estimators of the between-study variance are very popular when performing random effects meta-analyses. This type of estimation has many advantages including computational and conceptual simplicity. Furthermore, by using these estimators in large samples, valid meta-analyses can be performed without the assumption that the treatment…

  1. Experimental Design for Stochastic Models of Nonlinear Signaling Pathways Using an Interval-Wise Linear Noise Approximation and State Estimation

    PubMed Central

    Zimmer, Christoph

    2016-01-01

    Background: Computational modeling is a key technique for analyzing models in systems biology. There are well established methods for the estimation of the kinetic parameters in models of ordinary differential equations (ODE). Experimental design techniques aim at devising experiments that maximize the information encoded in the data. For ODE models there are well established approaches for experimental design and even software tools. However, data from single cell experiments on signaling pathways in systems biology often shows intrinsic stochastic effects prompting the development of specialized methods. While simulation methods have been developed for decades and parameter estimation has been targeted for the last years, only very few articles focus on experimental design for stochastic models. Methods: The Fisher information matrix is the central measure for experimental design as it evaluates the information an experiment provides for parameter estimation. This article suggests an approach to calculate a Fisher information matrix for models containing intrinsic stochasticity and high nonlinearity. The approach makes use of a recently suggested multiple shooting for stochastic systems (MSS) objective function. The Fisher information matrix is calculated by evaluating pseudo data with the MSS technique. Results: The performance of the approach is evaluated with simulation studies on an Immigration-Death, a Lotka-Volterra, and a Calcium oscillation model. The Calcium oscillation model is a particularly appropriate case study as it contains the challenges inherent to signaling pathways: high nonlinearity, intrinsic stochasticity, a qualitatively different behavior from an ODE solution, and partial observability. The computational speed of the MSS approach for the Fisher information matrix allows for an application in realistic size models. PMID:27583802
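
    The central quantity is compact in code: for iid Gaussian observation noise, the Fisher information matrix reduces to FIM = Jᵀ J/σ² with J the parameter sensitivities of the model output. A generic finite-difference sketch with a stand-in model (not the MSS objective or the paper's method):

```python
import numpy as np

def model(theta, t):
    # Stand-in observable: damped oscillation with decay-rate and frequency parameters
    return np.exp(-theta[0] * t) * np.sin(theta[1] * t)

def fisher_information(theta, t, sigma=0.1, h=1e-6):
    """FIM = J^T J / sigma^2 for iid Gaussian noise of std sigma, with the
    sensitivity matrix J computed by central finite differences."""
    p = len(theta)
    J = np.empty((len(t), p))
    for i in range(p):
        dp = np.zeros(p)
        dp[i] = h
        J[:, i] = (model(theta + dp, t) - model(theta - dp, t)) / (2 * h)
    return J.T @ J / sigma ** 2

theta = np.array([0.3, 2.0])
t_sparse = np.linspace(0, 10, 5)       # two candidate experimental designs
t_dense = np.linspace(0, 10, 50)
for t in (t_sparse, t_dense):
    F = fisher_information(theta, t)
    print(np.linalg.det(F))            # D-optimality: larger det means a better design
```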

  2. Pigeons' choices between fixed-interval and random-interval schedules: utility of variability?

    PubMed

    Andrzejewski, Matthew E; Cardinal, Claudia D; Field, Douglas P; Flannery, Barbara A; Johnson, Michael; Bailey, Kathleen; Hineline, Philip N

    2005-03-01

    Pigeons' choosing between fixed-interval and random-interval schedules of reinforcement was investigated in three experiments using a discrete-trial procedure. In all three experiments, the random-interval schedule was generated by sampling a probability distribution at an interval (and in multiples of the interval) equal to that of the fixed-interval schedule. Thus the programmed delays to reinforcement on the random alternative were never shorter and were often longer than the fixed interval. Despite this feature, the fixed schedule was not strongly preferred. Increases in the probability used to generate the random interval resulted in decreased preferences for the fixed schedule. In addition, the number of consecutive choices on the preferred alternative varied directly with preference, whereas the consecutive number of choices on the nonpreferred alternative was fairly constant. The probability of choosing the random alternative was unaffected by the immediately prior interval encountered on that schedule, even when it was very long relative to the average value. The results loosely support conceptions of a "preference for variability" from foraging theory and the "utility of behavioral variability" from human decision-making literatures.

  3. Statistics of return intervals between long heartbeat intervals and their usability for online prediction of disorders

    NASA Astrophysics Data System (ADS)

    Bogachev, Mikhail I.; Kireenkov, Igor S.; Nifontov, Eugene M.; Bunde, Armin

    2009-06-01

    We study the statistics of return intervals between large heartbeat intervals (above a certain threshold Q) in 24 h records obtained from healthy subjects. We find that both the linear and the nonlinear long-term memory inherent in the heartbeat intervals lead to power-laws in the probability density function P_Q(r) of the return intervals. As a consequence, the probability W_Q(t; Δt) that at least one large heartbeat interval will occur within the next Δt heartbeat intervals, with an increasing elapsed number of intervals t after the last large heartbeat interval, follows a power-law. Based on these results, we suggest a method of obtaining a priori information about the occurrence of the next large heartbeat interval, and thus to predict it. We show explicitly that the proposed method, which exploits long-term memory, is superior to the conventional precursory pattern recognition technique, which focuses solely on short-term memory. We believe that our results can be straightforwardly extended to obtain more reliable predictions in other physiological signals like blood pressure, as well as in other complex records exhibiting multifractal behaviour, e.g. turbulent flow, precipitation, river flows and network traffic.

  4. Radial basis function networks with linear interval regression weights for symbolic interval data.

    PubMed

    Su, Shun-Feng; Chuang, Chen-Chia; Tao, C W; Jeng, Jin-Tsong; Hsiao, Chih-Ching

    2012-02-01

    This paper introduces a new structure of radial basis function networks (RBFNs) that can successfully model symbolic interval-valued data. In the proposed structure, to handle symbolic interval data, the Gaussian functions required in the RBFNs are modified to consider interval distance measure, and the synaptic weights of the RBFNs are replaced by linear interval regression weights. In the linear interval regression weights, the lower and upper bounds of the interval-valued data as well as the center and range of the interval-valued data are considered. In addition, in the proposed approach, two stages of learning mechanisms are proposed. In stage 1, an initial structure (i.e., the number of hidden nodes and the adjustable parameters of radial basis functions) of the proposed structure is obtained by the interval competitive agglomeration clustering algorithm. In stage 2, a gradient-descent kind of learning algorithm is applied to fine-tune the parameters of the radial basis function and the coefficients of the linear interval regression weights. Various experiments are conducted, and the average behavior of the root mean square error and the square of the correlation coefficient in the framework of a Monte Carlo experiment are considered as the performance index. The results clearly show the effectiveness of the proposed structure.

  5. Perceptual interference decays over short unfilled intervals.

    PubMed

    Schulkind, M D

    2000-09-01

    The perceptual interference effect refers to the fact that object identification is directly related to the amount of information available at initial exposure. The present article investigated whether perceptual interference would dissipate when a short, unfilled interval was introduced between exposures to a degraded object. Across three experiments using both musical and pictorial stimuli, identification performance increased directly with the length of the unfilled interval. Consequently, significant perceptual interference was obtained only when the interval between exposures was relatively short (< 500 msec for melodies; < 300 msec for pictures). These results are consistent with explanations that attribute perceptual interference to increased perceptual noise created by exposures to highly degraded objects. The data also suggest that perceptual interference is mediated by systems that are not consciously controlled by the subject and that perceptual interference in the visual domain decays more rapidly than perceptual interference in the auditory domain. PMID:11105520

  6. Atomic momentum patterns with narrower intervals

    NASA Astrophysics Data System (ADS)

    Yang, Baoguo; Jin, Shengjie; Dong, Xiangyu; Liu, Zhe; Yin, Lan; Zhou, Xiaoji

    2016-10-01

    We studied the atomic momentum distribution of a superposition of Bloch states in the lowest band of an optical lattice after the action of a standing-wave pulse. By designing the pulse imposed on this superposed state, an atomic momentum pattern appears with a narrow interval between adjacent peaks that can be far less than twice the recoil momentum. The patterns with narrower intervals come from the effect of the designed pulse on the superposition of many Bloch states with quasimomenta throughout the first Brillouin zone. Our experimental result of narrow-interval peaks is consistent with the theoretical simulation. The patterns of multiple modes with different quasimomenta may be helpful for precise measurement and atomic manipulation.

  7. New delay-interval stability condition

    NASA Astrophysics Data System (ADS)

    Souza, Fernando O.; Palhares, Reinaldo M.

    2014-03-01

    The delay-dependent stability problem for systems with time-delay varying in an interval is addressed in this article. The new idea in this article is to connect two very efficient approaches: the discretised Lyapunov functional for systems with pointwise delay and the convex analysis for systems with time-varying delay. The proposed method is able to check the stability interval when the time-varying delay d(t) belongs to an interval [r, τ]. The case of unstable delayed systems for r = 0 is also treatable. The resulting criterion, expressed in terms of a convex optimisation problem, outperforms the existing ones in the literature, as illustrated by the numerical examples.

  8. Interval prediction in structural dynamic analysis

    NASA Technical Reports Server (NTRS)

    Hasselman, Timothy K.; Chrostowski, Jon D.; Ross, Timothy J.

    1992-01-01

    Methods for assessing the predictive accuracy of structural dynamic models are examined with attention given to the effects of modal mass, stiffness, and damping uncertainties. The methods are based on a nondeterministic analysis called 'interval prediction' in which interval variables are used to describe parameters and responses that are unknown. Statistical databases for generic modeling uncertainties are derived from experimental data and incorporated analytically to evaluate responses. Covariance matrices of modal mass, stiffness, and damping parameters are propagated numerically in models of large space structures by means of three methods. The test data tend to fall within the predicted intervals of uncertainty determined by the statistical databases. The present findings demonstrate the suitability of using data from previously analyzed and tested space structures for assessing the predictive accuracy of an analytical model.

  9. On the Effective Construction of Compactly Supported Wavelets Satisfying Homogenous Boundary Conditions on the Interval

    NASA Technical Reports Server (NTRS)

    Chiavassa, G.; Liandrat, J.

    1996-01-01

    We construct compactly supported wavelet bases satisfying homogeneous boundary conditions on the interval (0,1). The main features of multiresolution analysis on the line are retained, including polynomial approximation and tree algorithms. The case of H^1_0(0,1) is detailed, and numerical values, required for the implementation, are provided for the Neumann and Dirichlet boundary conditions.

  10. Optimal and Most Exact Confidence Intervals for Person Parameters in Item Response Theory Models

    ERIC Educational Resources Information Center

    Doebler, Anna; Doebler, Philipp; Holling, Heinz

    2013-01-01

    The common way to calculate confidence intervals for item response theory models is to assume that the standardized maximum likelihood estimator for the person parameter [theta] is normally distributed. However, this approximation is often inadequate for short and medium test lengths. As a result, the coverage probabilities fall below the given…

  11. The Distribution of Phonated Intervals in the Speech of Individuals Who Stutter

    ERIC Educational Resources Information Center

    Godinho, Tara; Ingham, Roger J.; Davidow, Jason; Cotton, John

    2006-01-01

    Purpose: Previous research has demonstrated the fluency-improving effect of reducing the occurrence of short-duration, phonated intervals (PIs; approximately 30-150 ms) in individuals who stutter, prompting the hypothesis that PIs in these individuals' speech are not distributed normally, particularly in the short PI ranges. It has also been…

  12. Optimal Colonoscopy Surveillance Interval after Polypectomy

    PubMed Central

    Kim, Tae Oh

    2016-01-01

    The detection and removal of adenomatous polyps and postpolypectomy surveillance are considered important for the control of colorectal cancer (CRC). Surveillance using colonoscopy is an effective tool for preventing CRC after colorectal polypectomy, especially if compliance is good. In current practice, the intervals between colonoscopies after polypectomy are variable. Different recommendations for recognizing at risk groups and defining surveillance intervals after an initial finding of colorectal adenomas have been published. However, high-grade dysplasia and the number and size of adenomas are known major cancer predictors. Based on this, a subgroup of patients that may benefit from intensive surveillance colonoscopy can be identified. PMID:27484812

  13. Feedback precision and postfeedback interval duration

    NASA Technical Reports Server (NTRS)

    Rogers, C. A., Jr.

    1974-01-01

    Precision of feedback gain was manipulated in a simple positioning task. An optimum was found; an increase in precision past that optimum produced deleterious effects upon rate of acquisition. In a second study, increasing postfeedback interval removed that optimum. The feedback precision effects were then replicated in a timing task. The combined results of the 3 studies were interpreted as supportive of an information-processing approach to the study of postfeedback interval events for simple motor skills. The findings additionally supported specific predictions by Bilodeau and deductions from Adams' 1971 theory of motor learning.

  14. The effect of inter-set rest intervals on resistance exercise-induced muscle hypertrophy.

    PubMed

    Henselmans, Menno; Schoenfeld, Brad J

    2014-12-01

    Due to a scarcity of longitudinal trials directly measuring changes in muscle girth, previous recommendations for inter-set rest intervals in resistance training programs designed to stimulate muscular hypertrophy were primarily based on the post-exercise endocrinological response and other mechanisms theoretically related to muscle growth. New research regarding the effects of inter-set rest interval manipulation on resistance training-induced muscular hypertrophy is reviewed here to evaluate current practices and provide directions for future research. Of the studies measuring long-term muscle hypertrophy in groups employing different rest intervals, none have found superior muscle growth in the shorter compared with the longer rest interval group and one study has found the opposite. Rest intervals less than 1 minute can result in acute increases in serum growth hormone levels and these rest intervals also decrease the serum testosterone to cortisol ratio. Long-term adaptations may abate the post-exercise endocrinological response and the relationship between the transient change in hormonal production and chronic muscular hypertrophy is highly contentious and appears to be weak. The relationship between the rest interval-mediated effect on immune system response, muscle damage, metabolic stress, or energy production capacity and muscle hypertrophy is still ambiguous and largely theoretical. In conclusion, the literature does not support the hypothesis that training for muscle hypertrophy requires shorter rest intervals than training for strength development or that predetermined rest intervals are preferable to auto-regulated rest periods in this regard.

  15. A unified approach to the Darwin approximation

    SciTech Connect

    Krause, Todd B.; Apte, A.; Morrison, P. J.

    2007-10-15

    There are two basic approaches to the Darwin approximation. The first involves solving the Maxwell equations in Coulomb gauge and then approximating the vector potential to remove retardation effects. The second approach approximates the Coulomb gauge equations themselves, then solves these exactly for the vector potential. There is no a priori reason that these should result in the same approximation. Here, the equivalence of these two approaches is investigated and a unified framework is provided in which to view the Darwin approximation. Darwin's original treatment is variational in nature, but subsequent applications of his ideas in the context of Vlasov's theory are not. We present here action principles for the Darwin approximation in the Vlasov context, and this serves as a consistency check on the use of the approximation in this setting.

  16. An Assessment of Interval Data and Their Potential Application to Residential Electricity End-Use Modeling

    EIA Publications

    2015-01-01

    The Energy Information Administration (EIA) is investigating the potential benefits of incorporating interval electricity data into its residential energy end use models. This includes interval smart meter and submeter data from utility assets and systems. It is expected that these data will play a significant role in informing residential energy efficiency policies in the future. Therefore, a long-term strategy for improving the RECS end-use models will not be complete without an investigation of the current state of affairs of submeter data, including their potential for use in the context of residential building energy modeling.

  17. Collectivity of the pygmy dipole resonance within schematic Tamm-Dancoff approximation and random-phase approximation models

    NASA Astrophysics Data System (ADS)

    Baran, V.; Palade, D. I.; Colonna, M.; Di Toro, M.; Croitoru, A.; Nicolin, A. I.

    2015-05-01

    Within schematic models based on the Tamm-Dancoff approximation and the random-phase approximation with separable interactions, we investigate the physical conditions that may determine the emergence of the pygmy dipole resonance in the E 1 response of atomic nuclei. By introducing a generalization of the Brown-Bolsterli schematic model with a density-dependent particle-hole residual interaction, we find that an additional mode will be affected by the interaction, whose energy centroid is close to the distance between two major shells and therefore well below the giant dipole resonance (GDR). This state, together with the GDR, exhausts all the transition strength in the Tamm-Dancoff approximation and all the energy-weighted sum rule in the random-phase approximation. Thus, within our scheme, this mode, which could be associated with the pygmy dipole resonance, is of collective nature. By relating the coupling constants appearing in the separable interaction to the symmetry energy value at and below saturation density we explore the role of density dependence of the symmetry energy on the low-energy dipole response.
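
    The basic mechanism that the paper generalizes is reproduced below in a few lines of Python: diagonalizing the schematic Brown-Bolsterli Hamiltonian with a separable force pushes one collective state upward, and that state carries essentially all of the dipole strength. The energies and amplitudes are hypothetical round numbers, and the density-dependent extension introduced in the paper is not reproduced here.

```python
import numpy as np

# Schematic Brown-Bolsterli TDA: H = diag(eps) + lam * d d^T (separable p-h force).
eps = np.linspace(11.0, 13.0, 8)   # unperturbed 1p-1h energies near one shell gap (MeV)
d = np.ones_like(eps)              # equal dipole matrix elements for simplicity
lam = 1.0                          # repulsive coupling

E, X = np.linalg.eigh(np.diag(eps) + lam * np.outer(d, d))
strength = (X.T @ d) ** 2          # dipole strength carried by each eigenstate

for Ei, Si in zip(E, strength / strength.sum()):
    print(f"E = {Ei:6.2f} MeV   strength fraction = {Si:.3f}")
# The highest eigenstate (the schematic GDR) collects nearly all of the strength.
```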

  18. Best approximation of Gaussian neural networks with nodes uniformly spaced.

    PubMed

    Mulero-Martinez, J I

    2008-02-01

    This paper is aimed at exposing the reader to certain aspects in the design of the best approximants with Gaussian radial basis functions (RBFs). The class of functions to which this approach applies consists of those compactly supported in frequency. The approximation properties of uniqueness and existence are restricted to this class. Functions which are smooth enough can be expanded in Gaussian series converging uniformly to the objective function. The uniqueness of these series is demonstrated in the context of an orthonormal basis in a Hilbert space. Furthermore, the best approximation to a given band-limited function from a truncated Gaussian series is analyzed by an energy-based argument. This analysis not only gives a theoretical proof of the existence of best approximations but also addresses the problems of architectural selection. Specifically, guidance for selecting the variance and the oversampling parameters is provided for practitioners. PMID:18269959
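
    As a concrete illustration of a truncated Gaussian series on uniformly spaced nodes, the sketch below fits a band-limited-like target by least squares. The target function, node spacing, and variance are illustrative choices and do not reproduce the parameter guidance derived in the paper.

```python
import numpy as np

# Least-squares approximation by Gaussian RBFs on uniformly spaced centers.
def gaussian_design(x, centers, sigma):
    return np.exp(-(x[:, None] - centers[None, :]) ** 2 / (2 * sigma ** 2))

x = np.linspace(-1, 1, 400)
f = np.sinc(4 * x)                       # band-limited-like target (assumption)
centers = np.linspace(-1, 1, 25)         # uniform node spacing (oversampling choice)
sigma = 2.0 * (centers[1] - centers[0])  # width tied to the node spacing

G = gaussian_design(x, centers, sigma)
w, *_ = np.linalg.lstsq(G, f, rcond=None)
print("max abs error:", np.max(np.abs(G @ w - f)))
```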

  19. Non-ideal boson system in the Gaussian approximation

    SciTech Connect

    Tommasini, P.R.; de Toledo Piza, A.F.

    1997-01-01

    We investigate ground-state and thermal properties of a system of non-relativistic bosons interacting through repulsive, two-body interactions in a self-consistent Gaussian mean-field approximation which consists in writing the variationally determined density operator as the most general Gaussian functional of the quantized field operators. Finite temperature results are obtained in a grand canonical framework. Contact is made with the results of Lee, Yang, and Huang in terms of particular truncations of the Gaussian approximation. The full Gaussian approximation supports a free phase or a thermodynamically unstable phase when contact forces and a standard renormalization scheme are used. When applied to a Hamiltonian with zero range forces interpreted as an effective theory with a high momentum cutoff, the full Gaussian approximation generates a quasi-particle spectrum having an energy gap, in conflict with perturbation theory results. © 1997 Academic Press, Inc.

  20. Generalized eikonal approximation for strong-field ionization

    NASA Astrophysics Data System (ADS)

    Cajiao Vélez, F.; Krajewska, K.; Kamiński, J. Z.

    2015-05-01

    We develop the eikonal perturbation theory to describe the strong-field ionization by finite laser pulses. This approach in the first order with respect to the binding potential (the so-called generalized eikonal approximation) avoids a singularity at the potential center. Thus, in contrast to the ordinary eikonal approximation, it allows one to treat rescattering phenomena in terms of quantum trajectories. We demonstrate how the first Born approximation and its domain of validity follow from eikonal perturbation theory. Using this approach, we study the coherent interference patterns in photoelectron energy spectra and their modifications induced by the interaction of photoelectrons with the atomic potential. Along with these first results, we discuss the prospects of using the generalized eikonal approximation to study strong-field ionization from multicentered atomic systems and to study other strong-field phenomena.

  1. Approximating the Helium Wavefunction in Positronium-Helium Scattering

    NASA Technical Reports Server (NTRS)

    DiRienzi, Joseph; Drachman, Richard J.

    2003-01-01

    In the Kohn variational treatment of the positronium-hydrogen scattering problem the scattering wave function is approximated by an expansion in some appropriate basis set, but the target and projectile wave functions are known exactly. In the positronium-helium case, however, a difficulty immediately arises in that the wave function of the helium target atom is not known exactly, and there are several ways to deal with the associated eigenvalue in formulating the variational scattering equations to be solved. In this work we will use the Kohn variational principle in the static exchange approximation to determine the zero-energy scattering length for the Ps-He system, using a suite of approximate target functions. The results we obtain will be compared with each other and with corresponding values found by other approximation techniques.

  2. An efficient hybrid reliability analysis method with random and interval variables

    NASA Astrophysics Data System (ADS)

    Xie, Shaojun; Pan, Baisong; Du, Xiaoping

    2016-09-01

    Random and interval variables often coexist. Interval variables make reliability analysis much more computationally intensive. This work develops a new hybrid reliability analysis method so that the probability analysis (PA) loop and interval analysis (IA) loop are decomposed into two separate loops. An efficient PA algorithm is employed, and a new efficient IA method is developed. The new IA method consists of two stages. The first stage is for monotonic limit-state functions. If the limit-state function is not monotonic, the second stage is triggered. In the second stage, the limit-state function is sequentially approximated with a second order form, and the gradient projection method is applied to solve the extreme responses of the limit-state function with respect to the interval variables. The efficiency and accuracy of the proposed method are demonstrated by three examples.
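
    The two-loop decomposition can be sketched compactly: a Monte Carlo probability-analysis loop wraps an inner interval-analysis loop that searches for the worst-case response over the interval variables. In the sketch below, a generic bounded minimizer from scipy stands in for the paper's sequential second-order approximation and gradient projection, and the limit-state function is made up.

```python
import numpy as np
from scipy.optimize import minimize

# Worst-case failure probability  Pf = Pr[ min_Y g(X, Y) < 0 ]  with random X
# and interval variables Y in [yl, yu]. All functions and bounds are hypothetical.
rng = np.random.default_rng(0)
yl, yu = np.array([-1.0, -1.0]), np.array([1.0, 1.0])

def g(x, y):                             # made-up limit-state function
    return 3.0 + x[0] - x[1] + 0.5 * y[0] - y[1] ** 2

def worst_case_g(x):                     # inner interval-analysis (IA) loop
    res = minimize(lambda y: g(x, y), x0=(yl + yu) / 2, bounds=list(zip(yl, yu)))
    return res.fun

X = rng.normal(size=(2000, 2))           # outer probability-analysis (PA) loop
pf = np.mean([worst_case_g(x) < 0.0 for x in X])
print("worst-case failure probability ~", pf)
```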

  3. Effects of sampling interval on spatial patterns and statistics of watershed nitrogen concentration

    USGS Publications Warehouse

    Wu, S.-S.D.; Usery, E.L.; Finn, M.P.; Bosch, D.D.

    2009-01-01

    This study investigates how the spatial patterns and statistics of a 30 m resolution, model-simulated, watershed nitrogen concentration surface change for sampling intervals from 30 m to 600 m in 30 m increments for the Little River Watershed (Georgia, USA). The results indicate that the mean, standard deviation, and variogram sills do not have consistent trends with increasing sampling intervals, whereas the variogram ranges remain constant. A sampling interval smaller than or equal to 90 m is necessary to build a representative variogram. The interpolation accuracy, clustering level, and total hot spot areas show decreasing trends approximating a logarithmic function. The trends correspond to the nitrogen variogram and start to level at a sampling interval of 360 m, which is therefore regarded as a critical spatial scale of the Little River Watershed. Copyright © 2009 by Bellwether Publishing, Ltd. All rights reserved.
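
    The resampling experiment behind these results is easy to mimic on synthetic data. The sketch below builds a surrogate one-dimensional "concentration" transect on a 30 m grid (purely illustrative, not the watershed model output) and recomputes an empirical variogram at coarser sampling intervals; the short-range structure degrades as the interval grows.

```python
import numpy as np

rng = np.random.default_rng(1)

def empirical_variogram(z, step, max_lag):
    lags = np.arange(1, max_lag + 1)
    gamma = [0.5 * np.mean((z[h:] - z[:-h]) ** 2) for h in lags]
    return lags * step, np.array(gamma)

# surrogate field on a 30 m grid: smooth signal plus noise (illustrative only)
x = np.arange(0, 30_000, 30)
z = np.sin(x / 1500.0) + 0.2 * rng.normal(size=x.size)

for k in (1, 3, 12):                     # 30 m, 90 m, 360 m sampling intervals
    lags, gam = empirical_variogram(z[::k], 30 * k, max_lag=10)
    print(f"{30 * k:4d} m sampling: gamma at first 3 lags =", np.round(gam[:3], 3))
```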

  4. Collisionless magnetic reconnection under anisotropic MHD approximation

    NASA Astrophysics Data System (ADS)

    Hirabayashi, Kota; Hoshino, Masahiro

    We study the formation of slow-mode shocks in collisionless magnetic reconnection by using one- and two-dimensional collisionless magneto-hydro-dynamic (MHD) simulations based on the double adiabatic approximation, which is an important step to bridge the gap between the Petschek-type MHD reconnection model accompanied by a pair of slow shocks and the observational evidence of the rarity of in-situ slow-shock observations. According to our results, a pair of slow shocks does form in the reconnection layer. The resultant shock waves, however, are quite weak compared with those in an isotropic MHD from the point of view of the plasma compression and the amount of the magnetic energy released across the shock. Once the slow shock forms, the downstream plasma is heated in a highly anisotropic manner and a firehose-sense (P∥ > P⊥) pressure anisotropy arises. The maximum anisotropy is limited by the marginal firehose criterion, 1 − (P∥ − P⊥)/B² = 0. In spite of the weakness of the shocks, the resultant reconnection rate is kept at the same level compared with that in the corresponding ordinary MHD simulations. It is also revealed that the sequential order of propagation of the slow shock and the rotational discontinuity, which appears when the guide field component exists, changes depending on the magnitude of the guide field. In particular, when no guide field exists, the rotational discontinuity degenerates with the contact discontinuity remaining at the position of the initial current sheet, whereas in the isotropic MHD it degenerates with the slow shock. Our result implies that the slow shock does not necessarily play an important role in the energy conversion in the reconnection system and is consistent with the satellite observation in the Earth's magnetosphere.

  5. Hydration thermodynamics beyond the linear response approximation.

    PubMed

    Raineri, Fernando O

    2016-10-19

    The solvation energetics associated with the transformation of a solute molecule at infinite dilution in water from an initial state A to a final state B is reconsidered. The two solute states have different potential energies of interaction, [Formula: see text] and [Formula: see text], with the solvent environment. Throughout the A [Formula: see text] B transformation of the solute, the solvation system is described by a Hamiltonian [Formula: see text] that changes linearly with the coupling parameter ξ. By focusing on the characterization of the probability density [Formula: see text] that the dimensionless perturbational solute-solvent interaction energy [Formula: see text] has numerical value y when the coupling parameter is ξ, we derive a hierarchy of differential equation relations between the ξ-dependent cumulant functions of various orders in the expansion of the appropriate cumulant generating function. On the basis of this theoretical framework we then introduce an inherently nonlinear solvation model for which we are able to find analytical results for both [Formula: see text] and for the solvation thermodynamic functions. The solvation model is based on the premise that there is an upper or a lower bound (depending on the nature of the interactions considered) to the amplitude of the fluctuations of Y in the solution system at equilibrium. The results reveal essential differences in behavior for the model when compared with the linear response approximation to solvation, particularly with regards to the probability density [Formula: see text]. The analytical expressions for the solvation properties show, however, that the linear response behavior is recovered from the new model when the room for the thermal fluctuations in Y is not restricted by the existence of a nearby bound. We compare the predictions of the model with the results from molecular dynamics computer simulations for aqueous solvation, in which either (1) the solute…
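
    The flavor of the departure from linear response can be seen numerically. For synthetic interaction-energy samples Y (in kT units) that are bounded below at zero, the exact free-energy perturbation average and the two-cumulant (linear-response) estimate disagree; the gamma-distributed samples are an arbitrary stand-in for simulation data.

```python
import numpy as np

rng = np.random.default_rng(4)
y = rng.gamma(shape=2.0, scale=0.7, size=200_000)   # bounded below, skewed

exact = -np.log(np.mean(np.exp(-y)))    # dF = -ln< exp(-Y) >  (kT units)
linear = y.mean() - 0.5 * y.var()       # first two cumulants only
print(f"exact dF = {exact:.3f} kT   linear-response dF = {linear:.3f} kT")
```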

  7. Approximate Green's function methods for HZE transport in multilayered materials

    NASA Technical Reports Server (NTRS)

    Wilson, John W.; Badavi, Francis F.; Shinn, Judy L.; Costen, Robert C.

    1993-01-01

    A nonperturbative analytic solution of the high charge and energy (HZE) Green's function is used to implement a computer code for laboratory ion beam transport in multilayered materials. The code is established to operate on the Langley nuclear fragmentation model used in engineering applications. Computational procedures are established to generate linear energy transfer (LET) distributions for a specified ion beam and target for comparison with experimental measurements. The code was found to be highly efficient and compared well with the perturbation approximation.

  8. Efficient algorithm for approximating one-dimensional ground states

    SciTech Connect

    Aharonov, Dorit; Arad, Itai; Irani, Sandy

    2010-07-15

    The density-matrix renormalization-group method is very effective at finding ground states of one-dimensional (1D) quantum systems in practice, but it is a heuristic method, and there is no known proof for when it works. In this article we describe an efficient classical algorithm which provably finds a good approximation of the ground state of 1D systems under well-defined conditions. More precisely, our algorithm finds a matrix product state of bond dimension D whose energy approximates the minimal energy such states can achieve. The running time is exponential in D, so the algorithm remains tractable for D logarithmic in the size of the chain. The result also implies trivially that the ground state of any local commuting Hamiltonian in 1D can be approximated efficiently; we improve this to an exact algorithm.

  9. An Empirical Method for Establishing Positional Confidence Intervals Tailored for Composite Interval Mapping of QTL

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Improved genetic resolution and availability of sequenced genomes have made positional cloning of moderate-effect QTL (quantitative trait loci) realistic in several systems, emphasizing the need for precise and accurate derivation of positional confidence intervals (CIs). Support interval (SI) meth...

  10. Happiness Scale Interval Study. Methodological Considerations

    ERIC Educational Resources Information Center

    Kalmijn, W. M.; Arends, L. R.; Veenhoven, R.

    2011-01-01

    The Happiness Scale Interval Study deals with survey questions on happiness, using verbal response options, such as "very happy" and "pretty happy". The aim is to estimate what degrees of happiness are denoted by such terms in different questions and languages. These degrees are expressed in numerical values on a continuous [0,10] scale, which are…

  11. Duration perception in crossmodally-defined intervals.

    PubMed

    Mayer, Katja M; Di Luca, Massimiliano; Ernst, Marc O

    2014-03-01

    How humans perform duration judgments with multisensory stimuli is an ongoing debate. Here, we investigated how sub-second duration judgments are achieved by asking participants to compare the duration of a continuous sound to the duration of an empty interval in which onset and offset were marked by signals of different modalities using all combinations of visual, auditory and tactile stimuli. The pattern of perceived durations across five stimulus durations (ranging from 100 ms to 900 ms) follows the Vierordt Law. Furthermore, intervals with a sound as onset (audio-visual, audio-tactile) are perceived longer than intervals with a sound as offset. No modality ordering effect is found for visual-tactile intervals. To infer whether a single modality-independent or multiple modality-dependent time-keeping mechanisms exist we tested whether perceived duration follows a summative or a multiplicative distortion pattern by fitting a model to all modality combinations and durations. The results confirm that perceived duration depends on sensory latency (summative distortion). Instead, we did not find evidence for multiplicative distortions. The results of the model and the behavioural data support the concept of a single time-keeping mechanism that allows for judgments of durations marked by multisensory stimuli. PMID:23953664

  12. Computation of confidence intervals for Poisson processes

    NASA Astrophysics Data System (ADS)

    Aguilar-Saavedra, J. A.

    2000-07-01

    We present an algorithm which allows a fast numerical computation of Feldman-Cousins confidence intervals for Poisson processes, even when the number of background events is relatively large. This algorithm incorporates an appropriate treatment of the singularities that arise as a consequence of the discreteness of the variable.
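
    A direct, unoptimized Neyman construction with the Feldman-Cousins likelihood-ratio ordering shows what such an algorithm has to compute; the grid resolution and the n = 3, b = 3.0 example below are arbitrary, and the resulting endpoints can be checked against the published Feldman-Cousins tables.

```python
import numpy as np
from scipy.stats import poisson

def fc_interval(n_obs, b, cl=0.90, mu_max=15.0, n_max=60):
    """90% CL Feldman-Cousins interval for a Poisson signal mu over background b."""
    lo = hi = None
    for mu in np.arange(0.0, mu_max, 0.005):
        n = np.arange(n_max)
        p = poisson.pmf(n, mu + b)
        mu_best = np.maximum(n - b, 0.0)             # best-fit signal for each n
        r = p / poisson.pmf(n, mu_best + b)          # likelihood-ratio ordering
        order = np.argsort(-r)
        accepted = order[np.cumsum(p[order]) <= cl]  # fill until coverage reached
        accepted = np.append(accepted, order[len(accepted)])
        if n_obs in accepted:
            lo = mu if lo is None else lo
            hi = mu
    return lo, hi

print(fc_interval(n_obs=3, b=3.0))   # compare with the published tables (~ (0, 4.4))
```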

  13. Confidence Trick: The Interpretation of Confidence Intervals

    ERIC Educational Resources Information Center

    Foster, Colin

    2014-01-01

    The frequent misinterpretation of the nature of confidence intervals by students has been well documented. This article examines the problem as an aspect of the learning of mathematical definitions and considers the tension between parroting mathematically rigorous, but essentially uninternalized, statements on the one hand and expressing…

  14. Interval scanning photomicrography of microbial cell populations.

    NASA Technical Reports Server (NTRS)

    Casida, L. E., Jr.

    1972-01-01

    A single reproducible area of the preparation in a fixed focal plane is photographically scanned at intervals during incubation. The procedure can be used for evaluating the aerobic or anaerobic growth of many microbial cells simultaneously within a population. In addition, the microscope is not restricted to the viewing of any one microculture preparation, since the slide cultures are incubated separately from the microscope.

  15. Equidistant Intervals in Perspective Photographs and Paintings

    PubMed Central

    2016-01-01

    Human vision is extremely sensitive to equidistance of spatial intervals in the frontal plane. Thresholds for spatial equidistance have been extensively measured in bisecting tasks. Despite the vast number of studies, the informational basis for equidistance perception is unknown. There are three possible sources of information for spatial equidistance in pictures, namely, distances in the picture plane, in physical space, and visual space. For each source, equidistant intervals were computed for perspective photographs of walls and canals. Intervals appear equidistant if equidistance is defined in visual space. Equidistance was further investigated in paintings of perspective scenes. In appraisals of the perspective skill of painters, emphasis has been on accurate use of vanishing points. The current study investigated the skill of painters to depict equidistant intervals. Depicted rows of equidistant columns, tiles, tapestries, or trees were analyzed in 30 paintings and engravings. Computational analysis shows that from the Middle Ages until now, artists either represented equidistance in physical space or in a visual space of very limited depth. Among the painters and engravers who depict equidistance in a highly nonveridical visual space are renowned experts of linear perspective. PMID:27698983

  17. Precise Interval Timer for Software Defined Radio

    NASA Technical Reports Server (NTRS)

    Pozhidaev, Aleksey (Inventor)

    2014-01-01

    A precise digital fractional interval timer for software defined radios which vary their waveform on a packet-by-packet basis. The timer allows for variable length in the preamble of the RF packet and allows adjustment of the boundaries of the TDMA (Time Division Multiple Access) slots of the receiver of an SDR based on the reception of the RF packet of interest.

  18. Coefficient Alpha Bootstrap Confidence Interval under Nonnormality

    ERIC Educational Resources Information Center

    Padilla, Miguel A.; Divers, Jasmin; Newton, Matthew

    2012-01-01

    Three different bootstrap methods for estimating confidence intervals (CIs) for coefficient alpha were investigated. In addition, the bootstrap methods were compared with the most promising coefficient alpha CI estimation methods reported in the literature. The CI methods were assessed through a Monte Carlo simulation utilizing conditions…

  19. Coefficient Omega Bootstrap Confidence Intervals: Nonnormal Distributions

    ERIC Educational Resources Information Center

    Padilla, Miguel A.; Divers, Jasmin

    2013-01-01

    The performance of the normal theory bootstrap (NTB), the percentile bootstrap (PB), and the bias-corrected and accelerated (BCa) bootstrap confidence intervals (CIs) for coefficient omega was assessed through a Monte Carlo simulation under conditions not previously investigated. Of particular interest were nonnormal Likert-type and binary items…

  20. Toward Using Confidence Intervals to Compare Correlations

    ERIC Educational Resources Information Center

    Zou, Guang Yong

    2007-01-01

    Confidence intervals are widely accepted as a preferred way to present study results. They encompass significance tests and provide an estimate of the magnitude of the effect. However, comparisons of correlations still rely heavily on significance testing. The persistence of this practice is caused primarily by the lack of simple yet accurate…

  1. MEETING DATA QUALITY OBJECTIVES WITH INTERVAL INFORMATION

    EPA Science Inventory

    Immunoassay test kits are promising technologies for measuring analytes under field conditions. Frequently, these field-test kits report the analyte concentrations as falling in an interval between minimum and maximum values. Many project managers use field-test kits only for scr...

  2. Dynamical Vertex Approximation for the Hubbard Model

    NASA Astrophysics Data System (ADS)

    Toschi, Alessandro

    A full understanding of correlated electron systems in the physically relevant situations of three and two dimensions represents a challenge for the contemporary condensed matter theory. However, in recent years considerable progress has been achieved by means of increasingly powerful quantum many-body algorithms, applied to the basic model for correlated electrons, the Hubbard Hamiltonian. Here, I will review the physics emerging from studies performed with the dynamical vertex approximation, which includes diagrammatic corrections to the local description of the dynamical mean field theory (DMFT). In particular, I will first discuss the phase diagram in three dimensions with a special focus on the commensurate and incommensurate magnetic phases, their (quantum) critical properties, and the impact of fluctuations on electronic lifetimes and spectral functions. In two dimensions, the effects of non-local fluctuations beyond DMFT grow enormously, determining the appearance of a low-temperature insulating behavior for all values of the interaction in the unfrustrated model: Here the prototypical features of the Mott-Hubbard metal-insulator transition, as well as the existence of magnetically ordered phases, are completely overwhelmed by antiferromagnetic fluctuations of exponentially large extension, in accordance with the Mermin-Wagner theorem. Finally, by a fluctuation diagnostics analysis of cluster DMFT self-energies, the same magnetic fluctuations are identified as responsible for the pseudogap regime in the hole-doped frustrated case, with important implications for the theoretical modeling of cuprate physics.

  3. Magnetic reconnection under anisotropic magnetohydrodynamic approximation

    SciTech Connect

    Hirabayashi, K.; Hoshino, M.

    2013-11-15

    We study the formation of slow-mode shocks in collisionless magnetic reconnection by using one- and two-dimensional collisionless MHD codes based on the double adiabatic approximation and the Landau closure model. We bridge the gap between the Petschek-type MHD reconnection model accompanied by a pair of slow shocks and the observational evidence of the rarity of in-situ slow-shock observations. Our results showed that once magnetic reconnection takes place, a firehose-sense (p∥ > p⊥) pressure anisotropy arises in the downstream region, and the generated slow shocks are quite weak compared with those in an isotropic MHD. In spite of the weakness of the shocks, however, the resultant reconnection rate is 10%–30% higher than that in an isotropic case. This result implies that the slow shock does not necessarily play an important role in the energy conversion in the reconnection system and is consistent with the satellite observation in the Earth's magnetosphere.

  4. Approximate Approaches to the One-Dimensional Finite Potential Well

    ERIC Educational Resources Information Center

    Singh, Shilpi; Pathak, Praveen; Singh, Vijay A.

    2011-01-01

    The one-dimensional finite well is a textbook problem. We propose approximate approaches to obtain the energy levels of the well. The finite well is also encountered in semiconductor heterostructures where the carrier mass inside the well (m_i) is taken to be distinct from the mass outside (m_o). A relevant parameter is the mass…

  5. Analytic Approximations for the Extrapolation of Lattice Data

    SciTech Connect

    Masjuan, Pere

    2010-12-22

    We present analytic approximations of chiral SU(3) amplitudes for the extrapolation of lattice data to the physical masses and the determination of Next-to-Next-to-Leading-Order low-energy constants. Lattice data for the ratio F_K/F_π is used to test the method.

  6. Uniform approximation of partial sums of a Dirichlet series by shorter sums and {Phi}-widths

    SciTech Connect

    Bourgain, Jean; Kashin, Boris S

    2012-12-31

    It is shown that each Dirichlet polynomial P of degree N which is bounded in a certain natural Euclidean norm, admits a nontrivial uniform approximation on the corresponding interval on the real axis by a Dirichlet polynomial with spectrum containing significantly fewer than N elements. Moreover, this spectrum is independent of P. Bibliography: 19 titles.

  7. Inference by eye: reading the overlap of independent confidence intervals.

    PubMed

    Cumming, Geoff

    2009-01-30

    When 95 per cent confidence intervals (CIs) on independent means do not overlap, the two-tailed p-value is less than 0.05 and there is a statistically significant difference between the means. However, p for non-overlapping 95 per cent CIs is actually considerably smaller than 0.05: If the two CIs just touch, p is about 0.01, and the intervals can overlap by as much as about half the length of one CI arm before p becomes as large as 0.05. Keeping in mind this rule (that overlap of half the length of one arm corresponds approximately to statistical significance at p = 0.05) can be helpful for a quick appreciation of figures that display CIs, especially if precise p-values are not reported. The author investigated the robustness of this and similar rules, and found them sufficiently accurate when sample sizes are at least 10, and the two intervals do not differ in width by more than a factor of 2. The author reviewed previous discussions of CI overlap and extended the investigation to p-values other than 0.05 and 0.01. He also studied 95 per cent CIs on two proportions, and on two Pearson correlations, and found similar rules apply to overlap of these asymmetric CIs, for a very broad range of cases. Wider use of figures with 95 per cent CIs is desirable, and these rules may assist easy and appropriate understanding of such figures.
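
    The half-arm rule is easy to verify numerically for two independent means with equal-width 95% CIs (arm length 1.96 × SE, with the SE of the difference equal to √2 × SE):

```python
from math import sqrt
from scipy.stats import norm

def p_from_overlap(overlap_in_arms, se=1.0):
    arm = 1.96 * se
    diff = 2 * arm - overlap_in_arms * arm   # gap between the two means
    z = diff / (sqrt(2) * se)
    return 2 * norm.sf(z)                    # two-tailed p-value

print("just touching   :", round(p_from_overlap(0.0), 4))  # ~0.006, i.e. about 0.01
print("half-arm overlap:", round(p_from_overlap(0.5), 4))  # ~0.04,  i.e. about 0.05
```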

  8. Interval Throwing and Hitting Programs in Baseball: Biomechanics and Rehabilitation.

    PubMed

    Chang, Edward S; Bishop, Meghan E; Baker, Dylan; West, Robin V

    2016-01-01

    Baseball injuries from throwing and hitting generally occur as a consequence of the repetitive and high-energy motions inherent to the sport. Biomechanical studies have contributed to understanding the pathomechanics leading to injury and to the development of rehabilitation programs. Interval-based throwing and hitting programs are designed to return an athlete to competition through a gradual progression of sport-specific exercises. Proper warm-up and strict adherence to the program allows the athlete to return as quickly and safely as possible. PMID:26991569

  10. Concurrent random-interval schedules and the matching law.

    PubMed

    Rodewald, H K

    1978-11-01

    In Experiment I, a group of eight pigeons performed on concurrent random-interval schedules constructed by holding probability equal and varying cycle time to produce ratios of reinforcer densities of 1:1, 3:1, and 5:1 for key pecking. Schedules for a second group of seven were constructed with equal cycle times and unequal probabilities. Both groups deviated from simple matching, but the two forms of the schedules appeared to produce no consistent patterns of deviation. The data were found to be consistent with those obtained in concurrent variable-interval situations. The parameters of the matching equation in the form of Y = kX^a were estimated; the value of k was unity and a was 0.84. In Experiment II, six pigeons were exposed to two conc RI RI schedules in which one component increasingly approximated an FI schedule. The value of k was not 1.0. Concurrent RI RI schedules were shown to represent a continuum from conc FI VI to conc VI VI schedules. The use of the exponential equation in testing "matching laws" suggests that a < 1 will continue to be observed, and this will set limits on the form of new laws and the assumed or rational values of the component variables in these laws.
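
    Fitting the matching relation Y = kX^a reduces to a one-line regression on log-log coordinates; the response-ratio data below are made up for illustration.

```python
import numpy as np

X = np.array([1.0, 3.0, 5.0])   # ratios of reinforcer densities (hypothetical)
Y = np.array([1.0, 2.4, 3.9])   # hypothetical response ratios

slope, intercept = np.polyfit(np.log(X), np.log(Y), 1)
k, a = np.exp(intercept), slope
print(f"k = {k:.2f}, a = {a:.2f}   (a < 1 indicates undermatching)")
```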

  11. A prospective evaluation of non-interval- and interval-based exercise training progressions in rodents.

    PubMed

    Jendzjowsky, Nicholas G; DeLorey, Darren S

    2011-10-01

    Non-interval and interval training progressions were used to determine (i) the mean rate at which treadmill speed could be incremented daily using a non-interval training progression to train rats to run continuously at different intensities and (ii) the number of training days required for rats to run continuously at different exercise intensities with non-interval- and interval-based training progressions to establish methods of progressive overload for rodent exercise training studies. Rats were randomly assigned to mild-intensity (n = 5, 20 m·min⁻¹, 5% grade), moderate-intensity (n = 5, 30 m·min⁻¹, 5% grade), and heavy-intensity non-interval groups (n = 5, 40 m·min⁻¹, 5% grade) or a heavy-intensity interval (n = 5, 40 m·min⁻¹, 5% grade) group and ran 5 days·week⁻¹ for 6 weeks. Non-interval training involved a daily increase of treadmill speed, whereas interval training involved a daily increase of interval time, until the animal could run continuously at a prescribed intensity. In mild-, moderate-, and heavy-intensity non-interval-trained rats, treadmill speed was increased by 0.6 ± 0.7 m·min⁻¹·day⁻¹, 0.6 ± 0.2 m·min⁻¹·day⁻¹, and 0.8 ± 0.1 m·min⁻¹·day⁻¹, respectively. Target training intensity and duration were obtained following 0.4 ± 0.5 days, 17 ± 3 days, and 23 ± 3 training days (p < 0.05) in mild-, moderate-, and heavy-intensity groups, respectively. In contrast, interval-trained rodents required 11 ± 1 training days. These data demonstrate that rodents will tolerate an increase in treadmill speed of ∼0.7 ± 0.1 m·min⁻¹·day⁻¹ and that this progression enables rats to run continuously at moderate and heavy intensities with 3-4 weeks of progressive overload. Interval training significantly reduces the number of training days required to attain a target intensity.

  12. Approximate Analysis of Semiconductor Laser Arrays

    NASA Technical Reports Server (NTRS)

    Marshall, William K.; Katz, Joseph

    1987-01-01

    Simplified equation yields useful information on gains and output patterns. Theoretical method based on approximate waveguide equation enables prediction of lateral modes of gain-guided planar array of parallel semiconductor lasers. Equation for entire array solved directly using piecewise approximation of index of refraction by simple functions without the customary approximation based on coupled waveguide modes of individual lasers. Improved results yield better understanding of laser-array modes and help in development of well-behaved high-power semiconductor laser arrays.

  13. Bent approximations to synchrotron radiation optics

    SciTech Connect

    Heald, S.

    1981-01-01

    Ideal optical elements can be approximated by bending flats or cylinders. This paper considers the applications of these approximate optics to synchrotron radiation. Analytic and raytracing studies are used to compare their optical performance with the corresponding ideal elements. It is found that for many applications the performance is adequate, with the additional advantages of lower cost and greater flexibility. Particular emphasis is placed on obtaining the practical limitations on the use of the approximate elements in typical beamline configurations. Also considered are the possibilities for approximating very long length mirrors using segmented mirrors.

  14. Magnetoelectric charge states of matter-energy. A second approximation. Part VII. Diffuse relativistic superconductive plasma. Measurable and non-measurable physical manifestations. Kirlian photography. Laser phenomena. Cosmic effects on chemical and biological systems.

    PubMed

    Cope, F W

    1980-01-01

    Experimental evidence suggests that all objects, and especially living objects, contain and are surrounded by diffuse clouds of matter-energy probably best considered as a superconductive plasma state and best analyzed by application of an extended form of the Einstein special theory of relativity. Such a plasma state would have physical properties that for relativistic reasons the experimentalists could not expect to measure, and also those he could expect to measure. Not possible to measure should be (a) absorption or reflection of light, (b) electric charge mobilities or Hall effects, and (c) any particulate structure within the plasma. Possible to measure should be (a) channel formation ("arcing") in high applied electric fields (e.g., as in Kirlian photography), (b) effects of the plasma on temperatures and potentials of electrons in solid objects moving through that plasma, (c) facilitation of coupling between electromagnetic oscillations in sets of adjacent molecules, resulting in facilitation of laser and maser emissions of electromagnetic waves and in facilitation of geometrical alignment of adjacent molecules, and (d) magnetic and electric flux trapping with resultant magnetic and/or electric dipole moments. Experimental evidence suggests that diffuse superconductive plasma may reach the earth from the sun, resulting in diurnal and seasonal fluctuations in rates of antigen-antibody reactions as well as in rates of precipitation and crystallization of solids from solutions. PMID:7454856

  16. Wavelet approximation of correlated wave functions. II. Hyperbolic wavelets and adaptive approximation schemes

    NASA Astrophysics Data System (ADS)

    Luo, Hongjun; Kolb, Dietmar; Flad, Heinz-Jurgen; Hackbusch, Wolfgang; Koprucki, Thomas

    2002-08-01

    We have studied various aspects concerning the use of hyperbolic wavelets and adaptive approximation schemes for wavelet expansions of correlated wave functions. In order to analyze the consequences of reduced regularity of the wave function at the electron-electron cusp, we first considered a realistic exactly solvable many-particle model in one dimension. Convergence rates of wavelet expansions, with respect to L2 and H1 norms and the energy, were established for this model. We compare the performance of hyperbolic wavelets and their extensions through adaptive refinement in the cusp region, to a fully adaptive treatment based on the energy contribution of individual wavelets. Although hyperbolic wavelets show an inferior convergence behavior, they can be easily refined in the cusp region yielding an optimal convergence rate for the energy. Preliminary results for the helium atom are presented, which demonstrate the transferability of our observations to more realistic systems. We propose a contraction scheme for wavelets in the cusp region, which reduces the number of degrees of freedom and yields a favorable cost to benefit ratio for the evaluation of matrix elements.

  17. Thermal effects and sudden decay approximation in the curvaton scenario

    SciTech Connect

    Kitajima, Naoya; Takesako, Tomohiro; Yokoyama, Shuichiro; Langlois, David; Takahashi, Tomo

    2014-10-01

    We study the impact of a temperature-dependent curvaton decay rate on the primordial curvature perturbation generated in the curvaton scenario. Using the familiar sudden decay approximation, we obtain an analytical expression for the curvature perturbation after the decay of the curvaton. We then investigate numerically the evolution of the background and of the perturbations during the decay. We first show that the instantaneous transfer coefficient, related to the curvaton energy fraction at the decay, can be extended into a more general parameter, which depends on the net transfer of the curvaton energy into radiation energy or, equivalently, on the total entropy ratio after the complete curvaton decay. We then compute the curvature perturbation and compare this result with the sudden decay approximation prediction.

  18. Constraint-based Attribute and Interval Planning

    NASA Technical Reports Server (NTRS)

    Jonsson, Ari; Frank, Jeremy

    2013-01-01

    In this paper we describe Constraint-based Attribute and Interval Planning (CAIP), a paradigm for representing and reasoning about plans. The paradigm enables the description of planning domains with time, resources, concurrent activities, mutual exclusions among sets of activities, disjunctive preconditions and conditional effects. We provide a theoretical foundation for the paradigm, based on temporal intervals and attributes. We then show how the plans are naturally expressed by networks of constraints, and show that the process of planning maps directly to dynamic constraint reasoning. In addition, we define compatibilities, a compact mechanism for describing planning domains. We describe how this framework can incorporate the use of constraint reasoning technology to improve planning. Finally, we describe EUROPA, an implementation of the CAIP framework.

  19. Efficient computation of parameter confidence intervals

    NASA Technical Reports Server (NTRS)

    Murphy, Patrick C.

    1987-01-01

    An important step in system identification of aircraft is the estimation of stability and control derivatives from flight data along with an assessment of parameter accuracy. When the maximum likelihood estimation technique is used, parameter accuracy is commonly assessed by the Cramer-Rao lower bound. It is known, however, that in some cases the lower bound can be substantially different from the parameter variance. Under these circumstances the Cramer-Rao bounds may be misleading as an accuracy measure. This paper discusses the confidence interval estimation problem based on likelihood ratios, which offers a more general estimate of the error bounds. Four approaches are considered for computing confidence intervals of maximum likelihood parameter estimates. Each approach is applied to real flight data and compared.
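
    Once a log-likelihood is available, the likelihood-ratio interval is straightforward to compute: keep every parameter value whose log-likelihood lies within half the chi-square critical value of the maximum. The sketch below profiles a one-parameter exponential-rate likelihood over a grid; the data are made up, and a flight-data application would simply swap in its own log-likelihood function.

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(2)
t = rng.exponential(scale=2.0, size=50)          # synthetic observations

def loglik(rate):
    return t.size * np.log(rate) - rate * t.sum()

rates = np.linspace(0.05, 2.0, 2000)
ll = loglik(rates)
cutoff = ll.max() - 0.5 * chi2.ppf(0.95, df=1)   # likelihood-ratio threshold
inside = rates[ll >= cutoff]
print(f"MLE = {t.size / t.sum():.3f}   95% LR interval = "
      f"[{inside.min():.3f}, {inside.max():.3f}]")
```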

  20. Temporal control in fixed-interval schedules.

    PubMed

    Zeiler, M D; Powell, D G

    1994-01-01

    The peak procedure was used to study temporal control in pigeons exposed to seven fixed-interval schedules ranging from 7.5 to 480 s. The focus was on behavior in individual intervals. Quantitative properties of temporal control depended on whether the aspect of behavior considered was initial pause duration, the point of maximum acceleration in responding, the point of maximum deceleration, the point at which responding stopped, or several different statistical derivations of a point of maximum responding. Each aspect produced different conclusions about the nature of temporal control, and none conformed to what was known previously about the way ongoing responding was controlled by time under conditions of differential reinforcement. Existing theory does not explain why Weber's law so rarely fit the results or why each type of behavior seemed unique. These data fit with others suggesting that principles of temporal control may depend on the role played by the particular aspect of behavior in particular situations.

  1. The Rotator Interval of the Shoulder

    PubMed Central

    Frank, Rachel M.; Taylor, Dean; Verma, Nikhil N.; Romeo, Anthony A.; Mologne, Timothy S.; Provencher, Matthew T.

    2015-01-01

    Biomechanical studies have shown that repair or plication of rotator interval (RI) ligamentous and capsular structures decreases glenohumeral joint laxity in various directions. Clinical outcomes studies have reported successful outcomes after repair or plication of these structures in patients undergoing shoulder stabilization procedures. Recent studies describing arthroscopic techniques to address these structures have intensified the debate over the potential benefit of these procedures as well as highlighted the differences between open and arthroscopic RI procedures. The purposes of this study were to review the structures of the RI and their contribution to shoulder instability, to discuss the biomechanical and clinical effects of repair or plication of rotator interval structures, and to describe the various surgical techniques used for these procedures and outcomes. PMID:26779554

  2. One-way ANOVA based on interval information

    NASA Astrophysics Data System (ADS)

    Hesamian, Gholamreza

    2016-08-01

    This paper deals with extending the one-way analysis of variance (ANOVA) to the case where the observed data are represented by closed intervals rather than real numbers. In this approach, first a notion of interval random variable is introduced. In particular, a normal distribution with interval parameters is introduced to investigate hypotheses about the equality of interval means or to test the homogeneity of the interval-variances assumption. Moreover, the least significant difference (LSD) method for investigating multiple comparisons of interval means is developed for when the null hypothesis about the equality of means is rejected. Then, at a given interval significance level, an index is applied to compare the interval test statistic and the related interval critical value as a criterion to accept or reject the null interval hypothesis of interest. Finally, the decision-making method yields degrees of acceptance or rejection of the interval hypotheses. An applied example is used to show the performance of this method.

  3. Systolic Time Intervals and New Measurement Methods.

    PubMed

    Tavakolian, Kouhyar

    2016-06-01

    Systolic time intervals have been used to detect and quantify the directional changes of left ventricular function. New methods of recording these cardiac timings, which are less cumbersome, have been recently developed and this has created a renewed interest and novel applications for these cardiac timings. This manuscript reviews these new methods and addresses the potential for the application of these cardiac timings for the diagnosis and prognosis of different cardiac diseases.

  4. Psychophysical basis for consonant musical intervals

    NASA Astrophysics Data System (ADS)

    Resnick, L.

    1981-06-01

    A suggestion is made to explain the acceptance of certain musical intervals as consonant and others as dissonant. The proposed explanation involves the relation between the time required to perceive a definite pitch and the period of a complex tone. If the former time is greater than the latter, the tone is consonant; otherwise it is dissonant. A quantitative examination leads to agreement with empirical data.

  5. Quantifying chaotic dynamics from interspike intervals

    NASA Astrophysics Data System (ADS)

    Pavlov, A. N.; Pavlova, O. N.; Mohammad, Y. K.; Shihalov, G. M.

    2015-03-01

    We address the problem of characterization of chaotic dynamics at the input of a threshold device described by an integrate-and-fire (IF) or a threshold crossing (TC) model from the output sequences of interspike intervals (ISIs). We consider the conditions under which quite short sequences of spiking events provide correct identification of the dynamical regime characterized by the single positive Lyapunov exponent (LE). We discuss features of detecting the second LE for both types of the considered models of events generation.

  6. Revised Thomas-Fermi approximation for singular potentials

    NASA Astrophysics Data System (ADS)

    Dufty, James W.; Trickey, S. B.

    2016-08-01

    Approximations for the many-fermion free-energy density functional that include the Thomas-Fermi (TF) form for the noninteracting part lead to singular densities for singular external potentials (e.g., attractive Coulomb). This limitation of the TF approximation is addressed here by a formal map of the exact Euler equation for the density onto an equivalent TF form characterized by a modified Kohn-Sham potential. It is shown to be a "regularized" version of the Kohn-Sham potential, tempered by convolution with a finite-temperature response function. The resulting density is nonsingular, with the equilibrium properties obtained from the total free-energy functional evaluated at this density. This new representation is formally exact. Approximate expressions for the regularized potential are given to leading order in a nonlocality parameter, and the limiting behavior at high and low temperatures is described. The noninteracting part of the free energy in this approximation is the usual Thomas-Fermi functional. These results generalize and extend to finite temperatures the ground-state regularization by R. G. Parr and S. Ghosh [Proc. Natl. Acad. Sci. U.S.A. 83, 3577 (1986), 10.1073/pnas.83.11.3577] and by L. R. Pratt, G. G. Hoffman, and R. A. Harris [J. Chem. Phys. 88, 1818 (1988), 10.1063/1.454105] and formally systematize the finite-temperature regularization given by the latter authors.

  7. Evaluating the Accuracy of Hessian Approximations for Direct Dynamics Simulations.

    PubMed

    Zhuang, Yu; Siebert, Matthew R; Hase, William L; Kay, Kenneth G; Ceotto, Michele

    2013-01-01

    Direct dynamics simulations are a very useful and general approach for studying the atomistic properties of complex chemical systems, since an electronic structure theory representation of a system's potential energy surface is possible without the need for fitting an analytic potential energy function. In this paper, recently introduced compact finite difference (CFD) schemes for approximating the Hessian [J. Chem. Phys. 2010, 133, 074101] are tested by employing the monodromy matrix equations of motion. Several systems, including carbon dioxide and benzene, are simulated, using both analytic potential energy surfaces and on-the-fly direct dynamics. The results show, depending on the molecular system, that electronic structure theory Hessian direct dynamics can be accelerated up to 2 orders of magnitude. The CFD approximation is found to be robust enough to deal with chaotic motion, concomitant with floppy and stiff mode dynamics, Fermi resonances, and other kinds of molecular couplings. Finally, the CFD approximations allow the different CFD parameters to be tuned to attain the best possible accuracy for different molecular systems. Thus, a direct dynamics simulation requiring the Hessian at every integration step may be replaced with approximate Hessian updating tuned to the appropriate accuracy. PMID:26589009
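
    Independent of the particular CFD scheme, the cost-saving pattern is to recompute an expensive Hessian only every K steps and reuse it in between. The toy below applies that pattern with a plain forward-difference Hessian of a made-up two-dimensional potential; it does not reproduce the paper's compact finite difference schemes.

```python
import numpy as np

def grad(q):   # gradient of the toy potential V = q0**2 + 0.3*q0*q1**2 + q1**4
    return np.array([2 * q[0] + 0.3 * q[1] ** 2, 0.6 * q[0] * q[1] + 4 * q[1] ** 3])

def fd_hessian(q, h=1e-4):                   # forward-difference Hessian of V
    g0, n = grad(q), q.size
    H = np.zeros((n, n))
    for i in range(n):
        e = np.zeros(n); e[i] = h
        H[:, i] = (grad(q + e) - g0) / h
    return 0.5 * (H + H.T)                   # symmetrize

q, K, H_cached = np.array([0.3, -0.2]), 5, None
for step in range(20):
    if step % K == 0:                        # expensive evaluation every K steps only
        H_cached = fd_hessian(q)
    q = q - 0.05 * np.linalg.solve(H_cached + np.eye(2), grad(q))  # damped step
print("final q:", q, "  |grad|:", np.linalg.norm(grad(q)))
```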

  8. Novel bivariate moment-closure approximations.

    PubMed

    Krishnarajah, Isthrinayagy; Marion, Glenn; Gibson, Gavin

    2007-08-01

    Nonlinear stochastic models are typically intractable to analytic solutions and hence, moment-closure schemes are used to provide approximations to these models. Existing closure approximations are often unable to describe transient aspects caused by extinction behaviour in a stochastic process. Recent work has tackled this problem in the univariate case. In this study, we address this problem by introducing novel bivariate moment-closure methods based on mixture distributions. Novel closure approximations are developed, based on the beta-binomial, zero-modified distributions and the log-Normal, designed to capture the behaviour of the stochastic SIS model with varying population size, around the threshold between persistence and extinction of disease. The idea of conditional dependence between variables of interest underlies these mixture approximations. In the first approximation, we assume that the distribution of infectives (I) conditional on population size (N) is governed by the beta-binomial and for the second form, we assume that I is governed by zero-modified beta-binomial distribution where in either case N follows a log-Normal distribution. We analyse the impact of coupling and inter-dependency between population variables on the behaviour of the approximations developed. Thus, the approximations are applied in two situations in the case of the SIS model where: (1) the death rate is independent of disease status; and (2) the death rate is disease-dependent. Comparison with simulation shows that these mixture approximations are able to predict disease extinction behaviour and describe transient aspects of the process.

  9. Quirks of Stirling's Approximation

    ERIC Educational Resources Information Center

    Macrae, Roderick M.; Allgeier, Benjamin M.

    2013-01-01

    Stirling's approximation to ln "n"! is typically introduced to physical chemistry students as a step in the derivation of the statistical expression for the entropy. However, naive application of this approximation leads to incorrect conclusions. In this article, the problem is first illustrated using a familiar "toy…
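    A minimal numerical check (not from the article) makes the quoted pitfall concrete: the two-term form n ln n - n has an absolute error in ln n! that grows like (1/2) ln(2 pi n), while adding that half-log term makes the error vanish with n:

      import math

      def stirling_two_term(n):   # ln n! ~ n ln n - n
          return n * math.log(n) - n

      def stirling_full(n):       # ln n! ~ n ln n - n + 0.5 ln(2 pi n)
          return n * math.log(n) - n + 0.5 * math.log(2.0 * math.pi * n)

      for n in (10, 100, 10**6):
          exact = math.lgamma(n + 1)   # ln n! without overflow
          print(n, exact - stirling_two_term(n), exact - stirling_full(n))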

  10. Spline approximations for nonlinear hereditary control systems

    NASA Technical Reports Server (NTRS)

    Daniel, P. L.

    1982-01-01

    A spline-based approximation scheme is discussed for optimal control problems governed by nonlinear nonautonomous delay differential equations. The approximating framework reduces the original control problem to a sequence of optimization problems governed by ordinary differential equations. Convergence proofs, which appeal directly to dissipative-type estimates for the underlying nonlinear operator, are given and numerical findings are summarized.

  11. Diagonal Pade approximations for initial value problems

    SciTech Connect

    Reusch, M.F.; Ratzan, L.; Pomphrey, N.; Park, W.

    1987-06-01

    Diagonal Pade approximations to the time evolution operator for initial value problems are applied in a novel way to the numerical solution of these problems by explicitly factoring the polynomials of the approximation. A remarkable gain over conventional methods in efficiency and accuracy of solution is obtained. 20 refs., 3 figs., 1 tab.
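    The lowest diagonal Pade approximant gives a feel for the approach (a minimal sketch, not the explicit polynomial-factoring strategy of the report): the (1,1) approximant of the exponential, e^z ~ (1 + z/2)/(1 - z/2), applied to dy/dt = Ay, is the familiar Crank-Nicolson/Cayley step:

      import numpy as np

      # dy/dt = A y advanced with the (1,1) diagonal Pade approximant of
      # exp(dt*A): exp(z) ~ (1 + z/2) / (1 - z/2).
      A = np.array([[0.0, 1.0], [-1.0, 0.0]])   # harmonic oscillator
      y = np.array([1.0, 0.0])
      dt, steps = 0.01, 1000
      I = np.eye(2)
      step = np.linalg.solve(I - 0.5 * dt * A, I + 0.5 * dt * A)
      for _ in range(steps):
          y = step @ y
      # At t = 10 the exact solution is (cos t, -sin t).
      print(y, [np.cos(10.0), -np.sin(10.0)])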

  12. Fluctuations of healthy and unhealthy heartbeat intervals

    NASA Astrophysics Data System (ADS)

    Lan, Boon Leong; Toda, Mikito

    2013-04-01

    We show that the RR-interval fluctuations, defined as the differences between the natural logarithms of successive RR intervals, for healthy, congestive-heart-failure (CHF) and atrial-fibrillation (AF) subjects are well modeled by non-Gaussian stable distributions. Our results suggest that healthy or unhealthy RR-interval fluctuation can generally be modeled as a sum of a large number of independent physiological effects which are identically distributed with infinite variance. Furthermore, we show for the first time that one indicator —the scale parameter of the stable distribution— is sufficient to robustly distinguish the three groups of subjects. The scale parameters for healthy subjects are smaller than those for AF subjects but larger than those for CHF subjects —this ordering suggests that the scale parameter could be used to objectively quantify the severity of CHF and AF over time and also serve as an early warning signal for a healthy person when it approaches either boundary of the healthy range.
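    A hedged sketch of the kind of fit involved, using SciPy's stable distribution on synthetic data (real inputs would be diff(log(RR)) from ECG beat annotations; the maximum-likelihood fit can be slow):

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      # Stand-in for log RR-interval increments: synthetic alpha-stable noise.
      x = stats.levy_stable.rvs(alpha=1.7, beta=0.0, scale=0.02, size=300,
                                random_state=rng)
      # Fit all four stable parameters; the scale is the group discriminator.
      alpha, beta, loc, scale = stats.levy_stable.fit(x)
      print(f"alpha={alpha:.2f}  scale={scale:.4f}")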

  13. Hyperplane arrangements, interval orders, and trees.

    PubMed Central

    Stanley, R P

    1996-01-01

    A hyperplane arrangement is a finite set of hyperplanes in a real affine space. An especially important arrangement is the braid arrangement, which is the set of all hyperplanes x_i - x_j = 1, 1 ≤ i < j ≤ n. Some combinatorial properties of certain deformations of the braid arrangement are surveyed; in particular, there are unexpected connections with the theory of interval orders and with the enumeration of trees. For instance, the number of labeled interval orders that can be obtained from n intervals I1,..., In of generic lengths is counted. There is also discussed an arrangement due to N. Linial whose number of regions is the number of alternating (or intransitive) trees, as defined by Gelfand, Graev, and Postnikov [Gelfand, I. M., Graev, M. I., and Postnikov, A. (1995), preprint]. Finally, a refinement is given, related to counting labeled trees by number of inversions, of a result of Shi [Shi, J.-Y. (1986), Lecture Notes in Mathematics, no. 1179, Springer-Verlag] that a certain deformation of the braid arrangement has (n + 1)^(n-1) regions. PMID:11607643

  14. Compressibility Corrections to Closure Approximations for Turbulent Flow Simulations

    SciTech Connect

    Cloutman, L D

    2003-02-01

    We summarize some modifications to the usual closure approximations for statistical models of turbulence that are necessary for use with compressible fluids at all Mach numbers. We concentrate here on the gradient-flux approximation for the turbulent heat flux, on the buoyancy production of turbulence kinetic energy, and on a modification of the Smagorinsky model to include buoyancy. In all cases, there are pressure gradient terms that do not appear in the incompressible models and are usually omitted in compressible-flow models. Omission of these terms allows unphysical rates of entropy change.

  15. Observation and Structure Determination of an Oxide Quasicrystal Approximant

    NASA Astrophysics Data System (ADS)

    Förster, S.; Trautmann, M.; Roy, S.; Adeagbo, W. A.; Zollner, E. M.; Hammer, R.; Schumann, F. O.; Meinel, K.; Nayak, S. K.; Mohseni, K.; Hergert, W.; Meyerheim, H. L.; Widdra, W.

    2016-08-01

    We report on the first observation of an approximant structure to the recently discovered two-dimensional oxide quasicrystal. Using scanning tunneling microscopy, low-energy electron diffraction, and surface x-ray diffraction in combination with ab initio calculations, the atomic structure and the bonding scheme are determined. The oxide approximant follows a 3².4.3.4 Archimedean tiling. Ti atoms reside at the corners of each tiling element and are threefold coordinated to oxygen atoms. Ba atoms separate the TiO₃ clusters, leading to a fundamental tiling edge length of 6.7 Å.

  16. Observation and Structure Determination of an Oxide Quasicrystal Approximant.

    PubMed

    Förster, S; Trautmann, M; Roy, S; Adeagbo, W A; Zollner, E M; Hammer, R; Schumann, F O; Meinel, K; Nayak, S K; Mohseni, K; Hergert, W; Meyerheim, H L; Widdra, W

    2016-08-26

    We report on the first observation of an approximant structure to the recently discovered two-dimensional oxide quasicrystal. Using scanning tunneling microscopy, low-energy electron diffraction, and surface x-ray diffraction in combination with ab initio calculations, the atomic structure and the bonding scheme are determined. The oxide approximant follows a 3².4.3.4 Archimedean tiling. Ti atoms reside at the corners of each tiling element and are threefold coordinated to oxygen atoms. Ba atoms separate the TiO₃ clusters, leading to a fundamental tiling edge length of 6.7 Å. PMID:27610863

  17. Superfluidity of heated Fermi systems in the static fluctuation approximation

    SciTech Connect

    Khamzin, A. A.; Nikitin, A. S.; Sitdikov, A. S.

    2015-10-15

    Superfluidity properties of heated finite Fermi systems are studied in the static fluctuation approximation, an original method that relies on a single, controlled approximation, which permits correctly taking quasiparticle correlations into account and thereby going beyond the independent-quasiparticle model. A closed self-consistent set of equations for calculating correlation functions at finite temperature is obtained for a finite Fermi system described by the Bardeen–Cooper–Schrieffer Hamiltonian. An equation for the energy gap is found with allowance for fluctuation effects. It is shown that the phase transition to the superfluid state is smeared upon the inclusion of fluctuations.

  18. An approximate model for pulsar navigation simulation

    NASA Astrophysics Data System (ADS)

    Jovanovic, Ilija; Enright, John

    2016-02-01

    This paper presents an approximate model for the simulation of pulsar aided navigation systems. High fidelity simulations of these systems are computationally intensive and impractical for simulating periods of a day or more. Simulation of yearlong missions is done by abstracting navigation errors as periodic Gaussian noise injections. This paper presents an intermediary approximate model to simulate position errors for periods of several weeks, useful for building more accurate Gaussian error models. This is done by abstracting photon detection and binning, replacing it with a simple deterministic process. The approximate model enables faster computation of error injection models, allowing the error model to be inexpensively updated throughout a simulation. Testing of the approximate model revealed an optimistic performance prediction for non-millisecond pulsars with more accurate predictions for pulsars in the millisecond spectrum. This performance gap was attributed to noise which is not present in the approximate model but can be predicted and added to improve accuracy.

  19. Approximate error conjugate gradient minimization methods

    DOEpatents

    Kallman, Jeffrey S

    2013-05-21

    In one embodiment, a method includes selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, calculating an approximate error using the subset of rays, and calculating a minimum in a conjugate gradient direction based on the approximate error. In another embodiment, a system includes a processor for executing logic, logic for selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, logic for calculating an approximate error using the subset of rays, and logic for calculating a minimum in a conjugate gradient direction based on the approximate error. In other embodiments, computer program products, methods, and systems are described capable of using approximate error in constrained conjugate gradient minimization problems.

  20. Approximate Shortest Path Queries Using Voronoi Duals

    NASA Astrophysics Data System (ADS)

    Honiden, Shinichi; Houle, Michael E.; Sommer, Christian; Wolff, Martin

    We propose an approximation method to answer point-to-point shortest path queries in undirected edge-weighted graphs, based on random sampling and Voronoi duals. We compute a simplification of the graph by selecting nodes independently at random with probability p. Edges are generated as the Voronoi dual of the original graph, using the selected nodes as Voronoi sites. This overlay graph allows for fast computation of approximate shortest paths for general, undirected graphs. The time-quality tradeoff decision can be made at query time. We provide bounds on the approximation ratio of the path lengths as well as experimental results. The theoretical worst-case approximation ratio is bounded by a logarithmic factor. Experiments show that our approximation method based on Voronoi duals has extremely fast preprocessing time and efficiently computes reasonably short paths.
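    One ingredient of the method, assigning every node to its nearest sampled site, is a multi-source Dijkstra run; below is a minimal sketch of that step only (the paper's full preprocessing and query pipeline is not reproduced):

      import heapq, random

      def voronoi_sites(adj, sites):
          # Multi-source Dijkstra: grow shortest-path trees from all sampled
          # sites at once; each node ends up labeled with its nearest site.
          # adj: {node: [(neighbor, weight), ...]} for an undirected graph.
          dist = {s: (0.0, s) for s in sites}
          pq = [(0.0, s, s) for s in sites]
          heapq.heapify(pq)
          while pq:
              d, u, site = heapq.heappop(pq)
              if dist[u][0] < d:
                  continue  # stale queue entry
              for v, w in adj[u]:
                  if d + w < dist.get(v, (float("inf"), None))[0]:
                      dist[v] = (d + w, site)
                      heapq.heappush(pq, (d + w, v, site))
          return dist  # node -> (distance to nearest site, owning site)

      # Usage: path graph 0-1-2-3-4 with unit weights, two random sites.
      adj = {i: [(j, 1.0) for j in (i - 1, i + 1) if 0 <= j <= 4]
             for i in range(5)}
      print(voronoi_sites(adj, random.sample(range(5), 2)))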

  1. Nth-order flat approximation of the signum function by a polynomial

    NASA Technical Reports Server (NTRS)

    Hosenthien, H. H.

    1972-01-01

    In the interval studied, the signum function, sgn x, was demonstrated to be uniquely approximated by an odd polynomial f_n(x) of order 2n-1, for which the approximation is nth-order flat with respect to the points (1,1) and (-1,-1). A theorem was proved which states that for even integers n ≥ 2, the approximating polynomial has a pair of nonzero real roots ±x_n such that the x_n form a monotonically decreasing sequence which converges to √2 as n approaches infinity. For odd n, f_n(x) represents a strictly increasing monotonic function for all real x. As n tends to infinity, f_n(x) converges to sgn x uniformly in two interval ranges.

  2. Probable detection of solar neutrons by ground-level neutron monitors during STIP interval 16

    NASA Technical Reports Server (NTRS)

    Shea, M. A.; Smart, D. F.; Flueckiger, E. O.

    1987-01-01

    The third solar neutron event detected by Earth-orbiting spacecraft was observed during STIP Interval XVI. The solar flare beginning at 2356 UT on 24 April 1984 produced a variety of emissions including gamma rays and solar neutrons. The neutrons were observed by the SMM satellite and the neutron-decay protons were observed on the ISEE-3 spacecraft. Between 0000 and 0010 UT on 25 April, increases of 0.7 and 1.7 percent were recorded by neutron monitors at Tokyo (Itabashi) and Morioka, Japan. These stations were located about 42 degrees from the sub-solar point; consequently, there is approximately 1400 grams of atmosphere between the incident neutrons at the top of the atmosphere and their detection on the Earth's surface. Nevertheless, the time coincidence of a small increase in the total counting rate of two independent neutron monitors indicates the presence of solar neutrons with energies greater than 400 MeV at the top of the Earth's atmosphere. The small increases in the counting rate emphasize the difficulty in identifying similar events using historical neutron monitor data.

  3. Hydration thermodynamics beyond the linear response approximation

    NASA Astrophysics Data System (ADS)

    Raineri, Fernando O.

    2016-10-01

    The solvation energetics associated with the transformation of a solute molecule at infinite dilution in water from an initial state A to a final state B is reconsidered. The two solute states have different potential energies of interaction, Ψ_A and Ψ_B, with the solvent environment. Throughout the A → B transformation of the solute, the solvation system is described by a Hamiltonian H(ξ) that changes linearly with the coupling parameter ξ. By focusing on the characterization of the probability density ℘_ξ(y) that the dimensionless perturbational solute-solvent interaction energy Y = β(Ψ_B - Ψ_A) has numerical value y when the coupling parameter is ξ, we derive a hierarchy of differential equation relations between the ξ-dependent cumulant functions of various orders in the expansion of the appropriate cumulant generating function. On the basis of this theoretical framework we then introduce an inherently nonlinear solvation model for which we are able to find analytical results for both ℘_ξ(y) and the solvation thermodynamic functions. The solvation model is based on the premise that there is an upper or a lower bound (depending on the nature of the interactions considered) to the amplitude of the fluctuations of Y in the solution system at equilibrium. The results reveal essential differences in behavior for the model when compared with the linear response approximation to solvation, particularly with regard to the probability density ℘_ξ(y). The analytical expressions for the solvation properties show, however, that the linear response behavior is recovered from the new model when the room for the thermal fluctuations in Y is not restricted by the existence of a nearby bound. We compare the predictions of the model with the results from molecular dynamics computer simulations for aqueous solvation, in

  4. Further results on the L1 analysis of sampled-data systems via kernel approximation approach

    NASA Astrophysics Data System (ADS)

    Kim, Jung Hoon; Hagiwara, Tomomichi

    2016-08-01

    This paper gives two methods for the L1 analysis of sampled-data systems, by which we mean computing the L∞-induced norm of sampled-data systems. This is achieved by developing what we call the kernel approximation approach in the setting of sampled-data systems. We first consider the lifting treatment of sampled-data systems and give an operator theoretic representation of their input/output relation. We further apply the fast-lifting technique, by which the sampling interval [0, h) is divided into M subintervals of equal width, and provide methods for computing the L∞-induced norm. In contrast to a similar approach developed earlier, called the input approximation approach, we use an idea of kernel approximation, in which the kernel function of an input operator and the hold function of an output operator are approximated by piecewise constant or piecewise linear functions. Furthermore, it is shown that the approximation errors in the piecewise constant or piecewise linear approximation schemes converge to 0 at the rate of 1/M or 1/M², respectively. In comparison with the existing input approximation approach, in which the input function (rather than the kernel function) of the input operator is approximated by piecewise constant or piecewise linear functions, we show that the kernel approximation approach gives improved computation results. More precisely, even though the convergence rates in the kernel approximation approach remain qualitatively the same as those in the input approximation approach, the newly developed approach can lead to quantitatively smaller approximation errors, particularly when the piecewise linear approximation scheme is used. Finally, a numerical example is given to demonstrate the effectiveness of the kernel approximation approach with this scheme.
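    The quoted 1/M rate for the piecewise constant scheme is easy to reproduce in miniature (a toy kernel k(t) = e^(-t), not the operators of the paper): approximating a smooth kernel on [0, h) by its midpoint values on M equal subintervals gives a sup-norm error that scales like 1/M:

      import numpy as np

      # Piecewise-constant (midpoint) approximation of a kernel on [0, h):
      # the sup-norm error decays like 1/M, so err * M stays roughly flat.
      h = 1.0
      k = lambda t: np.exp(-t)        # stand-in kernel
      t = np.linspace(0.0, h, 100001)
      for M in (4, 16, 64, 256):
          idx = np.floor(t / (h / M)).clip(max=M - 1)
          centers = (idx + 0.5) * (h / M)
          err = np.abs(k(t) - k(centers)).max()
          print(M, err, err * M)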

  5. Intensity approximation of random fluctuation in complex systems

    NASA Astrophysics Data System (ADS)

    Yulmetyev, R. M.; Gafarov, F. M.; Yulmetyeva, D. G.; Emeljanova, N. A.

    2002-01-01

    The Markov and non-Markov processes in complex systems are examined with the help of the dynamical information Shannon entropy method. Here we consider the essential role of two mutually independent channels of entropy, involving creation of correlation and annihilation of correlation. The developed method has been used to analyze intensity fluctuations in complex systems of various nature: in psychology (to analyze numerical and pattern short-time human memory, and to study the effect of stress on the parameters of the dynamical tapping test) and in cardiology (to analyze the random dynamics of RR intervals in human ECGs and to diagnose various diseases of human cardiovascular systems). The results show that the application of the intensity approximation substantially improves the diagnostics of parameters in the evolution of human dynamic states.

  6. Probabilistic flood forecast: Exact and approximate predictive distributions

    NASA Astrophysics Data System (ADS)

    Krzysztofowicz, Roman

    2014-09-01

    For quantification of predictive uncertainty at the forecast time t_0, the future hydrograph is viewed as a discrete-time continuous-state stochastic process {H_n: n=1,…,N}, where H_n is the river stage at time instance t_n > t_0. The probabilistic flood forecast (PFF) should specify a sequence of exceedance functions {F̄_n: n=1,…,N} such that F̄_n(h) = P(Z_n > h), where P stands for probability, and Z_n is the maximum river stage within the time interval (t_0, t_n], practically Z_n = max{H_1,…,H_n}. This article presents a method for deriving the exact PFF from a probabilistic stage transition forecast (PSTF) produced by the Bayesian forecasting system (BFS). It then recalls (i) the bounds on F̄_n, which can be derived cheaply from a probabilistic river stage forecast (PRSF) produced by a simpler version of the BFS, and (ii) an approximation to F̄_n, which can be constructed from the bounds via a recursive linear interpolator (RLI) without information about the stochastic dependence in the process {H_1,…,H_n}, as this information is not provided by the PRSF. The RLI is substantiated by comparing the approximate PFF against the exact PFF. Being reasonably accurate and very simple, the RLI may be attractive for real-time flood forecasting in systems of lesser complexity. All methods are illustrated with a case study for a 1430 km² headwater basin wherein the PFF is produced for a 72-h interval discretized into 6-h steps.

  7. Confidence intervals for a random-effects meta-analysis based on Bartlett-type corrections.

    PubMed

    Noma, Hisashi

    2011-12-10

    In medical meta-analysis, the DerSimonian-Laird confidence interval for the average treatment effect has been widely adopted in practice. However, it is well known that its coverage probability (the probability that the interval actually includes the true value) can be substantially below the target level. One particular reason is that the validity of the confidence interval depends on the assumption that the number of synthesized studies is sufficiently large. In typical medical meta-analyses, the number of studies is fewer than 20. In this article, we developed three confidence intervals for improving coverage properties, based on (i) the Bartlett corrected likelihood ratio statistic, (ii) the efficient score statistic, and (iii) the Bartlett-type adjusted efficient score statistic. The Bartlett and Bartlett-type corrections improve the large sample approximations for the likelihood ratio and efficient score statistics. Through numerical evaluations by simulations, these confidence intervals demonstrated better coverage properties than the existing methods. In particular, with a moderate number of synthesized studies, the Bartlett and Bartlett-type corrected confidence intervals performed well. An application to a meta-analysis of the treatment for myocardial infarction with intravenous magnesium is presented.
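    For context, the standard DerSimonian-Laird interval that the paper improves on can be written down in a few lines (a minimal sketch of the baseline method only; the Bartlett-type corrections themselves are not implemented here, and the example data are made up):

      import numpy as np
      from scipy import stats

      def dersimonian_laird_ci(y, v, level=0.95):
          # y: study effect estimates; v: their within-study variances.
          y, v = np.asarray(y, float), np.asarray(v, float)
          w = 1.0 / v
          mu_fixed = np.sum(w * y) / np.sum(w)
          q = np.sum(w * (y - mu_fixed) ** 2)       # Cochran's Q
          tau2 = max(0.0, (q - (len(y) - 1)) /
                     (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
          w_star = 1.0 / (v + tau2)                 # random-effects weights
          mu = np.sum(w_star * y) / np.sum(w_star)
          se = 1.0 / np.sqrt(np.sum(w_star))
          z = stats.norm.ppf(0.5 + level / 2.0)
          return mu, (mu - z * se, mu + z * se)

      # Five hypothetical log odds ratios with their variances:
      print(dersimonian_laird_ci([-0.4, -0.2, 0.1, -0.6, -0.3],
                                 [0.04, 0.09, 0.05, 0.12, 0.07]))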

  8. Examining the exobase approximation: DSMC models of Titan's upper atmosphere

    NASA Astrophysics Data System (ADS)

    Tucker, O. J.; Waalkes, W.; Tenishev, V.; Johnson, R. E.; Bieler, A. M.; Nagy, A. F.

    2015-12-01

    Chamberlain (1963) developed the so-called exobase approximation for planetary atmospheres, below which it is assumed that molecular collisions maintain thermal equilibrium and above which collisions are negligible. Here we present an examination of the exobase approximation applied in the DeLaHaye et al. (2007) study used to extract the energy deposition and non-thermal escape rates from Titan's atmosphere using the INMS data for the TA and T5 Cassini encounters. In that study a Liouville theorem based approach is used to fit the density data for N2 and CH4 assuming an enhanced population of suprathermal molecules (E >> kT) was present at the exobase. The density data was fit in the altitude region of 1450-2000 km using a kappa energy distribution to characterize the non-thermal component. Here we again fit the data using the conventional kappa energy distribution function, and then use the Direct Simulation Monte Carlo (DSMC) technique (Bird 1994) to determine the effect of molecular collisions. The resulting fits improve on those reported in DeLaHaye et al. (2007). In addition, the collisional and collisionless DSMC results are compared to evaluate the validity of the assumed energy distribution function and the collisionless approximation. We find that differences between fitting procedures to the INMS data carried out within a scale height of the assumed exobase can result in the extraction of very different energy deposition and escape rates. DSMC simulations performed with and without collisions to test the Liouville theorem based approximation show that collisions affect the density and temperature profiles well above the exobase, as well as the escape rate. This research was supported by grant NNH12ZDA001N from the NASA ROSES OPR program. The computations were made with NAS computer resources at NASA Ames under GID 26135.

  9. Easy identification of generalized common and conserved nested intervals.

    PubMed

    de Montgolfier, Fabien; Raffinot, Mathieu; Rusu, Irena

    2014-07-01

    In this article we explain how to easily compute gene clusters, formalized by classical or generalized nested common or conserved intervals, between a set of K genomes represented as K permutations. A b-nested common (resp. conserved) interval I of size |I| is either an interval of size 1 or a common (resp. conserved) interval that contains another b-nested common (resp. conserved) interval of size at least |I|-b. When b=1, this corresponds to the classical notion of nested interval. We exhibit two simple algorithms to output all b-nested common or conserved intervals between K permutations in O(Kn+nocc) time, where nocc is the total number of such intervals. We also explain how to count all b-nested intervals in O(Kn) time. New properties of the family of conserved intervals are proposed to do so.
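    The definitions are easy to exercise by brute force (a didactic enumeration of plain common intervals, not the paper's O(Kn + nocc) algorithms, and without the b-nested refinement):

      def common_intervals(perms):
          # A common interval is a set of elements forming a contiguous
          # block in every permutation. Brute force: test every window of
          # the first permutation against the positions in the others.
          pos = [{v: i for i, v in enumerate(p)} for p in perms]
          n, result = len(perms[0]), []
          for i in range(n):
              for j in range(i, n):
                  block = set(perms[0][i:j + 1])
                  if all(max(pp[v] for v in block) - min(pp[v] for v in block)
                         == len(block) - 1 for pp in pos[1:]):
                      result.append(sorted(block))
          return result

      # Usage: singletons and the full set are always common intervals.
      print(common_intervals([[1, 2, 3, 4, 5], [3, 1, 2, 5, 4]]))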

  10. Easy identification of generalized common and conserved nested intervals.

    PubMed

    de Montgolfier, Fabien; Raffinot, Mathieu; Rusu, Irena

    2014-07-01

    In this article we explain how to easily compute gene clusters, formalized by classical or generalized nested common or conserved intervals, between a set of K genomes represented as K permutations. A b-nested common (resp. conserved) interval I of size |I| is either an interval of size 1 or a common (resp. conserved) interval that contains another b-nested common (resp. conserved) interval of size at least |I|-b. When b=1, this corresponds to the classical notion of nested interval. We exhibit two simple algorithms to output all b-nested common or conserved intervals between K permutations in O(Kn+nocc) time, where nocc is the total number of such intervals. We also explain how to count all b-nested intervals in O(Kn) time. New properties of the family of conserved intervals are proposed to do so. PMID:24650221

  11. Approximate knowledge compilation: The first order case

    SciTech Connect

    Val, A. del

    1996-12-31

    Knowledge compilation procedures make a knowledge base more explicit so as to make inference with respect to the compiled knowledge base tractable, or at least more efficient. Most work to date in this area has been restricted to the propositional case, despite the importance of first order theories for expressing knowledge concisely. Focusing on (LUB) approximate compilation, our contribution is twofold: (1) we present a new ground algorithm for approximate compilation which can produce exponential savings with respect to the previously known algorithm; (2) we show that both ground algorithms can be lifted to the first order case, preserving their correctness for approximate compilation.

  12. Alternative approximation concepts for space frame synthesis

    NASA Technical Reports Server (NTRS)

    Lust, R. V.; Schmit, L. A.

    1985-01-01

    A method for space frame synthesis based on the application of a full gamut of approximation concepts is presented. It is found that with the thoughtful selection of design space, objective function approximation, constraint approximation and mathematical programming problem formulation options it is possible to obtain near minimum mass designs for a significant class of space frame structural systems while requiring fewer than 10 structural analyses. Example problems are presented which demonstrate the effectiveness of the method for frame structures subjected to multiple static loading conditions with limits on structural stiffness and strength.

  13. APPROXIMATING LIGHT RAYS IN THE SCHWARZSCHILD FIELD

    SciTech Connect

    Semerák, O.

    2015-02-10

    A short formula is suggested that approximates photon trajectories in the Schwarzschild field better than other simple prescriptions from the literature. We compare it with various "low-order competitors", namely, with those following from exact formulas for small M, with one of the results based on pseudo-Newtonian potentials, with a suitably adjusted hyperbola, and with the effective and often employed approximation by Beloborodov. Our main concern is the shape of the photon trajectories at finite radii, yet asymptotic behavior is also discussed, important for lensing. An example is attached indicating that the newly suggested approximation is usable—and very accurate—for practically solving the ray-deflection exercise.
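    The Beloborodov approximation mentioned above is often quoted as 1 - cos α = (1 - cos ψ)(1 - r_s/r); a minimal sketch under that assumption (the paper's own short formula is not reproduced in this record):

      import math

      def beloborodov_alpha(psi, r_over_rs):
          # Emission angle alpha (from the radial direction) for a ray seen
          # at angle psi from radius r, using the often-quoted relation
          # 1 - cos(alpha) = (1 - cos(psi)) * (1 - r_s/r). Treat as an
          # approximation that degrades near the photon sphere.
          c = 1.0 - (1.0 - math.cos(psi)) * (1.0 - 1.0 / r_over_rs)
          return math.acos(c)

      # At r = 3 r_s, a ray seen at psi = 90 deg left the surface at a
      # smaller (more radial) angle:
      print(math.degrees(beloborodov_alpha(math.radians(90.0), 3.0)))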

  14. Algebraic approximations for transcendental equations with applications in nanophysics

    NASA Astrophysics Data System (ADS)

    Barsan, Victor

    2015-09-01

    Using algebraic approximations of trigonometric or hyperbolic functions, a class of transcendental equations can be transformed into tractable algebraic equations. Studying transcendental equations this way gives the eigenvalues of Sturm-Liouville problems associated with the wave equation, mainly the Schroedinger equation; these algebraic approximations provide approximate analytical expressions for the energy of electrons and phonons in quantum wells, quantum dots (QDs) and quantum wires, within one-particle models of such systems. The advantage of this approach, compared to numerical calculations, is that the final result preserves the functional dependence on the physical parameters of the problem. The errors of this method, situated between a few percent and ?, are carefully analysed. Several applications, for quantum wells, QDs and quantum wires, are presented.

  15. Polynomial approximations of a class of stochastic multiscale elasticity problems

    NASA Astrophysics Data System (ADS)

    Hoang, Viet Ha; Nguyen, Thanh Chung; Xia, Bingxing

    2016-06-01

    We consider a class of elasticity equations in ℝ^d whose elastic moduli depend on n separated microscopic scales. The moduli are random and expressed as a linear expansion of a countable sequence of random variables which are independently and identically uniformly distributed in a compact interval. The multiscale Hellinger-Reissner mixed problem that allows for computing the stress directly and the multiscale mixed problem with a penalty term for nearly incompressible isotropic materials are considered. The stochastic problems are studied via deterministic problems that depend on a countable number of real parameters which represent the probabilistic law of the stochastic equations. We study the multiscale homogenized problems that contain all the macroscopic and microscopic information. The solutions of these multiscale homogenized problems are written as generalized polynomial chaos (gpc) expansions. We approximate these solutions by semidiscrete Galerkin approximating problems that project into the spaces of functions with only a finite number of N gpc modes. Assuming summability properties for the coefficients of the elastic moduli's expansion, we deduce bounds and summability properties for the solutions' gpc expansion coefficients. These bounds imply explicit rates of convergence in terms of N when the gpc modes used for the Galerkin approximation are chosen to correspond to the best N terms in the gpc expansion. For the mixed problem with a penalty term for nearly incompressible materials, we show that the rate of convergence for the best N term approximation is independent of the Lamé constants' ratio when it goes to ∞. Correctors for the homogenization problem are deduced. From these we establish correctors for the solutions of the parametric multiscale problems in terms of the semidiscrete Galerkin approximations. For two-scale problems, an explicit homogenization error which is uniform with respect to the parameters is deduced. Together

  16. Computing confidence intervals for standardized regression coefficients.

    PubMed

    Jones, Jeff A; Waller, Niels G

    2013-12-01

    With fixed predictors, the standard method (Cohen, Cohen, West, & Aiken, 2003, p. 86; Harris, 2001, p. 80; Hays, 1994, p. 709) for computing confidence intervals (CIs) for standardized regression coefficients fails to account for the sampling variability of the criterion standard deviation. With random predictors, this method also fails to account for the sampling variability of the predictor standard deviations. Nevertheless, under some conditions the standard method will produce CIs with accurate coverage rates. To delineate these conditions, we used a Monte Carlo simulation to compute empirical CI coverage rates in samples drawn from 36 populations with a wide range of data characteristics. We also computed the empirical CI coverage rates for 4 alternative methods that have been discussed in the literature: noncentrality interval estimation, the delta method, the percentile bootstrap, and the bias-corrected and accelerated bootstrap. Our results showed that for many data-parameter configurations (for example, sample size, predictor correlations, coefficient of determination R², orientation of β with respect to the eigenvectors of the predictor correlation matrix R_X) the standard method produced coverage rates that were close to their expected values. However, when population R² was large and when β approached the last eigenvector of R_X, then the standard method coverage rates were frequently below the nominal rate (sometimes by a considerable amount). In these conditions, the delta method and the 2 bootstrap procedures were consistently accurate. Results using noncentrality interval estimation were inconsistent. In light of these findings, we recommend that researchers use the delta method to evaluate the sampling variability of standardized regression coefficients.
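    A minimal sketch of one of the four alternatives compared, the percentile bootstrap (note the authors ultimately recommend the delta method, which is not implemented here); the data below are synthetic and illustrative:

      import numpy as np

      def std_beta(X, y):
          # Standardized coefficients: regress z-scored y on z-scored X.
          Xz = (X - X.mean(0)) / X.std(0, ddof=1)
          yz = (y - y.mean()) / y.std(ddof=1)
          return np.linalg.lstsq(Xz, yz, rcond=None)[0]

      def percentile_boot_ci(X, y, b=2000, level=0.95, seed=0):
          rng = np.random.default_rng(seed)
          n = len(y)
          boots = np.array([std_beta(X[idx], y[idx])
                            for idx in (rng.integers(0, n, n)
                                        for _ in range(b))])
          lo, hi = 50 * (1 - level), 50 * (1 + level)
          return np.percentile(boots, [lo, hi], axis=0)

      # Usage with two correlated predictors:
      rng = np.random.default_rng(1)
      X = rng.standard_normal((200, 2)); X[:, 1] += 0.5 * X[:, 0]
      y = 0.4 * X[:, 0] + 0.2 * X[:, 1] + rng.standard_normal(200)
      print(percentile_boot_ci(X, y))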

  17. ENERGY RELAXATION OF HELIUM ATOMS IN ASTROPHYSICAL GASES

    SciTech Connect

    Lewkow, N. R.; Kharchenko, V.; Zhang, P.

    2012-09-01

    We report accurate parameters describing energy relaxation of He atoms in atomic gases, important for astrophysics and atmospheric science. Collisional energy exchange between helium atoms and atomic constituents of the interstellar gas, heliosphere, and upper planetary atmosphere has been investigated. Energy transfer rates, number of collisions required for thermalization, energy distributions of recoil atoms, and other major parameters of energy relaxation for fast He atoms in thermal H, He, and O gases have been computed in a broad interval of energies from 10 meV to 10 keV. This energy interval is important for astrophysical applications involving the energy deposition of energetic atoms and ions into atmospheres of planets and exoplanets, atmospheric evolution, and analysis of non-equilibrium processes in the interstellar gas and heliosphere. Angular- and energy-dependent cross sections, required for an accurate description of the momentum-energy transfer, are obtained using ab initio interaction potentials and quantum mechanical calculations for scattering processes. Calculation methods used include partial wave analysis for collisional energies below 2 keV and the eikonal approximation at energies higher than 100 eV, keeping a significant energy region of overlap, 0.1-2 keV, between these two methods for their mutual verification. The partial wave method and the eikonal approximation excellently match results obtained with each other as well as experimental data, providing reliable cross sections in the astrophysically important interval of energies from 10 meV to 10 keV. Analytical formulae, interpolating obtained energy- and angular-dependent cross sections, are presented to simplify potential applications of the reported database. Thermalization of fast He atoms in the interstellar gas and energy relaxation of hot He and O atoms in the upper atmosphere of Mars are considered as illustrative examples of potential applications of the new database.

  18. Adiabatic approximation for the density matrix

    NASA Astrophysics Data System (ADS)

    Band, Yehuda B.

    1992-05-01

    An adiabatic approximation for the Liouville density-matrix equation which includes decay terms is developed. The adiabatic approximation employs the eigenvectors of the non-normal Liouville operator. The approximation is valid when there exists a complete set of eigenvectors of the non-normal Liouville operator (i.e., the eigenvectors span the density-matrix space), the time rate of change of the Liouville operator is small, and an auxiliary matrix is nonsingular. Numerical examples are presented involving efficient population transfer in a molecule by stimulated Raman scattering, with the intermediate level of the molecule decaying on a time scale that is fast compared with the pulse durations of the pump and Stokes fields. The adiabatic density-matrix approximation can be simply used to determine the density matrix for atomic or molecular systems interacting with cw electromagnetic fields when spontaneous emission or other decay mechanisms prevail.

  19. Approximation concepts for efficient structural synthesis

    NASA Technical Reports Server (NTRS)

    Schmit, L. A., Jr.; Miura, H.

    1976-01-01

    It is shown that efficient structural synthesis capabilities can be created by using approximation concepts to mesh finite element structural analysis methods with nonlinear mathematical programming techniques. The history of the application of mathematical programming techniques to structural design optimization problems is reviewed. Several rather general approximation concepts are described along with the technical foundations of the ACCESS 1 computer program, which implements several approximation concepts. A substantial collection of structural design problems involving truss and idealized wing structures is presented. It is concluded that since the basic ideas employed in creating the ACCESS 1 program are rather general, its successful development supports the contention that the introduction of approximation concepts will lead to the emergence of a new generation of practical and efficient, large scale, structural synthesis capabilities in which finite element analysis methods and mathematical programming algorithms will play a central role.

  20. Approximate probability distributions of the master equation

    NASA Astrophysics Data System (ADS)

    Thomas, Philipp; Grima, Ramon

    2015-07-01

    Master equations are common descriptions of mesoscopic systems. Analytical solutions to these equations can rarely be obtained. We here derive an analytical approximation of the time-dependent probability distribution of the master equation using orthogonal polynomials. The solution is given in two alternative formulations: a series with continuous and a series with discrete support, both of which can be systematically truncated. While both approximations satisfy the system size expansion of the master equation, the continuous distribution approximations become increasingly negative and tend to oscillations with increasing truncation order. In contrast, the discrete approximations rapidly converge to the underlying non-Gaussian distributions. The theory is shown to lead to particularly simple analytical expressions for the probability distributions of molecule numbers in metabolic reactions and gene expression systems.

  1. Linear Approximation SAR Azimuth Processing Study

    NASA Technical Reports Server (NTRS)

    Lindquist, R. B.; Masnaghetti, R. K.; Belland, E.; Hance, H. V.; Weis, W. G.

    1979-01-01

    A segmented linear approximation of the quadratic phase function that is used to focus the synthetic antenna of a SAR was studied. Ideal focusing, using a quadratically varying phase focusing function during the time radar target histories are gathered, requires a large number of complex multiplications. These can be largely eliminated by using linear approximation techniques. The result is a reduced processor size and chip count relative to ideally focused processing and a correspondingly increased feasibility for spaceworthy implementation. A preliminary design and sizing for a spaceworthy linear approximation SAR azimuth processor meeting requirements similar to those of the SEASAT-A SAR was developed. The study resulted in a design with approximately 1500 ICs, 1.2 cubic feet of volume, and 350 watts of power for a single-look, 4000-range-cell azimuth processor with 25-meter resolution.

  2. A Survey of Techniques for Approximate Computing

    DOE PAGESBeta

    Mittal, Sparsh

    2016-03-18

    Approximate computing trades off computation quality against the effort expended; as rising performance demands confront plateauing resource budgets, approximate computing has become not merely attractive but imperative. Here, we present a survey of techniques for approximate computing (AC). We discuss strategies for finding approximable program portions and monitoring output quality, techniques for using AC in different processing units (e.g., CPU, GPU and FPGA), processor components, memory technologies, etc., and programming frameworks for AC. Moreover, we classify these techniques based on several key characteristics to emphasize their similarities and differences. Finally, the aim of this paper is to provide insights to researchers into the working of AC techniques and to inspire more efforts in this area to make AC the mainstream computing approach in future systems.

  3. Feedback functions for variable-interval reinforcement

    PubMed Central

    Nevin, John A.; Baum, William M.

    1980-01-01

    On a given variable-interval schedule, the average obtained rate of reinforcement depends on the average rate of responding. An expression for this feedback effect is derived from the assumptions that free-operant responding occurs in bursts with a constant tempo, alternating with periods of engagement in other activities; that the durations of bursts and other activities are exponentially distributed; and that the rates of initiating and terminating bursts are inversely related. The expression provides a satisfactory account of the data of three experiments. PMID:16812187
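    The paper's derived feedback expression (bursts with constant tempo) is not reproduced in this record, but the general shape of a variable-interval feedback function can be illustrated by simulation under a simpler assumption: Poisson responding at rate B on a random-interval schedule with mean interval T, for which the obtained reinforcement rate is approximately 1/(T + 1/B):

      import numpy as np

      def obtained_rate(T, B, horizon=50000.0, seed=0):
          # Random-interval T schedule: a reinforcer arms after an
          # exponential interval; the first response afterwards collects it
          # and starts the next interval. Responding is Poisson at rate B.
          rng = np.random.default_rng(seed)
          t, armed_at, reinforcers = 0.0, rng.exponential(T), 0
          while t < horizon:
              t += rng.exponential(1.0 / B)          # next response
              if t >= armed_at:
                  reinforcers += 1
                  armed_at = t + rng.exponential(T)  # schedule the next one
          return reinforcers / horizon

      T = 30.0                                        # VI 30 s
      for B in (0.05, 0.2, 1.0, 5.0):
          print(B, obtained_rate(T, B), 1.0 / (T + 1.0 / B))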

  4. Polynomial approximation of functions in Sobolev spaces

    NASA Technical Reports Server (NTRS)

    Dupont, T.; Scott, R.

    1980-01-01

    Constructive proofs and several generalizations of approximation results of J. H. Bramble and S. R. Hilbert are presented. Using an averaged Taylor series, we represent a function as a polynomial plus a remainder. The remainder can be manipulated in many ways to give different types of bounds. Approximation of functions in fractional order Sobolev spaces is treated as well as the usual integer order spaces and several nonstandard Sobolev-like spaces.

  5. Introduction to the Maxwell Garnett approximation: tutorial.

    PubMed

    Markel, Vadim A

    2016-07-01

    This tutorial is devoted to the Maxwell Garnett approximation and related theories. Topics covered in this first, introductory part of the tutorial include the Lorentz local field correction, the Clausius-Mossotti relation and its role in the modern numerical technique known as the discrete dipole approximation, the Maxwell Garnett mixing formula for isotropic and anisotropic media, multicomponent mixtures and the Bruggeman equation, the concept of smooth field, and Wiener and Bergman-Milton bounds. PMID:27409680
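    The Maxwell Garnett mixing formula at the heart of the tutorial is compact enough to state directly; a minimal sketch for spherical inclusions (the numerical values below are illustrative, not from the tutorial):

      def maxwell_garnett(eps_incl, eps_host, f):
          # Effective permittivity of spherical inclusions at volume
          # fraction f in a host medium; valid for small f. Solves
          # (eps_eff - eps_h)/(eps_eff + 2 eps_h) = f (eps_i - eps_h)/(eps_i + 2 eps_h).
          beta = (eps_incl - eps_host) / (eps_incl + 2.0 * eps_host)
          return eps_host * (1.0 + 2.0 * f * beta) / (1.0 - f * beta)

      # 10% metal-like inclusions (eps = -2 + 0.3j) in glass (eps = 2.25):
      print(maxwell_garnett(-2.0 + 0.3j, 2.25, 0.10))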

  6. The Actinide Transition Revisited by Gutzwiller Approximation

    NASA Astrophysics Data System (ADS)

    Xu, Wenhu; Lanata, Nicola; Yao, Yongxin; Kotliar, Gabriel

    2015-03-01

    We revisit the problem of the actinide transition using the Gutzwiller approximation (GA) in combination with the local density approximation (LDA). In particular, we compute the equilibrium volumes of the actinide series and reproduce the abrupt change of density found experimentally near plutonium as a function of the atomic number. We discuss how this behavior relates to the electron correlations in the 5f states, the lattice structure, and the spin-orbit interaction. Our results are in good agreement with the experiments.

  7. Polynomial approximation of functions in Sobolev spaces

    SciTech Connect

    Dupont, T.; Scott, R.

    1980-04-01

    Constructive proofs and several generalizations of approximation results of J. H. Bramble and S. R. Hilbert are presented. Using an averaged Taylor series, we represent a function as a polynomial plus a remainder. The remainder can be manipulated in many ways to give different types of bounds. Approximation of functions in fractional order Sobolev spaces is treated as well as the usual integer order spaces and several nonstandard Sobolev-like spaces.

  8. Computing functions by approximating the input

    NASA Astrophysics Data System (ADS)

    Goldberg, Mayer

    2012-12-01

    In computing real-valued functions, it is ordinarily assumed that the input to the function is known, and it is the output that we need to approximate. In this work, we take the opposite approach: we show how to compute the values of some transcendental functions by approximating the input to these functions, and obtaining exact answers for their output. Our approach assumes only the most rudimentary knowledge of algebra and trigonometry, and makes no use of calculus.

  9. Approximate Solutions Of Equations Of Steady Diffusion

    NASA Technical Reports Server (NTRS)

    Edmonds, Larry D.

    1992-01-01

    Rigorous analysis yields reliable criteria for "best-fit" functions. Improved "curve-fitting" method yields approximate solutions to differential equations of steady-state diffusion. Method applies to problems in which rates of diffusion depend linearly or nonlinearly on concentrations of diffusants, approximate solutions analytic or numerical, and boundary conditions of Dirichlet type, of Neumann type, or mixture of both types. Applied to equations for diffusion of charge carriers in semiconductors in which mobilities and lifetimes of charge carriers depend on concentrations.

  10. Temperature dependence of electronic eigenenergies in the adiabatic harmonic approximation

    NASA Astrophysics Data System (ADS)

    Poncé, S.; Antonius, G.; Gillet, Y.; Boulanger, P.; Laflamme Janssen, J.; Marini, A.; Côté, M.; Gonze, X.

    2014-12-01

    The renormalization of electronic eigenenergies due to electron-phonon interactions (temperature dependence and zero-point motion effect) is important in many materials. We address it in the adiabatic harmonic approximation, based on first principles (e.g., density-functional theory), from different points of view: directly from atomic position fluctuations or, alternatively, from Janak's theorem generalized to the case where the Helmholtz free energy, including the vibrational entropy, is used. We prove their equivalence, based on the usual form of Janak's theorem and on the dynamical equation. We then also place the Allen-Heine-Cardona (AHC) theory of the renormalization in a first-principles context. The AHC theory relies on the rigid-ion approximation, and naturally leads to a self-energy (Fan) contribution and a Debye-Waller contribution. Such a splitting can also be done for the complete harmonic adiabatic expression, in which the rigid-ion approximation is not required. A numerical study within the density-functional perturbation theory framework allows us to compare the AHC theory with frozen-phonon calculations, with or without the rigid-ion approximation. For the two different numerical approaches without non-rigid-ion terms, the agreement is better than 7 μeV in the case of diamond, which represents agreement to five significant digits. The magnitude of the non-rigid-ion terms in this case is also presented, distinguishing the contributions of specific phonon modes to different electronic eigenenergies.

  11. A quantum relaxation-time approximation for finite fermion systems

    SciTech Connect

    Reinhard, P.-G.; Suraud, E.

    2015-03-15

    We propose a relaxation time approximation for the description of the dynamics of strongly excited fermion systems. Our approach is based on time-dependent density functional theory at the level of the local density approximation. This mean-field picture is augmented by collisional correlations handled in the relaxation time approximation, which is inspired by the corresponding semi-classical picture. The method involves estimating microscopic relaxation rates/times, presently taken from well-established semi-classical experience. The relaxation time approximation implies evaluation of the instantaneous equilibrium state towards which the dynamical state is progressively driven at the pace of the microscopic relaxation time. As a test case, we consider Na clusters of various sizes excited either by a swift ion projectile or by a short and intense laser pulse, driven in various dynamical regimes ranging from linear to strongly non-linear reactions. We observe a strong effect of dissipation on sensitive observables such as net ionization and angular distributions of emitted electrons. The effect is especially large for moderate excitations, where typical relaxation/dissipation time scales compete efficiently with ionization for dissipating the available excitation energy. Technical details on the actual procedure to implement a working recipe of such a quantum relaxation approximation are given in appendices for completeness.

  12. Validity of the Aluminum Equivalent Approximation in Space Radiation Shielding

    NASA Technical Reports Server (NTRS)

    Badavi, Francis F.; Adams, Daniel O.; Wilson, John W.

    2009-01-01

    The origin of the aluminum equivalent shield approximation in space radiation analysis can be traced back to its roots in the early years of the NASA space programs (Mercury, Gemini and Apollo), wherein the primary radiobiological concern was the intense sources of ionizing radiation causing short-term effects that were thought to jeopardize the safety of the crew and hence the mission. Herein, it is shown that the aluminum equivalent shield approximation, although reasonably well suited for that time period and for the application for which it was developed, is of questionable usefulness to the radiobiological concerns of routine space operations of the 21st century, which will include long stays onboard the International Space Station (ISS) and perhaps the moon. This is especially true for a risk-based protection system, as appears imminent for deep space exploration, where the long-term effects of Galactic Cosmic Ray (GCR) exposure are of primary concern. The present analysis demonstrates that sufficiently large errors in the interior particle environment of a spacecraft result from the use of the aluminum equivalent approximation, and such approximations should be avoided in future astronaut risk estimates. In this study, the aluminum equivalent approximation is evaluated as a means for estimating the particle environment within a spacecraft structure induced by the GCR radiation field. For comparison, the two extremes of the GCR environment, the 1977 solar minimum and the 2001 solar maximum, are considered. These environments are coupled to the Langley Research Center (LaRC) deterministic ionized particle transport code High charge (Z) and Energy TRaNsport (HZETRN), which propagates the GCR spectra for elements with charges (Z) in the range 1 ≤ Z ≤ 28 (H-Ni) and secondary neutrons through selected target materials. The coupling of the GCR extremes to HZETRN allows for the examination of the induced environment within the interior of an idealized spacecraft

  13. The Verification of Influence of the Point "C" Position from Given Interval to Solving Systems with Highspeed Feedback

    NASA Astrophysics Data System (ADS)

    Bajčičáková, Ingrida; Jurovatá, Dominika

    2015-08-01

    This article deals with the design of an effective numerical scheme for solving three-point boundary value problems for second-order nonlinear singularly perturbed differential equations with initial conditions. In particular, it focuses on the analysis of the solutions when the point c of the given interval is not the centre of that interval. The obtained system of nonlinear algebraic equations is solved by the Newton-Raphson method in MATLAB. We also verify the convergence of approximate solutions of the original problem to the solution of the reduced problem, and discuss the solution of the given problem when the point c is at the middle of the interval.

  14. Optimal ABC inventory classification using interval programming

    NASA Astrophysics Data System (ADS)

    Rezaei, Jafar; Salimi, Negin

    2015-08-01

    Inventory classification is one of the most important activities in inventory management, whereby inventories are classified into three or more classes. Several inventory classifications have been proposed in the literature, almost all of which share two main shortcomings: they rely mainly on expert opinion to derive the importance of the classification criteria, which results in subjective classification, and they need precise item parameters before implementing the classification. While the problem has predominantly been considered a multi-criteria one, we examine it from a different perspective, proposing a novel optimisation model for ABC inventory classification in the form of an interval programming problem. The proposed interval programming model has two important features compared to the existing methods: it provides optimal results instead of an expert-based classification, and it does not require precise values of item parameters, which are not always available before classification. Finally, the proposed classification model is illustrated with a numerical example, and conclusions and suggestions for future work are presented.

  15. Spectrally-Invariant Approximation Within Atmospheric Radiative Transfer

    NASA Technical Reports Server (NTRS)

    Marshak, A.; Knyazikhin, Y.; Chiu, J. C.; Wiscombe, W. J.

    2011-01-01

    Certain algebraic combinations of single scattering albedo and solar radiation reflected from, or transmitted through, vegetation canopies do not vary with wavelength. These "spectrally invariant relationships" are the consequence of wavelength independence of the extinction coefficient and scattering phase function in vegetation. In general, this wavelength independence does not hold in the atmosphere, but in cloud-dominated atmospheres the total extinction and total scattering phase function vary only weakly with wavelength. This paper identifies the atmospheric conditions under which the spectrally invariant approximation can accurately describe the extinction and scattering properties of cloudy atmospheres. The validity of the assumptions and the accuracy of the approximation are tested with 1D radiative transfer calculations using publicly available radiative transfer models: Discrete Ordinate Radiative Transfer (DISORT) and Santa Barbara DISORT Atmospheric Radiative Transfer (SBDART). It is shown for cloudy atmospheres with cloud optical depth above 3, and for spectral intervals that exclude strong water vapor absorption, that the spectrally invariant relationships found in vegetation canopy radiative transfer are valid to better than 5%. The physics behind this phenomenon, its mathematical basis, and possible applications to remote sensing and climate are discussed.

  16. Growing degree hours - a simple, accurate, and precise protocol to approximate growing heat summation for grapevines

    NASA Astrophysics Data System (ADS)

    Gu, S.

    2016-08-01

    Despite its low accuracy and consistency, growing degree days (GDD) has been widely used to approximate growing heat summation (GHS) for regional classification and phenological prediction. GDD is usually calculated from the mean of daily minimum and maximum temperatures (GDD_mm) above a growing base temperature (T_gb). To determine approximation errors and accuracy, daily and cumulative GDD_mm was compared to GDD based on daily average temperature (GDD_avg), growing degree hours (GDH) based on hourly temperatures, and growing degree minutes (GDM) based on minute-by-minute temperatures. Finite error, due to the difference between measured and true temperatures above T_gb, is large in GDD_mm but is negligible in GDD_avg, GDH, and GDM, depending only upon the number of measured temperatures used for daily approximation. Hidden negative error, due to temperatures below T_gb being averaged over approximation intervals larger than the measuring interval, is large in GDD_mm and GDD_avg but is negligible in GDH and GDM. Both GDH and GDM improve GHS approximation accuracy over GDD_mm or GDD_avg by summation over multiple integration rectangles, reducing both finite and hidden negative errors. GDH is proposed as the standardized GHS approximation protocol, providing adequate accuracy and high precision independent of T_gb while requiring simple data recording and processing.
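    The hidden negative error is easy to see numerically (a synthetic sinusoidal day, not data from the study): when night temperatures dip below the base, the min/max form under-counts relative to the hourly sum:

      import numpy as np

      def gdd_minmax(tmin, tmax, t_base):
          # Classical growing degree days from the daily min/max mean.
          return max(0.0, 0.5 * (tmin + tmax) - t_base)

      def gdh_days(hourly, t_base):
          # Growing degree hours over 24 hourly readings, in day units;
          # sub-base hours are clipped to zero instead of averaged in.
          return np.clip(np.asarray(hourly) - t_base, 0.0, None).sum() / 24.0

      # A day swinging between 6 and 30 deg C around an 18 deg C mean:
      hours = np.arange(24)
      hourly = 18.0 + 12.0 * np.sin((hours - 9) * np.pi / 12.0)
      print(gdd_minmax(hourly.min(), hourly.max(), t_base=10.0))  # 8.0
      print(gdh_days(hourly, t_base=10.0))                        # > 8.0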

  17. Charged point defects in semiconductors and the supercell approximation

    NASA Astrophysics Data System (ADS)

    Lento, J.; Mozos, J.-L.; Nieminen, R. M.

    2002-03-01

    The effects of the supercell approximation in first-principles calculations for isolated, charged point defects in semiconductors and insulators are studied. The convergence of the Coulomb energy with respect to the supercell size is investigated. Quantitative numerical results for the standard uniform compensating charge and the newly proposed localized compensating charge scheme are presented for a prototypical defect, the doubly positive silicon self-interstitial.
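
    For orientation, the leading finite-size behavior commonly assumed for the energy of a charged defect in a cubic supercell of side L is the Makov-Payne form, quoted here as standard background rather than from this paper (sign conventions vary in the literature):

    ```latex
    % Makov-Payne leading finite-size behavior (background; conventions vary):
    % q = defect charge, \alpha_M = Madelung constant of the image-charge
    % lattice, \varepsilon = static dielectric constant, L = supercell edge.
    \[
      E(L) = E_{\infty} - \frac{q^{2}\,\alpha_{M}}{2\,\varepsilon\,L}
             + \mathcal{O}\!\left(L^{-3}\right).
    \]
    ```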

  18. Random-phase approximation as a macroscopic description

    NASA Astrophysics Data System (ADS)

    Strutinsky, V. M.; Abrosimov, V. I.

    1990-09-01

    Analysis of nuclear processes in terms of cross sections averaged over many microscopic channels, as in “poor resolution” experiments, corresponds to a macroscopic level of description. In this paper the energy-averaged strength function is considered. In order to determine the frequency dependence of this quantity, a statistically averaged single-particle density is introduced, for which equations analogous to the random phase approximation are obtained.

  19. The Oberbeck-Boussinesq approximation as a constitutive limit

    NASA Astrophysics Data System (ADS)

    Kagei, Yoshiyuki; Růžička, Michael

    2015-12-01

    We derive the usual Oberbeck-Boussinesq approximation as a constitutive limit of the full system describing the motion of a compressible, linearly viscous fluid. To this end, the starting system is written, using the Gibbs free energy, in the variables v, θ, and p. The Oberbeck-Boussinesq system is then obtained as the thermal expansion coefficient α and the isothermal compressibility coefficient β tend to zero.
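
    For reference, the limiting system (the standard Oberbeck-Boussinesq equations, quoted here in a common textbook form and in our own notation, not the paper's) can be written as:

    ```latex
    % Standard Oberbeck-Boussinesq system (textbook form; our notation):
    % v = velocity, \pi = kinematic pressure, \theta = temperature deviation,
    % \nu = viscosity, \kappa = thermal diffusivity, g = gravity, e_3 = vertical.
    \[
    \begin{aligned}
      \operatorname{div}\mathbf{v} &= 0,\\
      \partial_t\mathbf{v} + (\mathbf{v}\cdot\nabla)\mathbf{v}
        &= -\nabla\pi + \nu\,\Delta\mathbf{v} + \alpha\,g\,\theta\,\mathbf{e}_3,\\
      \partial_t\theta + \mathbf{v}\cdot\nabla\theta &= \kappa\,\Delta\theta.
    \end{aligned}
    \]
    ```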

  20. Approximation of the Garrett-Munk internal wave spectrum

    NASA Astrophysics Data System (ADS)

    Ibragimov, Ranis N.; Vatchev, Vesselin

    2011-12-01

    The spectral models of Garrett and Munk (1972, 1975) continue to be a useful description of the oceanic energy spectrum. However, there are several ambiguities (many of them summarized, for example, in Levine, 2002) that make the spectrum difficult to use, e.g., in dissipation modeling (e.g., Hibiya et al., 1996; Winters and D'Asaro, 1997). An approximate spectral formulation, obtained by means of a modified running-median method, is presented in this work.
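
    Since the formulation rests on running medians, a minimal sketch of median smoothing of a noisy synthetic spectrum (illustrating the tool only, not the Garrett-Munk model itself) is:

    ```python
    import numpy as np
    from scipy.signal import medfilt

    # Synthetic stand-in for a noisy empirical spectrum: a smooth curve plus
    # a few spurious spikes. Illustrates running-median smoothing only; this
    # is not the Garrett-Munk formulation itself.
    rng = np.random.default_rng(0)
    freq = np.linspace(0.1, 10.0, 500)
    truth = 1.0 / (1.0 + freq**2)
    spectrum = truth + 0.01 * rng.standard_normal(freq.size)
    spectrum[rng.integers(0, freq.size, 10)] += 0.5   # isolated spikes

    smoothed = medfilt(spectrum, kernel_size=11)      # running median

    # The median window discards isolated spikes that a mean would smear.
    print(f"max error, raw     : {np.max(np.abs(spectrum - truth)):.3f}")
    print(f"max error, medianed: {np.max(np.abs(smoothed - truth)):.3f}")
    ```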

  1. The Oberbeck-Boussinesq approximation as a constitutive limit

    NASA Astrophysics Data System (ADS)

    Kagei, Yoshiyuki; Růžička, Michael

    2016-09-01

    We derive the usual Oberbeck-Boussinesq approximation as a constitutive limit of the full system describing the motion of a compressible, linearly viscous fluid. To this end, the starting system is written, using the Gibbs free energy, in the variables v, θ, and p. The Oberbeck-Boussinesq system is then obtained as the thermal expansion coefficient α and the isothermal compressibility coefficient β tend to zero.

  2. Exploring the Random Phase Approximation for materials chemistry and physics

    SciTech Connect

    Ruzsinsky, Adrienn

    2015-03-23

    This proposal focuses on improved accuracy for the delicate energy differences of interest in materials chemistry, using the fully nonlocal random phase approximation (RPA) in a density functional context. Could RPA or RPA-like approaches become standard methods of first-principles electronic-structure calculation for atoms, molecules, solids, surfaces, and nanostructures? Direct RPA includes the full exact exchange energy and a nonlocal correlation energy from the occupied and unoccupied Kohn-Sham orbitals and orbital energies, with an approximate but universal description of long-range van der Waals attraction. RPA also improves upon simple pair-wise interaction potentials and vdW density functional theory. This improvement is essential to capture accurate energy differences in metals and in different phases of semiconductors. The applications in this proposal are challenges for the simpler approximations of Kohn-Sham density functional theory, which are part of the current “standard model” for quantum chemistry and condensed matter physics. Within this project we have already applied the RPA to structural phase transitions in semiconductors, metals, and molecules. Although the RPA predicts accurate structural parameters, it has not proven equally accurate for all kinds of structural phase transitions; a correction to the RPA can therefore be necessary in many cases. We are currently implementing and testing a nonempirical, spatially nonlocal, frequency-dependent model for the exchange-correlation kernel in the adiabatic-connection fluctuation-dissipation context. This kernel predicts a nearly exact correlation energy for the electron gas of uniform density. If RPA or RPA-like approaches prove to be reliably accurate, then expected increases in computer power may make them standard in the electronic-structure calculations of the future.
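
    For orientation, the direct RPA correlation energy in the adiabatic-connection fluctuation-dissipation (ACFD) framework is commonly written as follows (standard textbook expression, in our notation):

    ```latex
    % Direct RPA correlation energy in the ACFD framework (standard form):
    % \chi_0 = Kohn-Sham density response at imaginary frequency i\omega,
    % v = bare Coulomb interaction; beyond-RPA schemes such as the one in
    % this proposal additionally introduce an xc kernel f_xc.
    \[
      E_c^{\mathrm{RPA}}
      = \frac{1}{2\pi}\int_{0}^{\infty} d\omega\,
        \operatorname{Tr}\!\left[\ln\!\bigl(1 - \chi_0(i\omega)\,v\bigr)
        + \chi_0(i\omega)\,v\right].
    \]
    ```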

  3. Interval estimates for closure-phase and closure-amplitude imaging in radio astronomy

    NASA Technical Reports Server (NTRS)

    Kreinovich, Vladik; Bernat, Andrew; Kosheleva, Olga; Finkel'shtejn, Andrej

    1992-01-01

    Interval estimates for closure-phase and closure-amplitude imaging that enable the reconstruction of a radio image from the results of approximate measurements are presented. Even if the intervals for the measured values are known, the precision of the reconstruction cannot be obtained by standard interval methods, because phase values live on a circle rather than on the real line. If each phase θ(x̄) is measured with precision ε, so that the closure phase θ(x̄) + θ(ȳ) − θ(x̄ + ȳ) is known with precision 3ε, then from these measurements θ can be reconstructed with precision 6ε. Similar estimates are given for the closure amplitude.
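
    A small numeric sketch of the error bookkeeping described above, with invented phase values and the worst-case perturbation pattern:

    ```python
    import math

    # Sketch of closure-phase error propagation: each measured phase carries
    # precision eps, so the closure phase theta(x) + theta(y) - theta(x+y)
    # carries precision 3*eps; the abstract's result is that the phases are
    # then recoverable to 6*eps. All values here are invented.

    def wrap(a):
        """Wrap an angle onto (-pi, pi], since phases live on a circle."""
        return math.atan2(math.sin(a), math.cos(a))

    eps = 0.01                               # per-measurement precision (rad)
    true = {"x": 0.7, "y": -1.2, "x+y": 0.7 - 1.2}

    # Worst case: the two single phases err one way, the sum phase the other.
    meas = {"x": true["x"] + eps,
            "y": true["y"] + eps,
            "x+y": true["x+y"] - eps}

    closure_true = wrap(true["x"] + true["y"] - true["x+y"])
    closure_meas = wrap(meas["x"] + meas["y"] - meas["x+y"])
    err = abs(wrap(closure_meas - closure_true))
    print(f"closure-phase error: {err:.4f} (bounded by 3*eps = {3 * eps})")
    ```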

  4. Slowly rotating scalar field wormholes: The second order approximation

    SciTech Connect

    Kashargin, P. E.; Sushkov, S. V.

    2008-09-15

    We discuss rotating wormholes in general relativity with a scalar field with negative kinetic energy. To solve the problem, we use the assumption of slow rotation. The role of the small dimensionless parameter is played by the ratio of the linear rotation velocity of the wormhole's throat to the velocity of light. We construct the rotating wormhole solution in the second-order approximation with respect to this small parameter. The analysis shows that the asymptotic mass of the rotating wormhole is greater than that of the nonrotating one, and that the null energy condition violation in the rotating wormhole spacetime is weaker than in the nonrotating one.

  5. Multigroup Free-atom Doppler-broadening Approximation. Experiment

    SciTech Connect

    Gray, Mark Girard

    2015-11-06

    The multigroup energy Doppler-broadening approximation agrees with continuous-energy Doppler broadening generally to within ten percent for the total cross sections of 1H, 56Fe, and 235U at 250 lanl. Although this is probably not good enough for broadening from room temperature through the entire temperature range in production use, it is better than any interpolation scheme between temperatures proposed to date, and it may be good enough for extrapolation from high temperatures. The method deserves further study, since additional improvements are possible.

  6. Compton scattering from positronium and validity of the impulse approximation

    SciTech Connect

    Kaliman, Z.; Pisk, K.; Pratt, R. H.

    2011-05-15

    The cross sections for Compton scattering from positronium are calculated in the range from 1 to 100 keV incident photon energy. The calculations are based on the A² term of the photon-electron or photon-positron interaction. Unlike in hydrogen, the scattering occurs from two centers and the interference effect plays an important role for energies below 8 keV. Because of the interference, the criterion for validity of the impulse approximation for positronium is more restrictive compared to that for hydrogen.

  7. Benchmarking mean-field approximations to level densities

    NASA Astrophysics Data System (ADS)

    Alhassid, Y.; Bertsch, G. F.; Gilbreth, C. N.; Nakada, H.

    2016-04-01

    We assess the accuracy of finite-temperature mean-field theory using as a standard the Hamiltonian and model space of the shell model Monte Carlo calculations. Two examples are considered: the nucleus 162Dy, representing a heavy deformed nucleus, and 148Sm, representing a nearby heavy spherical nucleus with strong pairing correlations. The errors inherent in the finite-temperature Hartree-Fock and Hartree-Fock-Bogoliubov approximations are analyzed by comparing the entropies of the grand canonical and canonical ensembles, as well as the level density at the neutron resonance threshold, with shell model Monte Carlo calculations, which are accurate up to well-controlled statistical errors. The main weak points in the mean-field treatments are found to be: (i) the extraction of number-projected densities from the grand canonical ensembles, and (ii) the symmetry breaking by deformation or by the pairing condensate. In the absence of a pairing condensate, we confirm that the usual saddle-point approximation to extract the number-projected densities is not a significant source of error compared to other errors inherent to the mean-field theory. We also present an alternative formulation of the saddle-point approximation that makes direct use of an approximate particle-number projection and avoids computing the usual three-dimensional Jacobian of the saddle-point integration. We find that the pairing condensate is less amenable to approximate particle-number projection methods because of the explicit violation of particle-number conservation in the pairing condensate. Nevertheless, the Hartree-Fock-Bogoliubov theory is accurate to less than one unit of entropy for 148Sm at the neutron threshold energy, which is above the pairing phase transition. This result provides support for the commonly used "back-shift" approximation, treating pairing as only affecting the excitation energy scale. When the ground state is strongly deformed, the Hartree-Fock entropy is significantly

  8. Multiple-interval timing in rats: Performance on two-valued mixed fixed-interval schedules.

    PubMed

    Whitaker, S; Lowe, C F; Wearden, J H

    2003-10-01

    Three experiments studied timing in rats on 2-valued mixed-fixed-interval schedules, with equally probable components, Fixed-Interval S and Fixed-Interval L (FI S and FI L, respectively). When the L:S ratio was greater than 4, 2 distinct response peaks appeared close to FI S and FI L, and data could be well fitted by the sum of 2 Gaussian curves. When the L:S ratio was less than 4, only 1 response peak was usually visible, but nonlinear regression often identified separate sources of behavioral control, by FI S and FI L, although control by FI L dominated. Data were used to test ideas derived from scalar expectancy theory, the behavioral theory of timing, and learning to time.
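
    A hedged sketch of the curve-fitting step described above (a sum of two Gaussians fitted to a response-rate profile), using synthetic data in place of the experimental records:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Sketch of the analysis step named in the abstract: fit the sum of two
    # Gaussians to a response-rate-vs-time profile. The data are synthetic
    # (peaks placed at invented FI 30 s and FI 120 s values), not the rats'.

    def two_gaussians(t, a1, mu1, s1, a2, mu2, s2):
        return (a1 * np.exp(-0.5 * ((t - mu1) / s1) ** 2)
                + a2 * np.exp(-0.5 * ((t - mu2) / s2) ** 2))

    rng = np.random.default_rng(1)
    t = np.linspace(0, 180, 181)
    rate = (two_gaussians(t, 1.0, 30, 6, 0.6, 120, 24)
            + 0.03 * rng.standard_normal(t.size))

    p0 = [1, 25, 5, 0.5, 110, 20]   # rough initial guesses near the two FIs
    popt, _ = curve_fit(two_gaussians, t, rate, p0=p0)
    print(f"fitted peak times: {popt[1]:.1f} s and {popt[4]:.1f} s")
    ```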

  9. Length of the current interglacial period and interglacial intervals of the last million years

    NASA Astrophysics Data System (ADS)

    Dergachev, V. A.

    2015-12-01

    It has been established that the long-term cyclical oscillations of the Earth's global climate between glacial and interglacial states over the last million years respond to cyclical oscillations of the orbital parameters of the Earth. Cold glacial states with a period of approximately 100 ka give way to shorter warming intervals around 10-12 ka long. The current interglacial period, the so-called Holocene, started on Earth roughly 10 ka ago. The length of the current interglacial period and the causes of the climate change over the last approximately 50 years arouse sharp debates connected with the growing anthropogenic emission of greenhouse gases. To estimate the length of the current interglacial period, the interglacial intervals near ~400 ka (MIS-11) and ~800 ka (MIS-19) are analyzed as its probable analogs.

  10. Improved Power Saving Mechanism to Increase Unavailability Interval in IEEE 802.16e Networks

    NASA Astrophysics Data System (ADS)

    Lee, Kyunghye; Mun, Youngsong

    To manage limited energy resources efficiently, IEEE 802.16e specifies sleep mode operation. Since there can be no communication between the mobile station (MS) and the serving base station (BS) during the unavailability interval, the MS can power down its physical operation components. We propose an improved power saving mechanism (iPSM) which effectively increases the unavailability interval of Type I and Type II power saving classes (PSCs) activated in an MS. After investigating the number of frames in the unavailability interval of each Type II PSC when used with Type I PSC, the iPSM chooses the Type II PSC that yields the maximum number of frames in the unavailability interval. Performance evaluation confirms that the proposed scheme is very effective.
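
    A toy frame-level sketch of the selection rule described above (pick the Type II PSC whose sleep frames overlap most with the Type I PSC's); all window lengths are invented and the real 802.16e state machine is not modelled:

    ```python
    # Toy sketch of the selection idea only: among candidate Type II PSCs
    # (fixed sleep/listen cycles), pick the one that maximizes the frames
    # unavailable jointly with a Type I PSC (binary-exponential sleep).
    # All window lengths are invented; this is not the 802.16e protocol.

    HORIZON = 512  # frames simulated

    def type1_asleep(horizon, t_min=2, t_max=32, listen=1):
        """Frames where a Type I PSC sleeps: doubling sleep windows,
        each followed by a listening window."""
        asleep, f, window = set(), 0, t_min
        while f < horizon:
            asleep.update(range(f, min(f + window, horizon)))
            f += window + listen
            window = min(2 * window, t_max)
        return asleep

    def type2_asleep(horizon, sleep, listen):
        """Frames where a Type II PSC sleeps: fixed alternating windows."""
        asleep, f = set(), 0
        while f < horizon:
            asleep.update(range(f, min(f + sleep, horizon)))
            f += sleep + listen
        return asleep

    t1 = type1_asleep(HORIZON)
    candidates = {(8, 2): 0, (16, 4): 0, (32, 8): 0}
    for sleep, listen in candidates:
        candidates[(sleep, listen)] = len(t1 & type2_asleep(HORIZON, sleep, listen))

    best = max(candidates, key=candidates.get)
    print("joint unavailable frames per candidate:", candidates)
    print("chosen Type II PSC (sleep, listen):", best)
    ```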

  11. The prelaying interval of emperor geese on the Yukon-Kuskokwim Delta, Alaska

    USGS Publications Warehouse

    Hupp, J.W.; Schmutz, J.A.; Ely, C.R.

    2006-01-01

    We marked 136 female Emperor Geese (Chen canagica) in western Alaska with VHF or satellite (PTT) transmitters from 1999 to 2003 to monitor their spring arrival and nest initiation dates on the Yukon Delta, and to estimate prelaying interval lengths once at the nesting area. Ninety-two females with functional transmitters returned to the Yukon Delta in the spring after they were marked, and we located the nests of 35 of these individuals. Prelaying intervals were influenced by when snow melted in the spring and individual arrival dates on the Yukon Delta. The median prelaying interval was 15 days (range = 12-19 days) in a year when snow melted relatively late, and 11 days (range = 4-16 days) in two warmer years when snow melted earlier. In years when snow melted earlier, prelaying intervals of <12 days for 11 of 15 females suggested they initiated rapid follicle development on spring staging areas. The prelaying interval declined by approximately 0.4 days and nest initiation date increased approximately 0.5 days for each day a female delayed her arrival. Thus, females that arrived first on the Yukon Delta had prelaying intervals up to four days longer, yet they nested up to five days earlier, than females that arrived last. The proximity of spring staging areas on the Alaska Peninsula to nesting areas on the Yukon Delta may enable Emperor Geese to alter timing of follicle development depending on annual conditions, and to invest nutrients acquired from both areas in eggs during their formation. Plasticity in timing of follicle development is likely advantageous in a variable environment where melting of snow cover in the spring can vary by 2-3 weeks annually. ?? The Cooper Ornithological Society 2006.

  12. Physiological Responses to High-Intensity Interval Exercise Differing in Interval Duration.

    PubMed

    Tucker, Wesley J; Sawyer, Brandon J; Jarrett, Catherine L; Bhammar, Dharini M; Gaesser, Glenn A

    2015-12-01

    We determined the oxygen uptake (V̇O2), heart rate (HR), and blood lactate responses to 2 high-intensity interval exercise protocols differing in interval length. On separate days, 14 recreationally active males performed a 4 × 4 (four 4-minute intervals at 90-95% HRpeak, separated by 3-minute recovery at 50 W) and a 16 × 1 (sixteen 1-minute intervals at 90-95% HRpeak, separated by 1-minute recovery at 50 W) protocol on a cycle ergometer. The 4 × 4 elicited a higher mean V̇O2 (2.44 ± 0.4 vs. 2.36 ± 0.4 L·min⁻¹) and "peak" V̇O2 (90-99% vs. 76-85% V̇O2peak) and HR (95-98% HRpeak vs. 81-95% HRpeak) during the high-intensity intervals. Average power maintained was higher for the 16 × 1 (241 ± 45 vs. 204 ± 37 W), and recovery-interval V̇O2 and HR were higher during the 16 × 1. No differences were observed for blood lactate concentrations at the midpoint (12.1 ± 2.2 vs. 10.8 ± 3.1 mmol·L⁻¹) and end (10.6 ± 1.5 vs. 10.6 ± 2.4 mmol·L⁻¹) of the protocols or ratings of perceived exertion (7.0 ± 1.6 vs. 7.0 ± 1.4) and Physical Activity Enjoyment Scale scores (91 ± 15 vs. 93 ± 12). Despite a 4-fold difference in interval duration that produced greater between-interval transitions in V̇O2 and HR and slightly higher mean V̇O2 during the 4 × 4, mean HR during each protocol was the same, and both protocols were rated similarly for perceived exertion and enjoyment. The major difference was that power output had to be reduced during the 4 × 4 protocol to maintain the desired HR.

  13. The Cell Cycle Switch Computes Approximate Majority

    NASA Astrophysics Data System (ADS)

    Cardelli, Luca; Csikász-Nagy, Attila

    2012-09-01

    Both computational and biological systems have to make decisions about switching from one state to another. The `Approximate Majority' computational algorithm provides the asymptotically fastest way for all members of a population to reach a common decision between two possible outcomes, where the decision approximately matches the initial relative majority. The network that regulates mitotic entry in the eukaryotic cell cycle also makes a decision before it induces early mitotic processes. Here we show that the switch from inactive to active forms of the mitosis-promoting Cyclin Dependent Kinases is driven by a system that is related in both structure and dynamics to the Approximate Majority computation. We investigate the behavior of these two switches by deterministic, stochastic, and probabilistic methods and show that the steady states and temporal dynamics of the two systems are similar and that they are interchangeable as components of oscillatory networks.
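
    A minimal stochastic simulation of the three-state Approximate Majority population protocol referenced above (population sizes invented):

    ```python
    import random

    # Minimal simulation of the three-state Approximate Majority protocol:
    # when X meets Y, the responder becomes undecided ("B" for blank); a
    # blank responder adopts the state of a decided initiator. The population
    # converges to the initial majority with high probability.

    def approximate_majority(n_x, n_y, seed=0):
        rng = random.Random(seed)
        pop = ["X"] * n_x + ["Y"] * n_y
        steps = 0
        while "X" in pop and "Y" in pop:
            i, j = rng.sample(range(len(pop)), 2)   # initiator, responder
            a, b = pop[i], pop[j]
            if {a, b} == {"X", "Y"}:
                pop[j] = "B"          # conflict: responder goes blank
            elif a in ("X", "Y") and b == "B":
                pop[j] = a            # blank recruited by a decided agent
            steps += 1
        return ("X" if "X" in pop else "Y"), steps

    winner, steps = approximate_majority(60, 40)
    print(f"winner = {winner} after {steps} interactions")
    ```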

  14. Fast wavelet based sparse approximate inverse preconditioner

    SciTech Connect

    Wan, W.L.

    1996-12-31

    Incomplete LU factorization is a robust preconditioner for both general and PDE problems, but unfortunately it is not easy to parallelize. Recent studies by Huckle and Grote and by Chow and Saad showed that the sparse approximate inverse could be a potential alternative that is readily parallelizable. However, for the special class of matrices A arising from elliptic PDE problems, their preconditioners are not optimal, in the sense that convergence is not independent of the mesh size. A reason may be that no good sparse approximate inverse exists for the dense inverse matrix. Our observation is that for this kind of matrix, the inverse entries typically exhibit piecewise smooth variation. We can take advantage of this fact and use wavelet compression techniques to construct a better sparse approximate inverse preconditioner. We show numerically that our approach is effective for this kind of matrix.
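
    A toy sketch of the idea, assuming a 1D Laplacian and a hand-rolled one-level Haar transform (not the authors' construction): compress the dense inverse in the wavelet basis by thresholding small entries:

    ```python
    import numpy as np

    # Toy sketch only: the inverse of a 1D Laplacian is dense but piecewise
    # smooth, so most of its entries in a (Haar) wavelet basis are tiny and
    # can be thresholded, leaving a sparse approximate inverse.

    def haar_matrix(n):
        """One-level orthonormal Haar transform matrix (n must be even)."""
        h = np.zeros((n, n))
        for k in range(n // 2):
            h[k, 2 * k] = h[k, 2 * k + 1] = 1 / np.sqrt(2)    # averages
            h[n // 2 + k, 2 * k] = 1 / np.sqrt(2)             # details
            h[n // 2 + k, 2 * k + 1] = -1 / np.sqrt(2)
        return h

    n = 64
    A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # 1D Laplacian
    Ainv = np.linalg.inv(A)                               # dense inverse

    H = haar_matrix(n)
    W = H @ Ainv @ H.T                                # inverse in wavelet basis
    W[np.abs(W) < 1e-2 * np.abs(W).max()] = 0.0       # threshold small entries
    M = H.T @ W @ H                                   # sparse(ish) approx inverse

    nnz = np.count_nonzero(W)
    print(f"kept {nnz}/{n * n} wavelet coefficients "
          f"({100 * nnz / (n * n):.1f}%), "
          f"residual ||I - M A|| = {np.linalg.norm(np.eye(n) - M @ A):.3f}")
    ```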

  15. Analyzing the errors of DFT approximations for compressed water systems

    NASA Astrophysics Data System (ADS)

    Alfè, D.; Bartók, A. P.; Csányi, G.; Gillan, M. J.

    2014-07-01

    We report an extensive study of the errors of density functional theory (DFT) approximations for compressed water systems. The approximations studied are based on the widely used PBE and BLYP exchange-correlation functionals, and we characterize their errors before and after correction for 1- and 2-body errors, the corrections being performed using the methods of Gaussian approximation potentials. The errors of the uncorrected and corrected approximations are investigated for two related types of water system: first, the compressed liquid at temperature 420 K and density 1.245 g/cm3 where the experimental pressure is 15 kilobars; second, thermal samples of compressed water clusters from the trimer to the 27-mer. For the liquid, we report four first-principles molecular dynamics simulations, two generated with the uncorrected PBE and BLYP approximations and a further two with their 1- and 2-body corrected counterparts. The errors of the simulations are characterized by comparing with experimental data for the pressure, with neutron-diffraction data for the three radial distribution functions, and with quantum Monte Carlo (QMC) benchmarks for the energies of sets of configurations of the liquid in periodic boundary conditions. The DFT errors of the configuration samples of compressed water clusters are computed using QMC benchmarks. We find that the 2-body and beyond-2-body errors in the liquid are closely related to similar errors exhibited by the clusters. For both the liquid and the clusters, beyond-2-body errors of DFT make a substantial contribution to the overall errors, so that correction for 1- and 2-body errors does not suffice to give a satisfactory description. For BLYP, a recent representation of 3-body energies due to Medders, Babin, and Paesani [J. Chem. Theory Comput. 9, 1103 (2013)] gives a reasonably good way of correcting for beyond-2-body errors, after which the remaining errors are typically 0.5 mEh ≃ 15 meV/monomer for the liquid and the

  16. Analyzing the errors of DFT approximations for compressed water systems.

    PubMed

    Alfè, D; Bartók, A P; Csányi, G; Gillan, M J

    2014-07-01

    We report an extensive study of the errors of density functional theory (DFT) approximations for compressed water systems. The approximations studied are based on the widely used PBE and BLYP exchange-correlation functionals, and we characterize their errors before and after correction for 1- and 2-body errors, the corrections being performed using the methods of Gaussian approximation potentials. The errors of the uncorrected and corrected approximations are investigated for two related types of water system: first, the compressed liquid at temperature 420 K and density 1.245 g/cm3 where the experimental pressure is 15 kilobars; second, thermal samples of compressed water clusters from the trimer to the 27-mer. For the liquid, we report four first-principles molecular dynamics simulations, two generated with the uncorrected PBE and BLYP approximations and a further two with their 1- and 2-body corrected counterparts. The errors of the simulations are characterized by comparing with experimental data for the pressure, with neutron-diffraction data for the three radial distribution functions, and with quantum Monte Carlo (QMC) benchmarks for the energies of sets of configurations of the liquid in periodic boundary conditions. The DFT errors of the configuration samples of compressed water clusters are computed using QMC benchmarks. We find that the 2-body and beyond-2-body errors in the liquid are closely related to similar errors exhibited by the clusters. For both the liquid and the clusters, beyond-2-body errors of DFT make a substantial contribution to the overall errors, so that correction for 1- and 2-body errors does not suffice to give a satisfactory description. For BLYP, a recent representation of 3-body energies due to Medders, Babin, and Paesani [J. Chem. Theory Comput. 9, 1103 (2013)] gives a reasonably good way of correcting for beyond-2-body errors, after which the remaining errors are typically 0.5 mEh ≃ 15 meV/monomer for the liquid and the

  17. A primer on confidence intervals in psychopharmacology.

    PubMed

    Andrade, Chittaranjan

    2015-02-01

    Research papers and research summaries frequently present results in the form of data accompanied by 95% confidence intervals (CIs). Not all students and clinicians know how to interpret CIs. This article provides a nontechnical, nonmathematical discussion on how to understand and glean information from CIs; all explanations are accompanied by simple examples. A statistically accurate explanation about CIs is also provided. CIs are differentiated from standard deviations, standard errors, and confidence levels. The interpretation of narrow and wide CIs is discussed. Factors that influence the width of a CI are listed. Explanations are provided for how CIs can be used to assess statistical significance. The significance of overlapping and nonoverlapping CIs is considered. It is concluded that CIs are far more informative than, say, mere P values when drawing conclusions about a result.
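
    A worked micro-example in the spirit of the article, with invented numbers: a mean difference and its 95% CI computed from the standard error, plus the link to statistical significance:

    ```python
    # Micro-example (invented numbers): 95% CI for a mean difference.
    # If the CI excludes 0, the difference is statistically significant at
    # roughly p < 0.05, which is the CI-to-P-value link noted in the article.

    mean_diff = 3.2   # e.g., points of improvement on a rating scale
    se = 1.4          # standard error of the mean difference
    z = 1.96          # normal quantile for a 95% interval

    lo, hi = mean_diff - z * se, mean_diff + z * se
    print(f"95% CI: ({lo:.2f}, {hi:.2f})")
    print("significant at ~5% level:", not (lo <= 0.0 <= hi))
    ```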

  18. Congruence Approximations for Entropy Endowed Hyperbolic Systems

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.; Saini, Subhash (Technical Monitor)

    1998-01-01

    Building upon the standard symmetrization theory for hyperbolic systems of conservation laws, congruence properties of the symmetrized system are explored. These congruence properties suggest variants of several stabilized numerical discretization procedures for hyperbolic equations (upwind finite-volume, Galerkin least-squares, discontinuous Galerkin) that benefit computationally from congruence approximation. Specifically, it becomes straightforward to construct the spatial discretization and Jacobian linearization for these schemes (given a small amount of derivative information) for possible use in Newton's method, discrete optimization, homotopy algorithms, etc. Some examples will be given for the compressible Euler equations and the nonrelativistic MHD equations using linear and quadratic spatial approximation.
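
    For background, the standard symmetrization alluded to above can be stated in generic notation (not the paper's): given a convex entropy for the conservation-law system, the change to entropy variables renders the system symmetric hyperbolic:

    ```latex
    % Standard entropy symmetrization (generic notation, quoted as background):
    % for u_t + \partial_{x_i} f_i(u) = 0 with convex entropy U(u), define
    % entropy variables v = (\partial U/\partial u)^T; then
    \[
      \widetilde{A}_0\,\partial_t v + \widetilde{A}_i\,\partial_{x_i} v = 0,
      \qquad
      \widetilde{A}_0 = \frac{\partial u}{\partial v}\ \text{(SPD)},
      \qquad
      \widetilde{A}_i = \frac{\partial f_i}{\partial v}\ \text{(symmetric)},
    \]
    % and the congruence properties studied in the paper act on these
    % symmetrized matrices.
    ```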

  19. Approximate formulas for moderately small eikonal amplitudes

    NASA Astrophysics Data System (ADS)

    Kisselev, A. V.

    2016-08-01

    We consider the eikonal approximation for moderately small scattering amplitudes. To find numerical estimates of these approximations, we derive formulas that contain no Bessel functions and consequently no rapidly oscillating integrands. To obtain these formulas, we study improper integrals of the first kind containing products of the Bessel functions J0(z). We generalize the expression with four functions J0(z) and also find expressions for the integrals with products of five and six Bessel functions. We also generalize a known formula for the improper integral with two functions Jυ(az) to the case of noninteger υ and complex a.

  20. ANALOG QUANTUM NEURON FOR FUNCTIONS APPROXIMATION

    SciTech Connect

    A. EZHOV; A. KHROMOV; G. BERMAN

    2001-05-01

    We describe a system able to perform universal stochastic approximations of continuous multivariable functions in both a neuron-like and a quantum manner. The implementation of this model in the form of a multi-barrier, multiple-slit system has been proposed earlier. For the simplified waveguide variant of this model it is proved that the system can approximate any continuous function of many variables. This theorem is also applied to the 2-input quantum neural model analogous to the schemes developed for quantum control.