Science.gov

Sample records for energy interval approximation

  1. A comparison of approximate interval estimators for the Bernoulli parameter

    NASA Technical Reports Server (NTRS)

    Leemis, Lawrence; Trivedi, Kishor S.

    1993-01-01

    The goal of this paper is to compare the accuracy of two approximate confidence interval estimators for the Bernoulli parameter p. The approximate confidence intervals are based on the normal and Poisson approximations to the binomial distribution. Charts are given to indicate which approximation is appropriate for certain sample sizes and point estimators.
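
    As an illustration of the two interval constructions being compared, here is a minimal Python sketch (our own, not the authors' code; the paper's charts are not reproduced): a Wald interval from the normal approximation next to an interval from the Poisson approximation via the chi-square (Garwood) bounds.

        # Hedged sketch (not the authors' code): two approximate 95% CIs for a
        # Bernoulli parameter p, one from the normal approximation and one from
        # the Poisson approximation to the binomial distribution.
        import numpy as np
        from scipy import stats

        def normal_ci(y, n, alpha=0.05):
            """Wald interval based on the normal approximation."""
            p_hat = y / n
            z = stats.norm.ppf(1 - alpha / 2)
            half = z * np.sqrt(p_hat * (1 - p_hat) / n)
            return max(0.0, p_hat - half), min(1.0, p_hat + half)

        def poisson_ci(y, n, alpha=0.05):
            """Interval treating y as Poisson(n*p), via chi-square (Garwood) bounds."""
            lo = 0.5 * stats.chi2.ppf(alpha / 2, 2 * y) if y > 0 else 0.0
            hi = 0.5 * stats.chi2.ppf(1 - alpha / 2, 2 * y + 2)
            return lo / n, min(1.0, hi / n)

        y, n = 3, 50                      # 3 successes in 50 trials: small p
        print(normal_ci(y, n))            # ~(0.000, 0.126)
        print(poisson_ci(y, n))           # ~(0.012, 0.175)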

  2. Approximating Confidence Intervals for Factor Loadings.

    ERIC Educational Resources Information Center

    Lambert, Zarrel V.; And Others

    1991-01-01

    A method is presented that eliminates some interpretational limitations arising from assumptions implicit in the use of arbitrary rules of thumb to interpret exploratory factor analytic results. The bootstrap method is presented as a way of approximating sampling distributions of estimated factor loadings. Simulated datasets illustrate the…
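
    A minimal Python sketch of the bootstrap idea described above, on synthetic data. For brevity the "loading" is taken to be the first principal-component weight of one variable; the paper's exact factor-analytic procedure is assumed, not reproduced.

        # Hedged sketch of bootstrapping the sampling distribution of a loading.
        # Synthetic one-factor data; the "loading" is the first principal-component
        # weight of variable 0, standing in for a factor-analytic estimate.
        import numpy as np

        rng = np.random.default_rng(0)
        n, p = 200, 5
        X = rng.normal(size=(n, 1)) @ rng.normal(size=(1, p)) + 0.5 * rng.normal(size=(n, p))

        def loading(data):
            data = data - data.mean(axis=0)
            _, _, vt = np.linalg.svd(data, full_matrices=False)
            return abs(vt[0, 0])          # sign-fixed weight of variable 0

        boot = np.array([loading(X[rng.integers(0, n, size=n)]) for _ in range(2000)])
        lo, hi = np.percentile(boot, [2.5, 97.5])
        print(f"loading = {loading(X):.3f}, 95% bootstrap CI = ({lo:.3f}, {hi:.3f})")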

  3. A Comparison of Approximate Interval Estimators for the Bernoulli Parameter

    DTIC Science & Technology

    1993-12-01

    The goal of this paper is to compare the accuracy of two approximate confidence interval estimators for the Bernoulli parameter p. The approximate...is appropriate for certain sample sizes and point estimators. Keywords: Confidence interval, Binomial distribution, Bernoulli distribution, Poisson distribution.

  4. Convergence of the natural approximations of piecewise monotone interval maps.

    PubMed

    Haydn, Nicolai

    2004-06-01

    We consider piecewise monotone interval mappings which are topologically mixing and satisfy the Markov property. It has previously been shown that the invariant densities of the natural approximations converge exponentially fast in the uniform pointwise topology to the invariant density of the given map provided its derivative is piecewise Lipschitz continuous. We provide an example of a map which is Lipschitz continuous and for which the densities converge in the bounded variation norm only at a logarithmic rate. This shows that in general one cannot expect exponential convergence in the bounded variation norm. Here we prove that if the derivative of the interval map is Hölder continuous and its variation is well approximable (γ-uniform variation for γ>0), then the densities converge exponentially fast in the bounded variation norm.

  5. Constructing Approximate Confidence Intervals for Parameters with Structural Equation Models

    ERIC Educational Resources Information Center

    Cheung, Mike W.-L.

    2009-01-01

    Confidence intervals (CIs) for parameters are usually constructed based on the estimated standard errors. These are known as Wald CIs. This article argues that likelihood-based CIs (CIs based on likelihood ratio statistics) are often preferred to Wald CIs. It shows how the likelihood-based CIs and the Wald CIs for many statistics and psychometric…

  6. Confidence and coverage for Bland-Altman limits of agreement and their approximate confidence intervals.

    PubMed

    Carkeet, Andrew; Goh, Yee Teng

    2016-09-01

    Bland and Altman described approximate methods in 1986 and 1999 for calculating confidence limits for their 95% limits of agreement, approximations which assume large subject numbers. In this paper, these approximations are compared with exact confidence intervals calculated using two-sided tolerance intervals for a normal distribution. The approximations are compared in terms of the tolerance factors themselves but also in terms of the exact confidence limits and the exact limits of agreement coverage corresponding to the approximate confidence interval methods. Using similar methods, the 50th percentile of the tolerance interval is compared with the k values of 1.96 and 2, which Bland and Altman used to define limits of agreement (i.e. d̄ ± 1.96Sd and d̄ ± 2Sd). For limits of agreement outer confidence intervals, Bland and Altman's approximations are too permissive for sample sizes <40 (1999 approximation) and <76 (1986 approximation). For inner confidence limits the approximations are poorer, being permissive for sample sizes of <490 (1986 approximation) and all practical sample sizes (1999 approximation). Exact confidence intervals for 95% limits of agreement, based on two-sided tolerance factors, can be calculated easily from tables and should be used in preference to the approximate methods, especially for small sample sizes.
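
    The following Python sketch computes the 95% limits of agreement together with large-sample approximate confidence limits in the spirit of Bland and Altman's 1999 formula (our reading of it; the exact tolerance-interval method the paper advocates is not reproduced here), on synthetic data.

        # Hedged sketch: Bland-Altman 95% limits of agreement plus large-sample
        # approximate confidence limits (our reading of the 1999 formula:
        # Var(limit) ~ s^2 * (1/n + 1.96^2 / (2(n-1)))). Synthetic data.
        import numpy as np
        from scipy import stats

        def bland_altman(a, b, alpha=0.05):
            d = np.asarray(a, float) - np.asarray(b, float)
            n, m, s = d.size, d.mean(), d.std(ddof=1)
            loa = (m - 1.96 * s, m + 1.96 * s)
            se = s * np.sqrt(1.0 / n + 1.96**2 / (2 * (n - 1)))
            t = stats.t.ppf(1 - alpha / 2, n - 1)
            return loa, [(l - t * se, l + t * se) for l in loa]

        a = np.random.default_rng(1).normal(10.0, 1.0, 30)
        b = a + np.random.default_rng(2).normal(0.3, 0.5, 30)
        print(bland_altman(a, b))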

  7. Energy conservation - A test for scattering approximations

    NASA Technical Reports Server (NTRS)

    Acquista, C.; Holland, A. C.

    1980-01-01

    The roles of the extinction theorem and energy conservation in obtaining the scattering and absorption cross sections for several light scattering approximations are explored. It is shown that the Rayleigh, Rayleigh-Gans, anomalous diffraction, geometrical optics, and Shifrin approximations all lead to reasonable values of the cross sections, while the modified Mie approximation does not. Further examination of the modified Mie approximation for the ensembles of nonspherical particles reveals additional problems with that method.

  8. A Computer Simulation Analysis of a Suggested Approximate Confidence Interval for System Maintainability.

    DTIC Science & Technology

    The paper presents an accuracy analysis of a suggested approximate confidence interval for system maintainability parameters. Technically, the...using the method of moments. The simulation has application to the classical confidence interval for mean time to repair of a series system, under the

  9. A Novel Method of the Generalized Interval-Valued Fuzzy Rough Approximation Operators

    PubMed Central

    Xue, Tianyu; Xue, Zhan'ao; Cheng, Huiru; Liu, Jie; Zhu, Tailong

    2014-01-01

    Rough set theory is a suitable tool for dealing with the imprecision, uncertainty, incompleteness, and vagueness of knowledge. In this paper, new lower and upper approximation operators for generalized fuzzy rough sets are constructed, and their definitions are expanded to the interval-valued environment. Furthermore, the properties of this type of rough sets are analyzed. These operators are shown to be equivalent to the generalized interval fuzzy rough approximation operators introduced by Dubois, which are determined by any interval-valued fuzzy binary relation expressed in a generalized approximation space. Main properties of these operators are discussed under different interval-valued fuzzy binary relations, and the illustrative examples are given to demonstrate the main features of the proposed operators. PMID:25162065

  10. Approximate representations of random intervals for hybrid uncertainty quantification in engineering modeling

    SciTech Connect

    Joslyn, C.

    2004-01-01

    We review our approach to the representation and propagation of hybrid uncertainties through high-complexity models, based on quantities known as random intervals. These structures have a variety of mathematical descriptions, for example as interval-valued random variables, statistical collections of intervals, or Dempster-Shafer bodies of evidence on the Borel field. But methods which provide simpler, albeit approximate, representations of random intervals are highly desirable, including p-boxes and traces. Each random interval, through its cumulative belief and plausibility measure functions, generates a unique p-box whose constituent CDFs are all of those consistent with the random interval. In turn, each p-box generates an equivalence class of random intervals consistent with it. Then, each p-box necessarily generates a unique trace which stands as the fuzzy set representation of the p-box or random interval. In turn, each trace generates an equivalence class of p-boxes. The heart of our approach is to try to understand the tradeoffs between error and simplicity introduced when p-boxes or traces are used to stand in for various random interval operations. For example, Joslyn has argued that for elicitation and representation tasks, traces can be the most appropriate structure, and has proposed a method for the generation of canonical random intervals from elicited traces. But alternatively, models built as algebraic equations of uncertainty-valued variables (in our case, random-interval-valued) propagate uncertainty through convolution operations on basic algebraic expressions, and while convolution operations are defined on all three structures, we have observed that the results of only some of these operations are preserved as one moves through these three levels of specificity. We report on the status and progress of this modeling approach concerning the relations between these mathematical structures within this overall framework.

  11. Non-Gaussian distributions of melodic intervals in music: The Lévy-stable approximation

    NASA Astrophysics Data System (ADS)

    Niklasson, Gunnar A.; Niklasson, Maria H.

    2015-11-01

    The analysis of structural patterns in music is of interest in order to increase our fundamental understanding of music, as well as for devising algorithms for computer-generated music, so-called algorithmic composition. Musical melodies can be analyzed in terms of a “music walk” between the pitches of successive tones in a notescript, in analogy with the “random walk” model commonly used in physics. We find that the distribution of melodic intervals between tones can be approximated with a Lévy-stable distribution. Since music also exhibits self-affine scaling, we propose that the “music walk” should be modelled as a Lévy motion. We find that the Lévy motion model captures basic structural patterns in classical as well as in folk music.
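
    A hedged sketch of the fitting step in Python: scipy's levy_stable law fitted to the "music walk" increments of an invented pitch sequence (the paper's corpus and estimation details are not reproduced; the MLE fit can be slow on large samples).

        # Hedged sketch: fit a Lévy-stable law to melodic-interval increments.
        # The MIDI pitch list is invented for illustration; scipy's MLE-based
        # levy_stable fit can be slow for large samples.
        import numpy as np
        from scipy.stats import levy_stable

        pitches = np.array([60, 62, 64, 62, 67, 65, 64, 62, 60, 67, 69, 67, 65, 64, 62, 60])
        steps = np.diff(pitches)          # the "music walk" increments

        alpha, beta, loc, scale = levy_stable.fit(steps)
        print(f"stability index alpha = {alpha:.2f}  (alpha < 2: heavier tails than Gaussian)")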

  12. Analysis of accuracy of approximate, simultaneous, nonlinear confidence intervals on hydraulic heads in analytical and numerical test cases

    USGS Publications Warehouse

    Hill, M.C.

    1989-01-01

    Inaccuracies in parameter values, parameterization, stresses, and boundary conditions of analytical solutions and numerical models of groundwater flow produce errors in simulated hydraulic heads. These errors can be quantified in terms of approximate, simultaneous, nonlinear confidence intervals presented in the literature. Approximate confidence intervals can be applied in both error and sensitivity analysis and can be used prior to calibration or when calibration was accomplished by trial and error. The method is expanded for use in numerical problems, and the accuracy of the approximate intervals is evaluated using Monte Carlo runs. Four test cases are reported. -from Author

  13. Accuracy in Parameter Estimation for the Root Mean Square Error of Approximation: Sample Size Planning for Narrow Confidence Intervals

    ERIC Educational Resources Information Center

    Kelley, Ken; Lai, Keke

    2011-01-01

    The root mean square error of approximation (RMSEA) is one of the most widely reported measures of misfit/fit in applications of structural equation modeling. When the RMSEA is of interest, so too should be the accompanying confidence interval. A narrow confidence interval reveals that the plausible parameter values are confined to a relatively…

  14. Approximate Interval Estimation Methods for the Reliability of Systems Using Discrete Component Data

    DTIC Science & Technology

    1990-09-01

    Three lower confidence interval estimation procedures for system reliability of coherent systems with cyclic components are developed and their...components. The combined procedure may yield a reasonably accurate lower confidence interval procedure for the reliability of coherent systems with mixtures of continuous and cyclic components.

  15. Energy flow: image correspondence approximation for motion analysis

    NASA Astrophysics Data System (ADS)

    Wang, Liangliang; Li, Ruifeng; Fang, Yajun

    2016-04-01

    We propose a correspondence approximation approach between temporally adjacent frames for motion analysis. First, an energy map is established to represent image spatial features on multiple scales using Gaussian convolution. On this basis, the energy flow at each layer is estimated using Gauss-Seidel iteration according to the energy invariance constraint. More specifically, at the core of the energy invariance constraint is an "energy conservation law" assuming that the spatial energy distribution of an image does not change significantly with time. Finally, the energy flow field at different layers is reconstructed by considering different degrees of smoothness. Due to its multiresolution origin and energy-based implementation, our algorithm is able to quickly address correspondence-searching issues in spite of background noise or illumination variation. We apply our correspondence approximation method to motion analysis, and experimental results demonstrate its applicability.

  16. Correlation Energies from the Two-Component Random Phase Approximation.

    PubMed

    Kühn, Michael

    2014-02-11

    The correlation energy within the two-component random phase approximation accounting for spin-orbit effects is derived. The resulting plasmon equation is rewritten, analogously to the scalar relativistic case, in terms of the trace of two Hermitian matrices for (Kramers-restricted) closed-shell systems and then represented as an integral over imaginary frequency using the resolution of the identity approximation. The final expression is implemented in the TURBOMOLE program suite. The code is applied to the computation of equilibrium distances and vibrational frequencies of heavy diatomic molecules. The efficiency is demonstrated by calculation of the relative energies of the Oh-, D4h-, and C5v-symmetric isomers of Pb6. Results within the random phase approximation are obtained based on two-component Kohn-Sham reference-state calculations, using effective-core potentials. These values are finally compared to other two-component and scalar relativistic methods, as well as experimental data.

  17. From free energy to expected energy: Improving energy-based value function approximation in reinforcement learning.

    PubMed

    Elfwing, Stefan; Uchibe, Eiji; Doya, Kenji

    2016-12-01

    Free-energy based reinforcement learning (FERL) was proposed for learning in high-dimensional state and action spaces. However, the FERL method only really works well with binary, or close to binary, state input, where the number of active states is fewer than the number of non-active states. In the FERL method, the value function is approximated by the negative free energy of a restricted Boltzmann machine (RBM). In our earlier study, we demonstrated that the performance and the robustness of the FERL method can be improved by scaling the free energy by a constant that is related to the size of the network. In this study, we propose that RBM function approximation can be further improved by approximating the value function by the negative expected energy (EERL), instead of the negative free energy, as well as being able to handle continuous state input. We validate our proposed method by demonstrating that EERL: (1) outperforms FERL, as well as standard neural network and linear function approximation, for three versions of a gridworld task with high-dimensional image state input; (2) achieves new state-of-the-art results in stochastic SZ-Tetris in both model-free and model-based learning settings; and (3) significantly outperforms FERL and standard neural network function approximation for a robot navigation task with raw and noisy RGB images as state input and a large number of actions.
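
    The two value approximations named above differ only in how the RBM energy is summarized. A minimal numpy sketch with random, untrained placeholder weights (our illustration, not the authors' implementation):

        # Hedged sketch: negative free energy (FERL) vs negative expected energy
        # (EERL) of an RBM, with random untrained placeholder weights.
        import numpy as np

        def sigmoid(x):
            return 1.0 / (1.0 + np.exp(-x))

        rng = np.random.default_rng(0)
        n_v, n_h = 8, 4
        W = rng.normal(0, 0.1, (n_v, n_h))        # visible-hidden weights
        b = rng.normal(0, 0.1, n_v)               # visible biases
        c = rng.normal(0, 0.1, n_h)               # hidden biases

        v = rng.integers(0, 2, n_v).astype(float) # a state(-action) input vector
        pre = c + v @ W                           # hidden pre-activations

        neg_free_energy = v @ b + np.logaddexp(0.0, pre).sum()   # -F(v), FERL value
        p = sigmoid(pre)                                         # E[h | v]
        neg_expected_energy = v @ b + p @ c + v @ W @ p          # -<E(v,h)>, EERL value
        print(neg_free_energy, neg_expected_energy)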

  18. Restricted second random phase approximations and Tamm-Dancoff approximations for electronic excitation energy calculations

    NASA Astrophysics Data System (ADS)

    Peng, Degao; Yang, Yang; Zhang, Peng; Yang, Weitao

    2014-12-01

    In this article, we systematically develop second random phase approximations (RPA) and Tamm-Dancoff approximations (TDA) of particle-hole and particle-particle channels for calculating molecular excitation energies. The second particle-hole RPA/TDA can capture double excitations missed by the particle-hole RPA/TDA and time-dependent density-functional theory (TDDFT), while the second particle-particle RPA/TDA recovers non-highest-occupied-molecular-orbital excitations missed by the particle-particle RPA/TDA. With proper orbital restrictions, these restricted second RPAs and TDAs have a formal scaling of only O(N⁴). The restricted versions of second RPAs and TDAs are tested with various small molecules to show some positive results. Data suggest that the restricted second particle-hole TDA (r2ph-TDA) has the best overall performance with a correlation coefficient similar to TDDFT, but with a larger negative bias. The negative bias of the r2ph-TDA may be induced by unaccounted ground-state correlation energy; this remains to be investigated. Overall, the r2ph-TDA is recommended to study systems with both single and some low-lying double excitations with moderate accuracy. Some expressions for excited-state property evaluations, such as ⟨Ŝ²⟩, are also developed and tested.

  19. Restricted second random phase approximations and Tamm-Dancoff approximations for electronic excitation energy calculations

    SciTech Connect

    Peng, Degao; Yang, Yang; Zhang, Peng; Yang, Weitao

    2014-12-07

    In this article, we systematically develop second random phase approximations (RPA) and Tamm-Dancoff approximations (TDA) of particle-hole and particle-particle channels for calculating molecular excitation energies. The second particle-hole RPA/TDA can capture double excitations missed by the particle-hole RPA/TDA and time-dependent density-functional theory (TDDFT), while the second particle-particle RPA/TDA recovers non-highest-occupied-molecular-orbital excitations missed by the particle-particle RPA/TDA. With proper orbital restrictions, these restricted second RPAs and TDAs have a formal scaling of only O(N⁴). The restricted versions of second RPAs and TDAs are tested with various small molecules to show some positive results. Data suggest that the restricted second particle-hole TDA (r2ph-TDA) has the best overall performance with a correlation coefficient similar to TDDFT, but with a larger negative bias. The negative bias of the r2ph-TDA may be induced by unaccounted ground-state correlation energy; this remains to be investigated. Overall, the r2ph-TDA is recommended to study systems with both single and some low-lying double excitations with moderate accuracy. Some expressions for excited-state property evaluations, such as ⟨Ŝ²⟩, are also developed and tested.

  20. Restricted second random phase approximations and Tamm-Dancoff approximations for electronic excitation energy calculations.

    PubMed

    Peng, Degao; Yang, Yang; Zhang, Peng; Yang, Weitao

    2014-12-07

    In this article, we systematically develop second random phase approximations (RPA) and Tamm-Dancoff approximations (TDA) of particle-hole and particle-particle channels for calculating molecular excitation energies. The second particle-hole RPA/TDA can capture double excitations missed by the particle-hole RPA/TDA and time-dependent density-functional theory (TDDFT), while the second particle-particle RPA/TDA recovers non-highest-occupied-molecular-orbital excitations missed by the particle-particle RPA/TDA. With proper orbital restrictions, these restricted second RPAs and TDAs have a formal scaling of only O(N⁴). The restricted versions of second RPAs and TDAs are tested with various small molecules to show some positive results. Data suggest that the restricted second particle-hole TDA (r2ph-TDA) has the best overall performance with a correlation coefficient similar to TDDFT, but with a larger negative bias. The negative bias of the r2ph-TDA may be induced by unaccounted ground-state correlation energy; this remains to be investigated. Overall, the r2ph-TDA is recommended to study systems with both single and some low-lying double excitations with moderate accuracy. Some expressions for excited-state property evaluations, such as ⟨Ŝ²⟩, are also developed and tested.

  1. Approximate confidence intervals for moment-based estimators of the between-study variance in random effects meta-analysis.

    PubMed

    Jackson, Dan; Bowden, Jack; Baker, Rose

    2015-12-01

    Moment-based estimators of the between-study variance are very popular when performing random effects meta-analyses. This type of estimation has many advantages including computational and conceptual simplicity. Furthermore, by using these estimators in large samples, valid meta-analyses can be performed without the assumption that the treatment effects follow a normal distribution. Recently proposed moment-based confidence intervals for the between-study variance are exact under the random effects model but are quite elaborate. Here, we present a much simpler method for calculating approximate confidence intervals of this type. This method uses variance-stabilising transformations as its basis and can be used for a very wide variety of moment-based estimators in both the random effects meta-analysis and meta-regression models.
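
    For context, the moment-based point estimator that such intervals are built around is the DerSimonian-Laird estimator; here is a minimal Python sketch of it with invented study data (the paper's variance-stabilising interval construction itself is not reproduced).

        # Hedged sketch: the DerSimonian-Laird moment estimator of the
        # between-study variance tau^2, with invented study data.
        import numpy as np

        def dersimonian_laird(y, v):
            """y: study effect estimates; v: their within-study variances."""
            y, w = np.asarray(y, float), 1.0 / np.asarray(v, float)
            mu_fixed = np.sum(w * y) / np.sum(w)
            q = np.sum(w * (y - mu_fixed) ** 2)                  # Cochran's Q
            denom = np.sum(w) - np.sum(w ** 2) / np.sum(w)
            return max(0.0, (q - (len(y) - 1)) / denom)

        y = [0.30, 0.10, 0.45, 0.21, 0.60]    # invented study effects
        v = [0.04, 0.09, 0.05, 0.02, 0.12]    # invented within-study variances
        print(dersimonian_laird(y, v))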

  2. Energy equipartitioning in the classical time-dependent Hartree approximation

    NASA Astrophysics Data System (ADS)

    Straub, John E.; Karplus, Martin

    1991-05-01

    In the classical time-dependent Hartree approximation (TDH), the dynamics of a single molecule is approximated by that of a "field" (each field being N "copies" of the molecule which are transparent to one another while interacting with the system via a scaled force). It is shown that when some molecules are represented by a field of copies, while other molecules are represented normally, the average kinetic energy of the system increases linearly with the number of copies and diverges in the limit of large N. Nevertheless, the TDH method with appropriate energy scaling can serve as a useful means of enhancing the configurational sampling for problems involving coupled systems with disparate numbers of degrees of freedom.

  3. Approximate scaling properties of RNA free energy landscapes

    NASA Technical Reports Server (NTRS)

    Baskaran, S.; Stadler, P. F.; Schuster, P.

    1996-01-01

    RNA free energy landscapes are analysed by means of "time series" that are obtained from random walks restricted to excursion sets. The power spectra, the scaling of the jump size distribution, and the scaling of the curve length measured with different yardstick lengths are used to describe the structure of these "time series". Although they are stationary by construction, we find that their local behavior is consistent with both AR(1) and self-affine processes. Random walks confined to excursion sets (i.e., with the restriction that the fitness value exceeds a certain threshold at each step) exhibit essentially the same statistics as free random walks. We find that an AR(1) time series is in general approximately self-affine on timescales up to approximately the correlation length. We present an empirical relation between the correlation parameter ρ of the AR(1) model and the exponents characterizing self-affinity.

  4. Flux tube spectra from approximate integrability at low energies

    NASA Astrophysics Data System (ADS)

    Dubovsky, S.; Flauger, R.; Gorbenko, V.

    2015-03-01

    We provide a detailed introduction to a method we recently proposed for calculating the spectrum of excitations of effective strings such as QCD flux tubes. The method relies on the approximate integrability of the low-energy effective theory describing the flux tube excitations and is based on the thermodynamic Bethe ansatz. The approximate integrability is a consequence of the Lorentz symmetry of QCD. For excited states, the convergence of the thermodynamic Bethe ansatz technique is significantly better than that of the traditional perturbative approach. We apply the new technique to the lattice spectra for fundamental flux tubes in gluodynamics in D = 3 + 1 and D = 2 + 1, and to k-strings in gluodynamics in D = 2 + 1. We identify a massive pseudoscalar resonance on the worldsheet of the confining strings in SU(3) gluodynamics in D = 3 + 1, and massive scalar resonances on the worldsheet of k = 2, 3 strings in SU(6) gluodynamics in D = 2 + 1.

  5. Approximate Confidence Intervals for Standardized Effect Sizes in the Two-Independent and Two-Dependent Samples Design

    ERIC Educational Resources Information Center

    Viechtbauer, Wolfgang

    2007-01-01

    Standardized effect sizes and confidence intervals thereof are extremely useful devices for comparing results across different studies using scales with incommensurable units. However, exact confidence intervals for standardized effect sizes can usually be obtained only via iterative estimation procedures. The present article summarizes several…

  6. Approximate theory of the electromagnetic energy of a solenoid in special relativity

    NASA Astrophysics Data System (ADS)

    Prastyaningrum, I.; Kartikaningsih, S.

    2017-01-01

    A solenoid is a device that is often used in electronic devices. An electrified solenoid produces a magnetic field. In our analysis, we focus on the electromagnetic energy of the solenoid geometry, which we treat with a theoretical approach in special relativity. Our approach starts from the Biot-Savart law and the Lorentz force: the special-relativistic treatment is developed from the Biot-Savart law, and the energy is derived from the Lorentz force by first determining the momentum equation. We chose the solenoid geometry with the goal that, in the future, the results can be used to improve the efficiency of electric motors.

  7. Long GRB with Additional High Energy Maxima after the End of the Low Energy T90 Intervals

    NASA Astrophysics Data System (ADS)

    Arkhangelskaja, Irene; Zenin, Alexander; Kirin, Dmitry; Voevodina, Elena

    2013-01-01

    To date, high-energy γ-emission from GRBs has been observed mostly by detectors onboard the Fermi and AGILE satellites. In most GRBs the high-energy γ-emission is registered somewhat later than the low-energy trigger and lasts several hundreds of seconds, but its maxima lie within the low-energy t90 intervals for both short and long bursts. The temporal profiles of GRB090323, GRB090328 and GRB090626, however, show additional maxima after the low-energy t90 intervals have finished. Analysis of the temporal profiles of these bursts shows that faint peaks in the low-energy bands close to the ends of the low-energy t90 intervals preceded such maxima. Moreover, according to preliminary analysis, the low-energy spectral index β of these events behaves differently from that of usual GRBs. We suggest that these GRBs could be separated out as a distinct GRB type. The properties of this new GRB type are discussed in the present article.

  8. Excitation energies from extended random phase approximation employed with approximate one- and two-electron reduced density matrices

    NASA Astrophysics Data System (ADS)

    Chatterjee, Koushik; Pernal, Katarzyna

    2012-11-01

    Starting from Rowe's equation of motion we derive extended random phase approximation (ERPA) equations for excitation energies. The ERPA matrix elements are expressed in terms of the correlated ground state one- and two-electron reduced density matrices, 1- and 2-RDM, respectively. Three ways of obtaining an approximate 2-RDM are considered: linearization of the ERPA equations, obtaining 2-RDM from density matrix functionals, and employing 2-RDM corresponding to an antisymmetrized product of strongly orthogonal geminals (APSG) ansatz. Applying the ERPA equations with the exact 2-RDM to a hydrogen molecule reveals that the resulting ¹Σg⁺ excitation energies are not exact. A correction to the ERPA excitation operator involving some double excitations is proposed, leading to the ERPA2 approach, which employs the APSG one- and two-electron reduced density matrices. For two-electron systems ERPA2 satisfies a consistency condition and yields exact singlet excitations. It is shown that 2-RDM corresponding to the APSG theory employed in the ERPA2 equations yields excellent singlet excitation energies for Be and LiH systems, and for the N2 molecule the quality of the potential energy curves is at the coupled cluster singles and doubles level. ERPA2 nearly satisfies the consistency condition for small molecules, which partially explains its good performance.

  9. Surface Segregation Energies of BCC Binaries from Ab Initio and Quantum Approximate Calculations

    NASA Technical Reports Server (NTRS)

    Good, Brian S.

    2003-01-01

    We compare dilute-limit segregation energies for selected BCC transition metal binaries computed using ab initio and quantum approximate energy methods. Ab initio calculations are carried out using the CASTEP plane-wave pseudopotential computer code, while quantum approximate results are computed using the Bozzolo-Ferrante-Smith (BFS) method with the most recent parameterization. Quantum approximate segregation energies are computed with and without atomistic relaxation. The ab initio calculations are performed without relaxation for the most part, but predicted relaxations from quantum approximate calculations are used in selected cases to compute approximate relaxed ab initio segregation energies. Results are discussed within the context of segregation models driven by strain and bond-breaking effects. We compare our results with other quantum approximate and ab initio theoretical work, and available experimental results.

  10. Measurement of the lithium 10p fine structure interval and absolute energy

    SciTech Connect

    Oxley, Paul; Collins, Patrick

    2010-02-15

    We report a measurement of the fine structure interval of the ⁷Li 10p atomic state with a precision significantly better than previous measurements of fine structure intervals of Rydberg ⁷Li p states. Our result of 74.97(74) MHz provides an experimental value for the only n=10 fine structure interval which is yet to be calculated. We also report a measurement of the absolute energy of the 10p state and its quantum defect, which are, respectively, 42379.498(23) cm⁻¹ and 0.04694(10). These results are in good agreement with recent calculations.

  11. Density-functional correction of random-phase-approximation correlation with results for jellium surface energies

    NASA Astrophysics Data System (ADS)

    Kurth, Stefan; Perdew, John P.

    1999-04-01

    Since long-range electron-electron correlation is treated properly in the random phase approximation (RPA), we define short-range correlation as the correction to the RPA. The effects of short-range correlation are investigated here in the local spin density (LSD) approximation and the generalized gradient approximation (GGA). Results are presented for atoms, molecules, and jellium surfaces. It is found that (1) short-range correlation energies are less sensitive to the inclusion of density gradients than are full correlation energies, and (2) short-range correlation makes a surprisingly small contribution to surface and molecular atomization energies. In order to improve the accuracy of electronic-structure calculations, we therefore combine a GGA treatment of short-range correlation with a full RPA treatment of the exchange-correlation energy. This approach leads to jellium surface energies close to those of the LSD approximation for exchange and correlation together (but not for each separately).

  12. The performance of density functional approximations for the structures and relative energies of minimum energy crossing points

    NASA Astrophysics Data System (ADS)

    Abate, Bayileyegn A.; Peralta, Juan E.

    2013-12-01

    The structural parameters and relative energies of the minimum-energy crossing points (MECPs) of eight small molecules are calculated using five different representative density functional theory approximations as well as MP2, MP4, and CCSD(T) as a reference. Compared to high-level wavefunction methods, the main structural features of the MECPs of the systems included in this Letter are reproduced reasonably well by density functional approximations, in agreement with previous works. Our results show that when high-level wavefunction methods are computationally prohibitive, density functional approximations offer a good alternative for locating and characterizing the MECP in spin-forbidden chemical reactions.

  13. Multiscale cross-approximate entropy analysis as a measurement of complexity between ECG R-R interval and PPG pulse amplitude series among the normal and diabetic subjects.

    PubMed

    Wu, Hsien-Tsai; Lee, Chih-Yuan; Liu, Cyuan-Cin; Liu, An-Bang

    2013-01-01

    Physiological signals often show complex fluctuation (CF) under the dual influence of temporal and spatial scales, and CF can be used to assess the health of physiologic systems in the human body. This study applied multiscale cross-approximate entropy (MC-ApEn) to quantify the complex fluctuation between R-R interval series and photoplethysmography amplitude series. All subjects were divided into the following two groups: healthy upper middle-aged subjects (Group 1, age range: 41-80 years, n = 27) and upper middle-aged subjects with type 2 diabetes (Group 2, age range: 41-80 years, n = 24). There was a significant difference in heart rate variability (LHR) between Groups 1 and 2 (1.94 ± 1.21 versus 1.32 ± 1.00, P = 0.031). Results also demonstrated a difference in the sum of large-scale MC-ApEn (MC-ApEn(LS)) (5.32 ± 0.50 versus 4.74 ± 0.78, P = 0.003). This parameter shows good agreement with the pulse-pulse interval and pulse amplitude ratio (PAR), a simplified assessment of baroreflex activity. In conclusion, this study employed the MC-ApEn method, integrating multiple temporal and spatial scales, to quantify the complex interaction between the two physical signals. The MC-ApEn(LS) parameter could accurately reflect the disease process in diabetics and might be another way of assessing autonomic nerve function.
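
    A minimal Python sketch of the single-scale cross-approximate entropy underlying MC-ApEn, on synthetic stand-ins for the two series (the multiscale coarse-graining step and the study's data are not reproduced):

        # Hedged sketch: single-scale cross-approximate entropy between two
        # standardized series (synthetic stand-ins for R-R intervals and pulse
        # amplitudes); the multiscale coarse-graining step is omitted.
        import numpy as np

        def cross_apen(u, v, m=2, r=0.2):
            u = (u - u.mean()) / u.std()
            v = (v - v.mean()) / v.std()
            def phi(m):
                n = len(u) - m + 1
                xu = np.array([u[i:i + m] for i in range(n)])
                xv = np.array([v[i:i + m] for i in range(n)])
                d = np.max(np.abs(xu[:, None, :] - xv[None, :, :]), axis=2)
                frac = (d <= r).mean(axis=1)    # matching v-templates per u-template
                return np.log(np.maximum(frac, 1e-10)).mean()   # guard: no matches
            return phi(m) - phi(m + 1)

        rng = np.random.default_rng(0)
        rr = np.cumsum(rng.normal(0, 1, 400))   # stand-in for R-R interval series
        amp = rr + rng.normal(0, 0.5, 400)      # correlated stand-in amplitudes
        print(cross_apen(rr, amp))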

  14. ERmod: fast and versatile computation software for solvation free energy with approximate theory of solutions.

    PubMed

    Sakuraba, Shun; Matubayasi, Nobuyuki

    2014-08-05

    ERmod is a software package to efficiently and approximately compute the solvation free energy using the method of energy representation. Molecular simulation is to be conducted for two condensed-phase systems: the solution of interest and the reference solvent with test-particle insertion of the solute. The subprogram ermod in ERmod then provides a set of energy distribution functions from the simulation trajectories, and another subprogram slvfe determines the solvation free energy from the distribution functions through an approximate functional. This article describes the design and implementation of ERmod, and illustrates its performance in solvent water for two organic solutes and two protein solutes. In fact, the free-energy computation with ERmod is not restricted to solvation in homogeneous media such as fluids and polymers; it can also treat binding into weakly ordered systems with nano-inhomogeneity such as micelles and lipid membranes. ERmod is available on the web at http://sourceforge.net/projects/ermod.

  15. Numerical integration for ab initio many-electron self energy calculations within the GW approximation

    SciTech Connect

    Liu, Fang; Lin, Lin; Vigil-Fowler, Derek; Lischner, Johannes; Kemper, Alexander F.; Sharifzadeh, Sahar; Jornada, Felipe H. da; Deslippe, Jack; Yang, Chao; and others

    2015-04-01

    We present a numerical integration scheme for evaluating the convolution of a Green's function with a screened Coulomb potential on the real axis in the GW approximation of the self energy. Our scheme takes the zero broadening limit in Green's function first, replaces the numerator of the integrand with a piecewise polynomial approximation, and performs principal value integration on subintervals analytically. We give the error bound of our numerical integration scheme and show by numerical examples that it is more reliable and accurate than the standard quadrature rules such as the composite trapezoidal rule. We also discuss the benefit of using different self energy expressions to perform the numerical convolution at different frequencies.

  16. Magnetotail energy storage and release during the CDAW 6 substorm analysis intervals

    NASA Technical Reports Server (NTRS)

    Baker, D. N.; Fritz, T. A.; Mcpherron, R. L.; Fairfield, D. H.; Kamide, Y.; Baumjohann, W.

    1985-01-01

    The concept of the Coordinated Data Analysis Workshop (CDAW) grew out of the International Magnetospheric Study (IMS) program. According to this concept, data are to be pooled from a wide variety of spacecraft and ground-based sources for limited time intervals. These data are to provide the basis for the performance of very detailed correlative analyses, usually with fairly limited physical problems in mind. However, in the case of CDAW 6, truly global goals are involved. The primary goal is to trace the flow of energy from the solar wind through the magnetosphere to its ultimate dissipation by substorm processes. The present investigation has the specific goal of examining the evidence for the storage of solar wind energy in the magnetotail prior to substorm expansion phase onsets. Of particular interest is the determination, in individual substorm cases, of the time delays between the loading of energy into the magnetospheric system and the subsequent unloading of this energy.

  17. Expeditious Stochastic Calculation of Random-Phase Approximation Energies for Thousands of Electrons in Three Dimensions.

    PubMed

    Neuhauser, Daniel; Rabani, Eran; Baer, Roi

    2013-04-04

    A fast method is developed for calculating the random phase approximation (RPA) correlation energy for density functional theory. The correlation energy is given by a trace over a projected RPA response matrix, and the trace is taken by a stochastic approach using random perturbation vectors. For a fixed statistical error in the total energy per electron, the method scales, at most, quadratically with the system size; however, in practice, due to self-averaging, it requires less statistical sampling as the system grows, and the performance is close to linear scaling. We demonstrate the method by calculating the RPA correlation energy for cadmium selenide and silicon nanocrystals with over 1500 electrons. We find that the RPA correlation energies per electron are largely independent of the nanocrystal size. In addition, we show that a correlated sampling technique enables calculation of the energy difference between two slightly distorted configurations with scaling and a statistical error similar to that of the total energy per electron.
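
    The stochastic trace evaluation at the heart of the method can be illustrated with a generic Hutchinson-type estimator; the Python sketch below uses a small dense stand-in matrix rather than an actual projected RPA response matrix.

        # Hedged sketch of the core stochastic idea: estimating a trace from
        # random +/-1 probe vectors (Hutchinson-type), here on a small dense
        # stand-in matrix rather than a projected RPA response matrix.
        import numpy as np

        rng = np.random.default_rng(0)
        n = 500
        B = rng.normal(size=(n, n))
        A = B @ B.T / n                            # symmetric stand-in matrix

        samples = []
        for _ in range(200):
            z = rng.choice([-1.0, 1.0], size=n)    # Rademacher probe vector
            samples.append(z @ (A @ z))            # needs only matrix-vector products
        print(np.mean(samples), np.trace(A))       # stochastic estimate vs exact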

  18. Second-order approximation for heat conduction: dissipation principle and free energies.

    PubMed

    Amendola, Giovambattista; Fabrizio, Mauro; Golden, Murrough; Lazzari, Barbara

    2016-02-01

    In the context of new models of heat conduction, the second-order approximation of Tzou's theory, derived by Quintanilla and Racke, has been studied recently by two of the present authors, where it was proved equivalent to a fading memory material. The importance of determining free energy functionals for such materials, and indeed for any material with memory, is emphasized. Because the kernel does not satisfy certain convexity restrictions that allow us to obtain various traditional free energies for materials with fading memory, it is necessary to restrict the study to the minimum and related free energies, which do not require these restrictions. Thus, the major part of this work is devoted to deriving an explicit expression for the minimum free energy. Simple modifications of this expression also give an intermediate free energy and the maximum free energy for the material. These derivations differ in certain important respects from earlier work on such free energies.

  19. Second-order approximation for heat conduction: dissipation principle and free energies

    PubMed Central

    Amendola, Giovambattista; Golden, Murrough

    2016-01-01

    In the context of new models of heat conduction, the second-order approximation of Tzou's theory, derived by Quintanilla and Racke, has been studied recently by two of the present authors, where it was proved equivalent to a fading memory material. The importance of determining free energy functionals for such materials, and indeed for any material with memory, is emphasized. Because the kernel does not satisfy certain convexity restrictions that allow us to obtain various traditional free energies for materials with fading memory, it is necessary to restrict the study to the minimum and related free energies, which do not require these restrictions. Thus, the major part of this work is devoted to deriving an explicit expression for the minimum free energy. Simple modifications of this expression also give an intermediate free energy and the maximum free energy for the material. These derivations differ in certain important respects from earlier work on such free energies. PMID:27118896

  20. Approximate method of free energy calculation for spin system with arbitrary connection matrix

    NASA Astrophysics Data System (ADS)

    Kryzhanovsky, Boris; Litinskii, Leonid

    2015-01-01

    The proposed method of free energy calculation is based on the approximation of the energy distribution in the microcanonical ensemble by a Gaussian distribution. We expect our approach to be effective for systems with long-range interaction, where a large coordination number q ensures the applicability of the central limit theorem. However, the method also provides good results for systems with short-range interaction, where the number q is not so large.
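
    Under one concrete reading of this idea, a system of N spins with 2^N states whose energies are modeled as Gaussian with mean E0 and variance σ² has Z ≈ 2^N · exp(-E0/T + σ²/(2T²)), so F = E0 - σ²/(2T) - N·T·ln 2 (with k_B = 1). A minimal Python sketch (our illustration, not the authors' code):

        # Hedged sketch (our illustration): free energy from a Gaussian model of
        # the energy distribution of an N-spin system with 2^N states, k_B = 1.
        import numpy as np

        def free_energy_gaussian(n_spins, e0, sig2, T):
            log_z = n_spins * np.log(2.0) - e0 / T + sig2 / (2.0 * T * T)
            return -T * log_z                  # F = e0 - sig2/(2T) - N*T*ln 2

        print(free_energy_gaussian(n_spins=100, e0=-50.0, sig2=100.0, T=1.0))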

  1. Dielectric Matrix Formulation of Correlation Energies in the Random Phase Approximation: Inclusion of Exchange Effects.

    PubMed

    Mussard, Bastien; Rocca, Dario; Jansen, Georg; Ángyán, János G

    2016-05-10

    Starting from the general expression for the ground state correlation energy in the adiabatic-connection fluctuation-dissipation theorem (ACFDT) framework, it is shown that the dielectric matrix formulation, which is usually applied to calculate the direct random phase approximation (dRPA) correlation energy, can be used for alternative RPA expressions including exchange effects. Within this framework, the ACFDT analog of the second order screened exchange (SOSEX) approximation leads to a logarithmic formula for the correlation energy similar to the direct RPA expression. Alternatively, the contribution of the exchange can be included in the kernel used to evaluate the response functions. In this case, the use of an approximate kernel is crucial to simplify the formalism and to obtain a correlation energy in logarithmic form. Technical details of the implementation of these methods are discussed, and it is shown that one can take advantage of density fitting or Cholesky decomposition techniques to improve the computational efficiency; a discussion of the numerical quadrature performed on the frequency variable is also provided. A series of test calculations on atomic correlation energies and molecular reaction energies shows that exchange effects are instrumental for improvement over direct RPA results.

  2. Energy Stable Space-Time Discontinuous Galerkin Approximations of the 2-Fluid Plasma Equations

    NASA Astrophysics Data System (ADS)

    Rossmanith, James; Barth, Tim

    2010-11-01

    Energy stable variants of the space-time discontinuous Galerkin (DG) finite element method are developed that approximate the ideal two-fluid plasma equations. Using standard symmetrization techniques, the two-fluid plasma equations are symmetrized via a convex entropy function and the introduction of entropy variables. Using these entropy variables, the source term coupling in the two-fluid plasma equations is shown to have iso-energetic properties so that the source term neither creates nor removes energy from the system. Finite-dimensional approximation spaces built on entropy variables are used in the DG discretization, yielding provable nonlinear stability and exact preservation of this iso-energetic source term property. Numerical results for the two-fluid approximation of magnetic reconnection are presented, verifying and assessing properties of the present method.

  3. Infinite order sudden approximation for rotational energy transfer in gaseous mixtures

    NASA Technical Reports Server (NTRS)

    Goldflam, R.; Kouri, D. J.; Green, S.

    1977-01-01

    Rotational energy transfer in gaseous mixtures is analyzed within the framework of the infinite order sudden (IOS) approximation, and a new derivation of the IOS from the coupled-states Lippmann-Schwinger equation is presented. This approach shows the relation between the IOS and coupled-states T matrices. The general IOS effective cross section can be factored into a finite sum of 'spectroscopic coefficients' and 'dynamical coefficients'. The evaluation of these coefficients is considered. Pressure broadening for the systems HD-He, HCl-He, CO-He, HCl-Ar, and CO2-Ar is calculated, and results based on the IOS approximation are compared with coupled-states results. The IOS approximation is found to be very accurate whenever the rotor spacings are small compared to the kinetic energy, provided closed channels do not play too great a role.

  4. Interval Data Analysis with the Energy Charting and Metrics Tool (ECAM)

    SciTech Connect

    Taasevigen, Danny J.; Katipamula, Srinivas; Koran, William

    2011-07-07

    Analyzing whole-building interval data is an inexpensive but effective way to identify and improve building operations, and ultimately save money. Utilizing the Energy Charting and Metrics Tool (ECAM) add-in for Microsoft Excel, building operators and managers can begin implementing changes to their Building Automation System (BAS) after trending the interval data. The two data components needed for full analyses are whole-building electricity consumption (kW or kWh) and outdoor air temperature (OAT). Using these two pieces of information, a series of plots and charts can be created in ECAM to monitor the building's performance over time, gain knowledge of how the building is operating, and make adjustments to the BAS to improve efficiency and start saving money.
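
    ECAM itself is an Excel add-in, but the kind of chart it builds can be sketched in a few lines of Python; the CSV file and column names below are assumptions for illustration.

        # Hedged sketch of an ECAM-style chart in Python: whole-building demand
        # vs outdoor air temperature, split by assumed occupied hours. The CSV
        # file and the "timestamp", "kW" and "OAT" column names are assumptions.
        import pandas as pd
        import matplotlib.pyplot as plt

        df = pd.read_csv("interval_data.csv", parse_dates=["timestamp"])
        occ = df["timestamp"].dt.hour.between(8, 17)      # assumed occupied hours

        fig, ax = plt.subplots()
        ax.scatter(df.loc[occ, "OAT"], df.loc[occ, "kW"], s=8, label="occupied")
        ax.scatter(df.loc[~occ, "OAT"], df.loc[~occ, "kW"], s=8, label="unoccupied")
        ax.set_xlabel("Outdoor air temperature")
        ax.set_ylabel("Whole-building demand (kW)")
        ax.legend()
        plt.show()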

  5. Exchange-correlation energy from pairing matrix fluctuation and the particle-particle random phase approximation.

    PubMed

    van Aggelen, Helen; Yang, Yang; Yang, Weitao

    2014-05-14

    Despite their unmatched success for many applications, commonly used local, semi-local, and hybrid density functionals still face challenges when it comes to describing long-range interactions, static correlation, and electron delocalization. Density functionals of both the occupied and virtual orbitals are able to address these problems. The particle-hole (ph-) Random Phase Approximation (RPA), a functional of occupied and virtual orbitals, has recently enjoyed a revival within the density functional theory community. Following up on an idea introduced in our recent communication [H. van Aggelen, Y. Yang, and W. Yang, Phys. Rev. A 88, 030501 (2013)], we formulate more general adiabatic connections for the correlation energy in terms of pairing matrix fluctuations described by the particle-particle (pp-) propagator. With numerical examples of the pp-RPA, the lowest-order approximation to the pp-propagator, we illustrate the potential of density functional approximations based on pairing matrix fluctuations. The pp-RPA is size-extensive, self-interaction free, fully anti-symmetric, describes the strong static correlation limit in H2, and eliminates delocalization errors in H2⁺ and other single-bond systems. It gives surprisingly good non-bonded interaction energies, competitive with the ph-RPA, with the correct R⁻⁶ asymptotic decay as a function of the separation R, which we argue is mainly attributable to its correct second-order energy term. While the pp-RPA tends to underestimate absolute correlation energies, it gives good relative energies: much better atomization energies than the ph-RPA, as it has no tendency to underbind, and reaction energies of similar quality. The adiabatic connection in terms of pairing matrix fluctuation paves the way for promising new density functional approximations.

  6. Comparison of overlap-based models for approximating the exchange-repulsion energy.

    PubMed

    Söderhjelm, Pär; Karlström, Gunnar; Ryde, Ulf

    2006-06-28

    Different ways of approximating the exchange-repulsion energy with a classical potential function have been investigated by fitting various expressions to the exact exchange-repulsion energy for a large set of molecular dimers. The expressions involve either the orbital overlap or the electron-density overlap. For comparison, the parameter-free exchange-repulsion model of the effective fragment potential (EFP) is also evaluated. The results show that exchange-repulsion energy is nearly proportional to both the orbital overlap and the density overlap. For accurate results, a distance-dependent correction is needed in both cases. If few parameters are desired, orbital overlap is superior to density overlap, but the fit to density overlap can be significantly improved by introducing more parameters. The EFP performs well, except for delocalized pi systems. However, an overlap expression with a few parameters seems to be slightly more accurate and considerably easier to approximate.

  7. Correlation matrix renormalization approximation for total energy calculations of correlated electron systems

    NASA Astrophysics Data System (ADS)

    Yao, Y. X.; Liu, C.; Liu, J.; Lu, W. C.; Wang, C. Z.; Ho, K. M.

    2013-03-01

    The recently introduced correlation matrix renormalization approximation (CMRA) was further developed by adopting a completely factorizable form for the renormalization z-factors, which assumes the validity of Wick's theorem with respect to the Gutzwiller wave function. This approximation (CMR-II) shows better dissociation behavior than the original one (CMR-I), which was based on the straightforward generalization of the Gutzwiller approximation to two-body interactions. We further improved the performance of the CMRA by redefining the z-factors through a function f(z) of those in CMR-II, which we call CMR-III. We obtained an analytical expression for f(z) by enforcing equality of the energy functional between CMR-III and full configuration interaction for the benchmark minimal-basis H2. We show that CMR-III yields quite good binding energies and dissociation behavior for various hydrogen clusters with a converged basis set. Finally, we apply CMR-III to hydrogen crystal phases and compare the results with quantum Monte Carlo. Research supported by the U.S. Department of Energy, Office of Basic Energy Sciences, Division of Materials Sciences and Engineering. Ames Laboratory is operated for the U.S. DOE by Iowa State University under Contract No. DE-AC02-07CH11358.

  8. Nucleation theory - Is replacement free energy needed? [error analysis of capillary approximation]

    NASA Technical Reports Server (NTRS)

    Doremus, R. H.

    1982-01-01

    It has been suggested that the classical theory of nucleation of liquid from its vapor as developed by Volmer and Weber (1926) needs modification with a factor referred to as the replacement free energy and that the capillary approximation underlying the classical theory is in error. Here, the classical nucleation equation is derived from fluctuation theory, Gibb's result for the reversible work to form a critical nucleus, and the rate of collision of gas molecules with a surface. The capillary approximation is not used in the derivation. The chemical potential of small drops is then considered, and it is shown that the capillary approximation can be derived from thermodynamic equations. The results show that no corrections to Volmer's equation are needed.

  9. Consolidation of hydrophobic transition criteria by using an approximate energy minimization approach.

    PubMed

    Patankar, Neelesh A

    2010-06-01

    Recent experimental work has successfully revealed pressure induced transition from Cassie to Wenzel state on rough hydrophobic substrates. Formulas, based on geometric considerations and imposed pressure, have been developed as transition criteria. In the past, transition has also been considered as a process of overcoming the energy barrier between the Cassie and Wenzel states. A unified understanding of the various considerations of transition has not been apparent. To address this issue, in this work, we consolidate the transition criteria with a homogenized energy minimization approach. This approach decouples the problem of minimizing the energy to wet the rough substrate, from the energy of the macroscopic drop. It is seen that the transition from Cassie to Wenzel state, due to depinning of the liquid-air interface, emerges from the approximate energy minimization approach if the pressure-volume energy associated with the impaled liquid in the roughness is included. This transition can be viewed as a process in which the work done by the pressure force is greater than the barrier due to the surface energy associated with wetting the roughness. It is argued that another transition mechanism, due to a sagging liquid-air interface that touches the bottom of the roughness grooves, is not typically relevant if the substrate roughness is designed such that the Cassie state is at lower energy compared to the Wenzel state.
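
    The energy comparison behind such criteria can be sketched directly from the Wenzel and Cassie-Baxter relations, cos θ_W = r cos θ_Y and cos θ_CB = -1 + φ_s(1 + cos θ_Y), with the lower-energy state being the one with the smaller apparent contact angle. A minimal Python sketch with illustrative parameter values:

        # Hedged sketch: compare Wenzel and Cassie-Baxter apparent contact
        # angles; the state with the smaller apparent angle has the lower
        # energy. Parameter values are illustrative only.
        import numpy as np

        theta_y = np.radians(110.0)   # Young's angle of the flat material
        r = 1.8                       # roughness factor (true/projected area)
        phi_s = 0.3                   # wetted solid fraction in the Cassie state

        cos_w = r * np.cos(theta_y)                       # Wenzel
        cos_cb = -1.0 + phi_s * (1.0 + np.cos(theta_y))   # Cassie-Baxter
        theta_w, theta_cb = np.degrees(np.arccos(np.clip([cos_w, cos_cb], -1, 1)))
        print(f"Wenzel {theta_w:.1f} deg, Cassie {theta_cb:.1f} deg;",
              "lower energy:", "Wenzel" if theta_w < theta_cb else "Cassie")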

  10. A new heuristic method for approximating the number of local minima in partial RNA energy landscapes.

    PubMed

    Albrecht, Andreas A; Day, Luke; Abdelhadi Ep Souki, Ouala; Steinhöfel, Kathleen

    2016-02-01

    The analysis of energy landscapes plays an important role in mathematical modelling, simulation and optimisation. Among the main features of interest are the number and distribution of local minima within the energy landscape. Granier and Kallel proposed in 2002 a new sampling procedure for estimating the number of local minima. In the present paper, we focus on improved heuristic implementations of the general framework devised by Granier and Kallel with regard to run-time behaviour and accuracy of predictions. The new heuristic method is demonstrated for the case of partial energy landscapes induced by RNA secondary structures. While the computation of minimum free energy RNA secondary structures has been studied for a long time, the analysis of folding landscapes has gained momentum over the past years in the context of co-transcriptional folding and deeper insights into cell processes. The new approach has been applied to ten RNA instances of length between 99 nt and 504 nt and their respective partial energy landscapes defined by secondary structures within an energy offset ΔE above the minimum free energy conformation. The number of local minima within the partial energy landscapes ranges from 1440 to 3441. Our heuristic method produces for the best approximations on average a deviation below 3.0% from the true number of local minima.
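
    As a flavor of the problem, the Python sketch below estimates the number of local minima of a toy one-dimensional landscape from repeated greedy descents, using a simple capture-recapture (Chao1) estimate rather than the Granier-Kallel estimator the paper refines.

        # Hedged sketch: estimate the number of local minima of a toy landscape
        # from repeated greedy descents, via a simple capture-recapture (Chao1)
        # estimate (not the Granier-Kallel estimator refined in the paper).
        import numpy as np
        from collections import Counter

        def descend(x, f):
            """Greedy descent on an integer landscape; returns a local minimum."""
            while True:
                best = min((x - 1, x + 1), key=f)
                if f(best) >= f(x):
                    return x
                x = best

        f = lambda x: np.sin(0.7 * x) + 0.01 * x * x      # toy rugged landscape
        rng = np.random.default_rng(0)
        minima = Counter(descend(int(x), f) for x in rng.integers(-100, 100, 300))

        k = len(minima)                                   # distinct minima seen
        f1 = sum(1 for c in minima.values() if c == 1)    # minima seen once
        f2 = sum(1 for c in minima.values() if c == 2)    # minima seen twice
        chao1 = k + f1 * f1 / (2 * f2) if f2 else k
        print(f"observed {k}, Chao1 estimate ~{chao1:.0f}")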

  11. Performance and energy systems contributions during upper-body sprint interval exercise

    PubMed Central

    Franchini, Emerson; Takito, Monica Yuri; Dal’Molin Kiss, Maria Augusta Peduti

    2016-01-01

    The main purpose of this study was to investigate performance and the energy systems contribution during four upper-body Wingate tests interspersed with 3-min intervals. Fourteen well-trained male adult judo athletes voluntarily took part in the present study. These athletes competed at state to national level, were in their competitive period, and were not engaged in any weight-loss procedure. Energy system contributions were estimated using oxygen uptake and blood lactate measurements. The main results indicated a higher glycolytic than oxidative contribution (P<0.001) during bout 1, a lower glycolytic than phosphagen (adenosine triphosphate-creatine phosphate, ATP-PCr) contribution during bout 3 (P<0.001), a lower glycolytic contribution than both the oxidative and ATP-PCr contributions during bout 4 (P<0.001 for both comparisons), and a lower oxidative than ATP-PCr contribution during bout 4 (P=0.040). Across Wingate bouts, the ATP-PCr contribution during bout 1 was lower than that observed during bout 4 (P=0.005), and the glycolytic system presented a higher percentage contribution in the first bout compared to the third and fourth bouts (P<0.001 for both comparisons), and a higher percentage participation in the second compared to the fourth bout (P<0.001). These results suggest that the absolute oxidative and ATP-PCr contributions were kept constant across Wingate tests, but there was an increase in the relative participation of ATP-PCr in bout 4 compared to bout 1, probably due to partial phosphocreatine resynthesis during the intervals and to decreased glycolytic activity. PMID:28119874
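
    A hedged sketch of the standard three-component bookkeeping such estimates rest on (oxidative from exercise oxygen uptake, ATP-PCr from the fast EPOC component, glycolytic from net blood lactate at about 3 ml O2 per kg per mmol/L); the numbers are invented and the paper's exact modelling choices are not reproduced.

        # Hedged sketch of the standard three-way energy bookkeeping: oxidative
        # from exercise VO2, ATP-PCr from the fast EPOC component, glycolytic
        # from net blood lactate (~3 ml O2/kg per mmol/L). Invented inputs.
        O2_ENERGY_KJ_PER_L = 20.9            # caloric equivalent of oxygen

        def energy_contributions(vo2_ex_l, fast_epoc_l, delta_lactate_mmol, mass_kg):
            w_oxid = vo2_ex_l * O2_ENERGY_KJ_PER_L                 # oxidative
            w_pcr = fast_epoc_l * O2_ENERGY_KJ_PER_L               # ATP-PCr
            w_gly = delta_lactate_mmol * 3.0 * mass_kg / 1000.0 * O2_ENERGY_KJ_PER_L
            total = w_oxid + w_pcr + w_gly
            return {name: (kj, 100.0 * kj / total) for name, kj in
                    [("oxidative", w_oxid), ("ATP-PCr", w_pcr), ("glycolytic", w_gly)]}

        print(energy_contributions(vo2_ex_l=1.2, fast_epoc_l=1.5,
                                   delta_lactate_mmol=6.0, mass_kg=80.0))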

  12. Impact of nonlocal correlations over different energy scales: A dynamical vertex approximation study

    NASA Astrophysics Data System (ADS)

    Rohringer, G.; Toschi, A.

    2016-09-01

    In this paper, we investigate how nonlocal correlations affect, selectively, the physics of correlated electrons over different energy scales, from the Fermi level to the band edges. This goal is achieved by applying a diagrammatic extension of dynamical mean field theory (DMFT), the dynamical vertex approximation (DΓA), to study several spectral and thermodynamic properties of the unfrustrated Hubbard model in two and three dimensions. Specifically, we focus first on the low-energy regime by computing the electronic scattering rate and the quasiparticle mass renormalization for decreasing temperatures at a fixed interaction strength. This way, we obtain a precise characterization of the several steps through which the Fermi-liquid physics is progressively destroyed by nonlocal correlations. Our study is then extended to a broader energy range, by analyzing the temperature behavior of the kinetic and potential energy, as well as of the corresponding energy distribution functions. Our findings allow us to identify a smooth but definite evolution of the nature of nonlocal correlations by increasing interaction: They either increase or decrease the kinetic energy w.r.t. DMFT depending on the interaction strength being weak or strong, respectively. This reflects the corresponding evolution of the ground state from a nesting-driven (Slater) to a superexchange-driven (Heisenberg) antiferromagnet (AF), whose fingerprints are, thus, recognizable in the spatial correlations of the paramagnetic phase. Finally, a critical analysis of our numerical results of the potential energy at the largest interaction allows us to identify possible procedures to improve the ladder-based algorithms adopted in the dynamical vertex approximation.

  13. Communication: Multipole approximations of distant pair energies in local correlation methods with pair natural orbitals

    NASA Astrophysics Data System (ADS)

    Werner, Hans-Joachim

    2016-11-01

    The accuracy of multipole approximations for distant pair energies in local second-order Møller-Plesset perturbation theory (LMP2) as introduced by Hetzer et al. [Chem. Phys. Lett. 290, 143 (1998)] is investigated for three chemical reactions involving molecules with up to 92 atoms. Various iterative and non-iterative approaches are compared, using different energy thresholds for distant pair selection. It is demonstrated that the simple non-iterative dipole-dipole approximation, which has been used in several recent pair natural orbitals (PNO)-LMP2 and PNO-LCCSD (local coupled-cluster with singles and doubles) methods, may underestimate the distant pair energies by up to 50% and can lead to significant errors in relative energies, unless very tight thresholds are used. The accuracy can be much improved by including higher multipole orders and by optimizing the distant pair amplitudes iteratively along with all other amplitudes. A new approach is presented in which very small special PNO domains for distant pairs are used in the iterative approach. This reduces the number of distant pair amplitudes by 3 orders of magnitude and keeps the additional computational effort for the iterative optimization of distant pair amplitudes minimal.

  14. Communication: Multipole approximations of distant pair energies in local correlation methods with pair natural orbitals.

    PubMed

    Werner, Hans-Joachim

    2016-11-28

    The accuracy of multipole approximations for distant pair energies in local second-order Møller-Plesset perturbation theory (LMP2) as introduced by Hetzer et al. [Chem. Phys. Lett. 290, 143 (1998)] is investigated for three chemical reactions involving molecules with up to 92 atoms. Various iterative and non-iterative approaches are compared, using different energy thresholds for distant pair selection. It is demonstrated that the simple non-iterative dipole-dipole approximation, which has been used in several recent pair natural orbitals (PNO)-LMP2 and PNO-LCCSD (local coupled-cluster with singles and doubles) methods, may underestimate the distant pair energies by up to 50% and can lead to significant errors in relative energies, unless very tight thresholds are used. The accuracy can be much improved by including higher multipole orders and by optimizing the distant pair amplitudes iteratively along with all other amplitudes. A new approach is presented in which very small special PNO domains for distant pairs are used in the iterative approach. This reduces the number of distant pair amplitudes by 3 orders of magnitude and keeps the additional computational effort for the iterative optimization of distant pair amplitudes minimal.

  15. Low-energy extensions of the eikonal approximation to heavy-ion scattering

    SciTech Connect

    Aguiar, C.E.; Zardi, F.; Vitturi, A.

    1997-09-01

    We discuss different schemes devised to extend the eikonal approximation to the regime of low bombarding energies (below 50 MeV per nucleon) in heavy-ion collisions. From one side we consider the first- and second-order corrections derived from Wallace's expansion. As an alternative approach we examine the procedure of accounting for the distortion of the eikonal straight-line trajectory by shifting the impact parameter to the corresponding classical turning point. The two methods are tested for different combinations of colliding systems and bombarding energies, by comparing the angular distributions they provide with the exact solution of the scattering problem. We find that the best results are obtained with the shifted trajectories, the Wallace expansion showing a slow convergence at low energies, in particular for heavy systems characterized by a strong Coulomb field.
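
    The shifted-trajectory prescription replaces each straight-line impact parameter b by the distance of closest approach of the corresponding classical Coulomb orbit. A minimal sketch of that kinematic substitution, with the Sommerfeld parameter and wave number as inputs, is given below; it reproduces only the shift itself, not the full eikonal calculation of the paper.

```python
import numpy as np

def coulomb_turning_point(b, k, eta):
    """Distance of closest approach of a classical Coulomb trajectory.

    b   : impact parameter (fm)
    k   : asymptotic wave number (1/fm)
    eta : Sommerfeld parameter Z1*Z2*e^2/(hbar*v), dimensionless
    """
    return (eta + np.sqrt(eta**2 + (k * b) ** 2)) / k

# The shift matters most at low bombarding energy, i.e. large eta:
b = np.linspace(0.0, 12.0, 7)                 # fm
for eta in (0.5, 5.0, 20.0):
    shifted = coulomb_turning_point(b, k=1.0, eta=eta)
    print(f"eta={eta:5.1f}:", np.round(shifted, 2))
```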

  16. Minimum energy and the end of the inspiral in the post-Newtonian approximation

    NASA Astrophysics Data System (ADS)

    Cabero, Miriam; Nielsen, Alex B.; Lundgren, Andrew P.; Capano, Collin D.

    2017-03-01

    The early inspiral phase of a compact binary coalescence is well modeled by the post-Newtonian (PN) approximation to the orbital energy and gravitational wave flux. The transition from the inspiral phase to the plunge can be defined by the minimum energy circular orbit (MECO). In the extreme mass-ratio limit the PN energy equals the energy of the (post-Newtonian expanded) exact Kerr solution. However, for comparable-mass systems the MECO of the PN energy does not exist when the bodies have large spins, and no analytical solution for the end of the inspiral is known. By including the exact Kerr limit, we extract a well-defined minimum of the orbital energy beyond which the plunge or merger occurs. We study the hybrid condition for a number of cases involving both black holes and neutron stars and compare it to other commonly employed definitions. Our method can be used for any known order of the post-Newtonian series and enables the MECO condition to be used to define the end of the inspiral phase for highly spinning, comparable-mass systems.

  17. Stabilization of quantum energy flows within the approximate quantum trajectory approach.

    PubMed

    Garashchuk, Sophya; Rassolov, Vitaly

    2007-10-18

    The hydrodynamic, or the de Broglie-Bohm, formulation provides an alternative to the conventional time-dependent Schrödinger equation based on quantum trajectories. The trajectory dynamics scales favorably with the system size, but it is, generally, unstable due to singularities in the exact quantum potential. The approximate quantum potential based on the fitting of the nonclassical component of the momentum operator in terms of a small basis is numerically stable but can lead to inaccurate large net forces in bound systems. We propose to compensate errors in the approximate quantum potential by applying a semiempirical friction-like force. This significantly improves the description of zero-point energy in bound systems. Examples are given for one-dimensional models relevant to nuclear dynamics.

  18. Subtraction method in the second random-phase approximation: First applications with a Skyrme energy functional

    NASA Astrophysics Data System (ADS)

    Gambacurta, D.; Grasso, M.; Engel, J.

    2015-09-01

    We make use of a subtraction procedure, introduced to overcome double-counting problems in beyond-mean-field theories, in the second random-phase-approximation (SRPA) for the first time. This procedure guarantees the stability of the SRPA (so that all excitation energies are real). We show that the method fits perfectly into nuclear density-functional theory. We illustrate applications to the monopole and quadrupole response and to low-lying 0+ and 2+ states in the nucleus 16O . We show that the subtraction procedure leads to (i) results that are weakly cutoff dependent and (ii) a considerable reduction of the SRPA downwards shift with respect to the random-phase approximation (RPA) spectra (systematically found in all previous applications). This implementation of the SRPA model will allow a reliable analysis of the effects of two particle-two hole configurations (2p2h) on the excitation spectra of medium-mass and heavy nuclei.

  19. A Different View of Solar Spectral Irradiance Variations: Modeling Total Energy over Six-Month Intervals.

    PubMed

    Woods, Thomas N; Snow, Martin; Harder, Jerald; Chapman, Gary; Cookson, Angela

    A different approach to studying solar spectral irradiance (SSI) variations, without the need for long-term (multi-year) instrument degradation corrections, is examining the total energy of the irradiance variation during 6-month periods. This duration is selected because a solar active region typically appears suddenly and then takes 5 to 7 months to decay and disperse back into the quiet-Sun network. The solar outburst energy, which is defined as the irradiance integrated over the 6-month period and thus includes the energy from all phases of active region evolution, could be considered the primary cause for the irradiance variations. Because solar cycle variation is the consequence of multiple active region outbursts, understanding the energy spectral variation may provide a reasonable estimate of the variations for the 11-year solar activity cycle. The moderate-term (6-month) variations from the Solar Radiation and Climate Experiment (SORCE) instruments can be decomposed into positive (in-phase with solar cycle) and negative (out-of-phase) contributions by modeling the variations using the San Fernando Observatory (SFO) facular excess and sunspot deficit proxies, respectively. These excess and deficit variations are fit over 6-month intervals every 2 months over the mission, and these fitted variations are then integrated over time for the 6-month energy. The dominant component indicates which wavelengths are in-phase and which are out-of-phase with solar activity. The results from this study indicate out-of-phase variations for the 1400 - 1600 nm range, with all other wavelengths having in-phase variations.

  20. A Different View of Solar Spectral Irradiance Variations: Modeling Total Energy over Six-Month Intervals

    NASA Astrophysics Data System (ADS)

    Woods, Thomas N.; Snow, Martin; Harder, Jerald; Chapman, Gary; Cookson, Angela

    2015-10-01

    A different approach to studying solar spectral irradiance (SSI) variations, without the need for long-term (multi-year) instrument degradation corrections, is examining the total energy of the irradiance variation during 6-month periods. This duration is selected because a solar active region typically appears suddenly and then takes 5 to 7 months to decay and disperse back into the quiet-Sun network. The solar outburst energy, which is defined as the irradiance integrated over the 6-month period and thus includes the energy from all phases of active region evolution, could be considered the primary cause for the irradiance variations. Because solar cycle variation is the consequence of multiple active region outbursts, understanding the energy spectral variation may provide a reasonable estimate of the variations for the 11-year solar activity cycle. The moderate-term (6-month) variations from the Solar Radiation and Climate Experiment (SORCE) instruments can be decomposed into positive (in-phase with solar cycle) and negative (out-of-phase) contributions by modeling the variations using the San Fernando Observatory (SFO) facular excess and sunspot deficit proxies, respectively. These excess and deficit variations are fit over 6-month intervals every 2 months over the mission, and these fitted variations are then integrated over time for the 6-month energy. The dominant component indicates which wavelengths are in-phase and which are out-of-phase with solar activity. The results from this study indicate out-of-phase variations for the 1400 - 1600 nm range, with all other wavelengths having in-phase variations.

  1. Energy transfer in structured and unstructured environments: Master equations beyond the Born-Markov approximations

    SciTech Connect

    Iles-Smith, Jake; Dijkstra, Arend G.; Lambert, Neill; Nazir, Ahsan

    2016-01-28

    We explore excitonic energy transfer dynamics in a molecular dimer system coupled to both structured and unstructured oscillator environments. By extending the reaction coordinate master equation technique developed by Iles-Smith et al. [Phys. Rev. A 90, 032114 (2014)], we go beyond the commonly used Born-Markov approximations to incorporate system-environment correlations and the resultant non-Markovian dynamical effects. We obtain energy transfer dynamics for both underdamped and overdamped oscillator environments that are in perfect agreement with the numerical hierarchical equations of motion over a wide range of parameters. Furthermore, we show that the Zusman equations, which may be obtained in a semiclassical limit of the reaction coordinate model, are often incapable of describing the correct dynamical behaviour. This demonstrates the necessity of properly accounting for quantum correlations generated between the system and its environment when the Born-Markov approximations no longer hold. Finally, we apply the reaction coordinate formalism to the case of a structured environment comprising of both underdamped (i.e., sharply peaked) and overdamped (broad) components simultaneously. We find that though an enhancement of the dimer energy transfer rate can be obtained when compared to an unstructured environment, its magnitude is rather sensitive to both the dimer-peak resonance conditions and the relative strengths of the underdamped and overdamped contributions.

  2. Proposal for determining the energy content of gravitational waves by using approximate symmetries of differential equations

    SciTech Connect

    Hussain, Ibrar; Qadir, Asghar; Mahomed, F. M.

    2009-06-15

    Since gravitational wave spacetimes are time-varying vacuum solutions of Einstein's field equations, there is no unambiguous means to define their energy content. However, Weber and Wheeler had demonstrated that they do impart energy to test particles. There have been various proposals to define the energy content, but they have not met with great success. Here we propose a definition using 'slightly broken' Noether symmetries. We check whether this definition is physically acceptable. The procedure adopted is to appeal to 'approximate symmetries' as defined in Lie analysis and use them in the limit of the exact symmetry holding. A problem is noted with the use of the proposal for plane-fronted gravitational waves. To attain a better understanding of the implications of this proposal we also use an artificially constructed time-varying nonvacuum metric and evaluate its Weyl and stress-energy tensors so as to obtain the gravitational and matter components separately and compare them with the energy content obtained by our proposal. The procedure is also used for cylindrical gravitational wave solutions. The usefulness of the definition is demonstrated by the fact that it leads to a result on whether gravitational waves suffer self-damping.

  3. Validity of the Spin-Wave Approximation for the Free Energy of the Heisenberg Ferromagnet

    NASA Astrophysics Data System (ADS)

    Correggi, Michele; Giuliani, Alessandro; Seiringer, Robert

    2015-10-01

    We consider the quantum ferromagnetic Heisenberg model in three dimensions, for all spins S ≥ 1/2. We rigorously prove the validity of the spin-wave approximation for the excitation spectrum, at the level of the first non-trivial contribution to the free energy at low temperatures. Our proof comes with explicit, constructive upper and lower bounds on the error term. It uses in an essential way the bosonic formulation of the model in terms of the Holstein-Primakoff representation. In this language, the model describes interacting bosons with a hard-core on-site repulsion and a nearest-neighbor attraction. This attractive interaction makes the lower bound on the free energy particularly tricky: the key idea there is to prove a differential inequality for the two-particle density, which is thereby shown to be smaller than the probability density of a suitably weighted two-particle random process on the lattice.

  4. Cohesion and promotion energies in the transition metals: Implications of the local-density approximation

    NASA Astrophysics Data System (ADS)

    Watson, R. E.; Fernando, G. W.; Weinert, M.; Wang, Y. J.; Davenport, J. W.

    1991-04-01

    The accuracy of the local-density (LDA) or local-spin-density (LSDA) approximations when applied to transition metals is of great concern. Estimates of the cohesive energy compare the total energy of the solid with that of the free atom. This involves choosing the reference state of the free atom, which, as a rule, will not be the free atom's ground state in LDA or LSDA. Comparing one reference state versus another, e.g., d^(n-1)s vs d^(n-2)s^2 for a transition metal, corresponds to calculating an s-d promotion energy Δ, which may be compared with experiment. Gunnarsson and Jones (GJ) [Phys. Rev. B 31, 7588 (1985)] found for the 3d row that the calculated Δ displayed systematic errors, which they attributed to a difference in error within the LSDA in the treatment of the coupling of the outer-core electrons with the d versus non-d valence electrons. This study has been extended to relativistic calculations for the 3d, 4d, and 5d rows and for other promotions. The situation is more complicated than suggested by GJ, and its implications for cohesive energy estimates will be discussed.

  5. Development of approximate method to analyze the characteristics of latent heat thermal energy storage system

    SciTech Connect

    Saitoh, T.S.; Hoshi, Akira

    1999-07-01

    The Third Conference of the Parties to the U.N. Framework Convention on Climate Change (COP3), held last December in Kyoto, urged the industrialized nations to reduce carbon dioxide (CO2) emissions by 5.2 percent (on average) below the 1990 level during the period between 2008 and 2012 (Kyoto protocol). This implies that even the most advanced countries, such as the US, Japan, and the EU, will need to implement drastic policies and overcome many market barriers. One idea that leads to a path of low carbon intensity is to adopt an energy storage concept. One reason the efficiency of conventional energy systems has been relatively low is the lack of an energy storage subsystem. Most past energy systems, for example air-conditioning systems, do not have an energy storage part, and such systems usually operate with low energy efficiency. First, the effect of reducing CO2 emissions was examined under the assumption that LHTES subsystems were incorporated in all residential and building air-conditioning systems. Another field of application of the LHTES is, of course, transportation. Future vehicles will be electric or hybrid vehicles; however, these vehicles will need considerable energy for air-conditioning. The LHTES system can provide enough energy for this purpose by storing nighttime electricity or heat rejected from the radiator or motor. Melting and solidification of a phase change material (PCM) in a capsule are of practical importance in latent heat thermal energy storage (LHTES) systems, which are considered very promising for reducing the peak demand of electricity in the summer season and for reducing carbon dioxide (CO2) emissions. Two melting modes are involved in melting in capsules: one is close-contact melting between the solid bulk and the capsule wall, and the other is natural convection melting in the liquid (melt) region. Close-contact melting processes for a single enclosure have been solved using several

  6. Directed energy transfer in films of CdSe quantum dots: beyond the point dipole approximation.

    PubMed

    Zheng, Kaibo; Žídek, Karel; Abdellah, Mohamed; Zhu, Nan; Chábera, Pavel; Lenngren, Nils; Chi, Qijin; Pullerits, Tõnu

    2014-04-30

    Understanding of Förster resonance energy transfer (FRET) in thin films composed of quantum dots (QDs) is of fundamental and technological significance in optimal design of QD based optoelectronic devices. The separation between QDs in the densely packed films is usually smaller than the size of QDs, so that the simple point-dipole approximation, widely used in the conventional approach, can no longer offer quantitative description of the FRET dynamics in such systems. Here, we report the investigations of the FRET dynamics in densely packed films composed of multisized CdSe QDs using ultrafast transient absorption spectroscopy and theoretical modeling. Pairwise interdot transfer time was determined in the range of 1.5 to 2 ns by spectral analyses which enable separation of the FRET contribution from intrinsic exciton decay. A rational model is suggested by taking into account the distribution of the electronic transition densities in the dots and using the film morphology revealed by AFM images. The FRET dynamics predicted by the model are in good quantitative agreement with experimental observations without adjustable parameters. Finally, we use our theoretical model to calculate dynamics of directed energy transfer in ordered multilayer QD films, which we also observe experimentally. The Monte Carlo simulations reveal that three ideal QD monolayers can provide exciton funneling efficiency above 80% from the most distant layer. Thereby, utilization of directed energy transfer can significantly improve light harvesting efficiency of QD devices.
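
    For contrast with the transition-density treatment above, the conventional point-dipole FRET rate depends only on the Förster radius and the center-to-center distance. The sketch below states that baseline, which is precisely the approximation argued to break down when the interdot separation becomes comparable to the dot size; the Förster radius and donor lifetime are illustrative values, not parameters from the paper.

```python
def fret_rate_per_ns(r_nm, r0_nm, tau_donor_ns):
    """Point-dipole Foerster rate k = (1/tau_D) * (R0/r)^6."""
    return (r0_nm / r_nm) ** 6 / tau_donor_ns

def fret_efficiency(r_nm, r0_nm):
    """E = R0^6 / (R0^6 + r^6), independent of the donor lifetime."""
    return r0_nm**6 / (r0_nm**6 + r_nm**6)

R0, TAU_D = 5.0, 20.0   # nm, ns (illustrative)
for r in (3.0, 5.0, 8.0):
    print(f"r={r:.1f} nm  k={fret_rate_per_ns(r, R0, TAU_D):7.3f}/ns  "
          f"E={fret_efficiency(r, R0):.3f}")
```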

  7. Two-Body Approximations in the Design of Low-Energy Transfers Between Galilean Moons

    NASA Astrophysics Data System (ADS)

    Fantino, Elena; Castelli, Roberto

    Over the past two decades, the robotic exploration of the Solar System has reached the moons of the giant planets. In the case of Jupiter, strong scientific interest in its icy moons has motivated important space missions (e.g., ESA's JUICE and NASA's Europa Mission). A major issue in this context is the design of efficient trajectories enabling satellite tours, i.e., visiting the several moons in succession. Concepts like the Petit Grand Tour and the Multi-Moon Orbiter have been developed for this purpose, and the literature on the subject is quite rich. The models adopted are the two-body problem (with the patched-conics approximation and gravity assists) and the three-body problem (giving rise to the so-called low-energy transfers, LETs). In this contribution, we deal with the connection between two moons, Europa and Ganymede, and we investigate a two-body approximation of trajectories originating from the stable/unstable invariant manifolds of the two circular restricted three-body problems, i.e., Jupiter-Ganymede and Jupiter-Europa. We develop ad hoc algorithms to determine the intersections of the resulting elliptical arcs and the magnitude of the maneuver at the intersections. We provide a means to perform very fast and accurate evaluations of the minimum-cost trajectories between the two moons. Eventually, we validate the methodology by comparison with numerical integrations in the three-body problem.

  8. Nuclear energy surfaces at high spin in the A ≈ 180 mass region

    SciTech Connect

    Chasman, R.R.; Egido, J.L.; Robledo, L.M.

    1995-08-01

    We are studying nuclear energy surfaces at high spin, with an emphasis on very deformed shapes using two complementary methods: (1) the Strutinsky method for making surveys of mass regions and (2) Hartree-Fock calculations using a Gogny interaction to study specific nuclei that appear to be particularly interesting from the Strutinsky method calculations. The great advantage of the Strutinsky method is that one can study the energy surfaces of many nuclides (≈300) with a single set of calculations. Although the Hartree-Fock calculations are quite time-consuming relative to the Strutinsky calculations, they determine the shape at a minimum without being limited to a few deformation modes. We completed a study of ¹⁸²Os using both approaches. In our cranked Strutinsky calculations, which incorporate a necking mode deformation in addition to quadrupole and hexadecapole deformations, we found three well-separated, deep, strongly deformed minima. The first is characterized by nuclear shapes with axis ratios of 1.5:1; the second by axis ratios of 2.2:1 and the third by axis ratios of 2.9:1. We also studied this nuclide with the density-dependent Gogny interaction at I = 60 using the Hartree-Fock method and found minima characterized by shapes with axis ratios of 1.5:1 and 2.2:1. A comparison of the shapes at these minima, generated in the two calculations, shows that the necking mode of deformation is extremely useful for generating nuclear shapes at large deformation that minimize the energy. The Hartree-Fock calculations are being extended to larger deformations in order to further explore the energy surface in the region of the 2.9:1 minimum.

  9. Discrete Dipole Approximation for Low-Energy Photoelectron Emission from NaCl Nanoparticles

    SciTech Connect

    Berg, Matthew J.; Wilson, Kevin R.; Sorensen, Chris; Chakrabarti, Amit; Ahmed, Musahid

    2011-09-22

    This work presents a model for the photoemission of electrons from sodium chloride nanoparticles 50-500 nm in size, illuminated by vacuum ultraviolet light with energy ranging from 9.4-10.9 eV. The discrete dipole approximation is used to calculate the electromagnetic field inside the particles, from which the two-dimensional angular distribution of emitted electrons is simulated. The emission is found to favor the particle's geometrically illuminated side, and this asymmetry is compared to previous measurements performed at the Lawrence Berkeley National Laboratory. By modeling the nanoparticles as spheres, the Berkeley group is able to semi-quantitatively account for the observed asymmetry. Here however, the particles are modeled as cubes, which is closer to their actual shape, and the interaction of an emitted electron with the particle surface is also considered. The end result shows that the emission asymmetry for these low-energy electrons is more sensitive to the particle-surface interaction than to the specific particle shape, i.e., a sphere or cube.

  10. Free energy of contact formation in proteins: Efficient computation in the elastic network approximation

    NASA Astrophysics Data System (ADS)

    Hamacher, Kay

    2011-07-01

    Biomolecular simulations have become a major tool in understanding biomolecules and their complexes. However, one can typically only investigate a few mutants or scenarios due to the severe computational demands of such simulations, leading to a great interest in method development to overcome this restriction. One way to achieve this is to reduce the complexity of the systems by an approximation of the forces acting upon the constituents of the molecule. The harmonic approximation used in elastic network models simplifies the physical complexity to the most reduced dynamics of these molecular systems. The reduced polymer modeled this way is typically comprised of mass points representing coarse-grained versions of, e.g., amino acids. In this work, we show how the computation of free energy contributions of contacts between two residues within the molecule can be reduced to a simple lookup operation in a precomputable matrix. Being able to compute such contributions is of great importance: protein design or molecular evolution changes introduce perturbations to these pair interactions, so we need to understand their impact. Perturbation to the interactions occurs due to randomized and fixated changes (in molecular evolution) or designed modifications of the protein structures (in bioengineering). These perturbations are modifications in the topology and the strength of the interactions modeled by the elastic network models. We apply the new algorithm to (1) the bovine trypsin inhibitor, a well-known enzyme in biomedicine, and show the connection to folding properties and the hydrophobic collapse hypothesis and (2) the serine proteinase inhibitor CI-2 and show the correlation to Φ values to characterize folding importance. Furthermore, we discuss the computational complexity and show empirical results for the average case, sampled over a library of 77 structurally diverse proteins. We found a relative speedup of up to 10 000-fold for large proteins with respect to
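
    The quantity being accelerated can be defined by brute force: build the elastic-network (Kirchhoff) matrix from the contact map and take the change in harmonic vibrational free energy when a single contact spring is removed. The sketch below does this by full rediagonalization; the paper's contribution is replacing this expensive per-contact loop with a precomputed-matrix lookup. The toy coordinates, cutoff, and spring constant are illustrative, and additive constants in the free energy are dropped since they cancel in the difference.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 60
# Toy "protein": a random 3D walk of n beads standing in for C-alpha atoms.
coords = np.cumsum(rng.normal(scale=1.0, size=(n, 3)), axis=0)

def kirchhoff(coords, cutoff=3.0, gamma=1.0):
    d = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
    contact = (d < cutoff) & ~np.eye(len(coords), dtype=bool)
    g = -gamma * contact.astype(float)
    np.fill_diagonal(g, -g.sum(axis=1))
    return g, contact

def harmonic_free_energy(g, kT=1.0):
    """F = (kT/2) * sum(log lam) over nonzero modes, constants dropped."""
    lam = np.linalg.eigvalsh(g)
    return 0.5 * kT * np.sum(np.log(lam[lam > 1e-8]))

g, contact = kirchhoff(coords)
f_full = harmonic_free_energy(g)

# Free-energy contribution of one long-range contact (i, j): remove its
# spring (off-diagonal entries plus the two diagonal counts) and recompute.
i, j = map(int, np.argwhere(np.triu(contact, 2))[0])
g2 = g.copy()
g2[i, j] = g2[j, i] = 0.0
g2[i, i] -= 1.0
g2[j, j] -= 1.0
print(f"contact ({i},{j}) contributes dF =", f_full - harmonic_free_energy(g2))
```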

  11. Approximate Time to Steady-state Resting Energy Expenditure Using Indirect Calorimetry in Young, Healthy Adults.

    PubMed

    Popp, Collin J; Tisch, Jocelyn J; Sakarcan, Kenan E; Bridges, William C; Jesch, Elliot D

    2016-01-01

    Indirect calorimetry (IC) measurements to estimate resting energy expenditure (REE) necessitate a stable measurement period, or steady state (SS). There is limited evidence on the time needed to reach SS in young, healthy adults. The aims of this prospective study were to determine the approximate time necessary to reach SS using open-circuit IC and to establish the appropriate duration of SS needed to estimate REE. One hundred young, healthy participants (54 males and 46 females; age = 20.6 ± 2.1 years; body weight = 73.6 ± 16.3 kg; height 172.5 ± 9.3 cm; BMI = 24.5 ± 3.8 kg/m(2)) completed IC measurement for approximately 30 min while the volume of oxygen (VO2) and volume of carbon dioxide (VCO2) were collected. SS was defined by variations in the VO2 and VCO2 of ≤10% coefficient of variation (%CV) over a period of five consecutive minutes. The 30-min IC measurement was divided into six 5-min segments (S1, S2, S3, S4, S5, and S6). The results show that SS was achieved during S2 (%CV = 6.81 ± 3.2%), and the %CV continued to meet the SS criteria for the duration of the IC measurement (S3 = 8.07 ± 4.4%, S4 = 7.93 ± 3.7%, S5 = 7.75 ± 4.1%, and S6 = 8.60 ± 4.6%). The current study found that in a population of young, healthy adults the duration of the IC measurement period could be a minimum of 10 min: the first 5-min segment is discarded, and SS occurs by the second 5-min segment.
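
    The SS criterion used here is straightforward to operationalize: slide a 5-min window over the minute-by-minute VO2 and VCO2 series and take the first window in which both gases stay within the 10% CV bound. A minimal sketch, with synthetic data standing in for measurements:

```python
import numpy as np

def percent_cv(x):
    return 100.0 * np.std(x, ddof=1) / np.mean(x)

def first_steady_state(vo2, vco2, window=5, threshold=10.0):
    """Start minute of the first window where both gases meet the %CV
    criterion, or None if steady state is never reached."""
    for start in range(len(vo2) - window + 1):
        s = slice(start, start + window)
        if percent_cv(vo2[s]) <= threshold and percent_cv(vco2[s]) <= threshold:
            return start
    return None

rng = np.random.default_rng(0)
minutes = 30
settle = np.exp(-np.arange(minutes) / 3.0)                  # initial drift
vo2 = 0.30 + 0.15 * settle + rng.normal(0, 0.01, minutes)   # L/min
vco2 = 0.25 + 0.12 * settle + rng.normal(0, 0.01, minutes)  # L/min
print("steady state first reached at minute", first_steady_state(vo2, vco2))
```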

  12. Approximate Time to Steady-state Resting Energy Expenditure Using Indirect Calorimetry in Young, Healthy Adults

    PubMed Central

    Popp, Collin J.; Tisch, Jocelyn J.; Sakarcan, Kenan E.; Bridges, William C.; Jesch, Elliot D.

    2016-01-01

    Indirect calorimetry (IC) measurements to estimate resting energy expenditure (REE) necessitate a stable measurement period, or steady state (SS). There is limited evidence on the time needed to reach SS in young, healthy adults. The aims of this prospective study were to determine the approximate time necessary to reach SS using open-circuit IC and to establish the appropriate duration of SS needed to estimate REE. One hundred young, healthy participants (54 males and 46 females; age = 20.6 ± 2.1 years; body weight = 73.6 ± 16.3 kg; height 172.5 ± 9.3 cm; BMI = 24.5 ± 3.8 kg/m2) completed IC measurement for approximately 30 min while the volume of oxygen (VO2) and volume of carbon dioxide (VCO2) were collected. SS was defined by variations in the VO2 and VCO2 of ≤10% coefficient of variation (%CV) over a period of five consecutive minutes. The 30-min IC measurement was divided into six 5-min segments (S1, S2, S3, S4, S5, and S6). The results show that SS was achieved during S2 (%CV = 6.81 ± 3.2%), and the %CV continued to meet the SS criteria for the duration of the IC measurement (S3 = 8.07 ± 4.4%, S4 = 7.93 ± 3.7%, S5 = 7.75 ± 4.1%, and S6 = 8.60 ± 4.6%). The current study found that in a population of young, healthy adults the duration of the IC measurement period could be a minimum of 10 min: the first 5-min segment is discarded, and SS occurs by the second 5-min segment. PMID:27857943

  13. High-intensity interval exercise induces 24-h energy expenditure similar to traditional endurance exercise despite reduced time commitment.

    PubMed

    Skelly, Lauren E; Andrews, Patricia C; Gillen, Jenna B; Martin, Brian J; Percival, Michael E; Gibala, Martin J

    2014-07-01

    Subjects performed high-intensity interval training (HIIT) and continuous moderate-intensity training (END) to evaluate 24-h oxygen consumption. Oxygen consumption during HIIT was lower versus END; however, total oxygen consumption over 24 h was similar. These data demonstrate that HIIT and END induce similar 24-h energy expenditure, which may explain the comparable changes in body composition reported despite lower total training volume and time commitment.

  14. Development of generalized potential-energy surfaces using many-body expansions, neural networks, and moiety energy approximations

    NASA Astrophysics Data System (ADS)

    Malshe, M.; Narulkar, R.; Raff, L. M.; Hagan, M.; Bukkapatnam, S.; Agrawal, P. M.; Komanduri, R.

    2009-05-01

    A general method for the development of potential-energy hypersurfaces is presented. The method combines a many-body expansion to represent the potential-energy surface with two-layer neural networks (NN) for each M-body term in the summations. The total number of NNs required is significantly reduced by employing a moiety energy approximation. An algorithm is presented that efficiently adjusts all the coupled NN parameters to the database for the surface. Application of the method to four different systems of increasing complexity shows that the fitting accuracy of the method is good to excellent. For some cases, it exceeds that available by other methods currently in the literature. The method is illustrated by fitting large databases of ab initio energies for Si_n (n = 3, 4, ..., 7) clusters obtained from density functional theory calculations and for vinyl bromide (C2H3Br) and all products for dissociation into six open reaction channels (12 if the reverse reactions are counted as separate open channels) that include C-H and C-Br bond scissions, three-center HBr dissociation, and three-center H2 dissociation. The vinyl bromide database comprises the ab initio energies of 71,969 configurations computed at the MP4(SDQ) level with a 6-31G(d,p) basis set for the carbon and hydrogen atoms and Huzinaga's (4333/433/4) basis set augmented with split outer s and p orbitals (43321/4321/4) and a polarization f orbital with an exponent of 0.5 for the bromine atom. It is found that an expansion truncated after the three-body terms is sufficient to fit the Si5 system with a mean absolute testing set error of 5.693×10⁻⁴ eV. Expansions truncated after the four-body terms for Si_n (n = 3, 4, 5) and Si_n (n = 3, 4, ..., 7) provide fits whose mean absolute testing set errors are 0.0056 and 0.0212 eV, respectively. For vinyl bromide, a many-body expansion truncated after the four-body terms provides fitting accuracy with mean absolute testing set errors that range between 0.0782 and 0.0808 eV. These

  15. Effect of initial phase on error in electron energy obtained using paraxial approximation for a focused laser pulse in vacuum

    SciTech Connect

    Singh, Kunwar Pal; Arya, Rashmi; Malik, Anil K.

    2015-09-14

    We have investigated the effect of the initial phase on the error in electron energy obtained using the paraxial approximation to study electron acceleration by a focused laser pulse in vacuum, using a three-dimensional test-particle simulation code. The error is obtained by comparing the energy of the electron for the paraxial-approximation and seventh-order-corrected descriptions of the fields of a Gaussian laser. The paraxial approximation predicts the wrong laser divergence and the wrong electron escape time from the pulse, which leads to the prediction of higher energy. The error shows a strong phase dependence for electrons lying along the axis of the laser for a linearly polarized laser pulse. The relative error may be significant for some specific values of the initial phase even at moderate values of the laser spot size. The error does not show an initial-phase dependence for a circularly polarized laser pulse.

  16. Excitation energies and potential energy curves for the 19 excited electronic terms of CH: Efficiency examination of the multireference first-order polarization propagator approximation

    NASA Astrophysics Data System (ADS)

    Seleznev, Alexey O.; Khrustov, Vladimir F.; Stepanov, Nikolay F.

    2013-11-01

    The attainability of a uniform precision level for estimates of electronic transition characteristics through the multireference first-order polarization propagator approximation (MR-FOPPA) was examined as the basis set is extended, using the CH ion as an example. The transitions from the ground electronic state to the 19 excited electronic terms were considered. Balanced approximations for (i) the transition energies to the studied excited states and (ii) the forms and relative dispositions of their potential energy curves were attained in the 3-21G and 6-311G(d,p) basis sets. In both basis sets, a balanced approximation for the corresponding transition moments was not achieved.

  17. Casimir bag energy in the stochastic approximation to the pure QCD vacuum

    SciTech Connect

    Fosco, C. D.; Oxman, L. E.

    2007-01-15

    We study the Casimir contribution to the bag energy coming from gluon field fluctuations, within the context of the stochastic vacuum model of pure QCD. After formulating the problem in terms of the generating functional of field strength cumulants, we argue that the resulting predictions about the Casimir energy are compatible with the phenomenologically required bag energy term.

  18. Testing the nonlocal kinetic energy functional of an inhomogeneous, two-dimensional degenerate Fermi gas within the average density approximation

    NASA Astrophysics Data System (ADS)

    Towers, J.; van Zyl, B. P.; Kirkby, W.

    2015-08-01

    In a recent paper [B. P. van Zyl et al., Phys. Rev. A 89, 022503 (2014), 10.1103/PhysRevA.89.022503], the average density approximation (ADA) was implemented to develop a parameter-free, nonlocal kinetic energy functional to be used in the orbital-free density functional theory of an inhomogeneous, two-dimensional (2D) Fermi gas. In this work, we provide a detailed comparison of self-consistent calculations within the ADA with the exact results of the Kohn-Sham density functional theory and the elementary Thomas-Fermi (TF) approximation. We demonstrate that the ADA for the 2D kinetic energy functional works very well under a wide variety of confinement potentials, even for relatively small particle numbers. Remarkably, the TF approximation for the kinetic energy functional, without any gradient corrections, also yields good agreement with the exact kinetic energy for all confining potentials considered, although at the expense of the spatial and kinetic energy densities exhibiting poor pointwise agreement, particularly near the TF radius. Our findings illustrate that the ADA kinetic energy functional yields accurate results for both the local and global equilibrium properties of an inhomogeneous 2D Fermi gas, without the need for any fitting parameters.

  19. Interval arithmetic in calculations

    NASA Astrophysics Data System (ADS)

    Bairbekova, Gaziza; Mazakov, Talgat; Djomartova, Sholpan; Nugmanova, Salima

    2016-10-01

    Interval arithmetic is the mathematical structure which, for real intervals, defines operations analogous to ordinary arithmetic ones. This field of mathematics is also called interval analysis or interval calculations. The model is convenient for investigating various applied objects: quantities whose approximate values are known; quantities obtained during calculations whose values are not exact because of rounding errors; and random quantities. As a whole, the idea of interval calculations is the use of intervals as basic data objects. In this paper, we consider the definition of interval mathematics, investigate its properties, prove a theorem, and show the efficiency of the new interval arithmetic. Besides, we briefly review the works devoted to interval analysis and observe the basic tendencies in the development of interval analysis and interval calculations.
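
    The basic operations the abstract refers to are short enough to state exactly. A minimal sketch of interval arithmetic follows; it ignores outward rounding of the endpoint computations, which a production implementation must add to guarantee enclosure.

```python
class Interval:
    """Closed interval [lo, hi] with the standard arithmetic rules."""

    def __init__(self, lo, hi):
        if lo > hi:
            raise ValueError("empty interval")
        self.lo, self.hi = lo, hi

    def __add__(self, o):
        return Interval(self.lo + o.lo, self.hi + o.hi)

    def __sub__(self, o):
        return Interval(self.lo - o.hi, self.hi - o.lo)

    def __mul__(self, o):
        p = (self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi)
        return Interval(min(p), max(p))

    def __truediv__(self, o):
        if o.lo <= 0.0 <= o.hi:
            raise ZeroDivisionError("divisor interval contains 0")
        return self * Interval(1.0 / o.hi, 1.0 / o.lo)

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

# Example: every true value of x*(y - z) for inputs known only to within
# the stated bounds lies inside the printed interval.
x, y, z = Interval(1.9, 2.1), Interval(4.0, 4.2), Interval(1.0, 1.1)
print(x * (y - z))   # [5.51, 6.72] up to floating-point rounding
```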

  20. A new approach to detect congestive heart failure using Teager energy nonlinear scatter plot of R-R interval series.

    PubMed

    Kamath, Chandrakar

    2012-09-01

    A novel approach to distinguishing congestive heart failure (CHF) subjects from healthy subjects is proposed. Heart rate variability (HRV) is impaired in CHF subjects. In this work, hypothesizing that capturing the moment-to-moment nonlinear dynamics of HRV will reveal cardiac patterning, we construct the nonlinear scatter plot for the Teager energy of the R-R interval series. The key feature of the Teager energy is that it models the energy of the source that generated the signal rather than the energy of the signal itself. Hence, any deviations in the genesis of HRV, by complex interactions of hemodynamic, electrophysiological, and humoral variables, as well as by autonomic and central nervous regulation, are manifested in the Teager energy function. Comparison of the Teager energy scatter plot with the second-order difference plot (SODP) for normal and CHF subjects reveals significant differences qualitatively and quantitatively. We introduce the concept of curvilinearity for central tendency measures of the plots and define a radial distance index (RDI) that reveals the efficacy of the Teager energy scatter plot over SODP in separating CHF subjects from healthy subjects. The k-nearest neighbor classifier with RDI as the feature achieved an almost 100% classification rate.
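
    The discrete Teager energy operator and the scatter-plot construction are compact. The sketch below computes Psi[n] = x[n]^2 - x[n-1]*x[n+1] for an R-R interval series and forms the lag-1 scatter pairs; the radial summary at the end is a generic centroid distance, only loosely modeled on the RDI defined in the paper, and the synthetic series stands in for real Holter data.

```python
import numpy as np

def teager_energy(x):
    """Discrete Teager energy: psi[n] = x[n]^2 - x[n-1]*x[n+1]."""
    x = np.asarray(x, dtype=float)
    return x[1:-1] ** 2 - x[:-2] * x[2:]

def scatter_pairs(psi):
    """Points (psi[n], psi[n+1]) of the nonlinear scatter plot."""
    return np.column_stack([psi[:-1], psi[1:]])

def mean_radial_distance(points):
    """Generic spread summary: mean distance from the centroid."""
    return np.linalg.norm(points - points.mean(axis=0), axis=1).mean()

rng = np.random.default_rng(2)
# Synthetic R-R series (seconds): slow modulation plus beat-to-beat noise.
rr = 0.8 + 0.05 * np.sin(np.arange(300) / 10.0) + rng.normal(0, 0.02, 300)
pts = scatter_pairs(teager_energy(rr))
print("scatter points:", pts.shape, " mean radial distance:",
      round(float(mean_radial_distance(pts)), 6))
```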

  1. The Mean Trajectory Approximation for Charge and Energy Transfer Processes at Surfaces.

    DTIC Science & Technology

    1985-03-01

    ... surface. The simplest process of this kind is the ionization/neutralization of an atom/ion incident on a metal surface. More complicated examples ... charge transfer process. Auger neutralization of an incident ion is a still more involved example. Other important processes that may be described ... While ion scattering work is carried out at high kinetic energy, the low kinetic energy regime is very important in electron-stimulated desorption and in the

  2. Influence of birth interval and child labour on family energy requirements and dependency ratios in two traditional subsistence economies in Africa.

    PubMed

    Ulijaszek, S J

    1993-01-01

    The consequences of different birth intervals on dietary energy requirements and dependency ratios at different stages of the family lifecycle are modelled for Gambian agriculturalists and !Kung hunter-gatherers. Energy requirements reach a peak at between 20 and 30 years after starting a family for the Gambians, and between 15 and 20 years for the !Kung. For the Gambians, shorter birth interval confers no economic advantage over the traditional birth interval of 30 months. For the !Kung, the lack of participation in subsistence activities by children gives an output:input ratio in excess of that reported in other studies, suggesting that they are in a state of chronic energy deficiency.

  3. Numerical approximations for the molecular beam epitaxial growth model based on the invariant energy quadratization method

    NASA Astrophysics Data System (ADS)

    Yang, Xiaofeng; Zhao, Jia; Wang, Qi

    2017-03-01

    The Molecular Beam Epitaxial (MBE) model is derived from the variation of a free energy that consists of either a fourth-order Ginzburg-Landau double-well potential or a nonlinear logarithmic potential in terms of the gradient of a height function. One challenge in solving the MBE model numerically is how to develop a proper temporal discretization for the nonlinear terms in order to preserve energy stability at the time-discrete level. In this paper, we resolve this issue by developing first- and second-order time-stepping schemes based on the "Invariant Energy Quadratization" (IEQ) method. The novelty is that all nonlinear terms are treated semi-explicitly, and the resulting semi-discrete equations form a linear system at each time step. Moreover, the linear operator is symmetric positive definite and thus can be solved efficiently. We then prove that all proposed schemes are unconditionally energy stable. The semi-discrete schemes are further discretized in space using finite difference methods and implemented on GPUs for high-performance computing. Various 2D and 3D numerical examples are presented to demonstrate the stability and accuracy of the proposed schemes.
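
    The mechanics of the IEQ reformulation are easiest to see on a scalar toy problem. For the gradient flow phi_t = -(phi^3 - phi) of the double-well energy E = (phi^2 - 1)^2/4, set U = (phi^2 - 1)/2 so that E = U^2 and phi^3 - phi = 2*U*phi; treating U implicitly while holding its coefficient phi at the previous step gives a linear update whose quadratized energy U^2 is provably non-increasing for any time step. The sketch below is this toy version only, not the authors' spatially discretized first- and second-order schemes.

```python
def ieq_step(phi, u, dt):
    """One first-order IEQ step for phi_t = -2*U*phi with U = (phi^2 - 1)/2.

    From (phi_new - phi)/dt = -2*U_new*phi and U_new = U + phi*(phi_new - phi),
    eliminating U_new gives the *linear* update below (a linear solve in the
    PDE setting).
    """
    phi_new = phi - 2.0 * dt * phi * u / (1.0 + 2.0 * dt * phi * phi)
    u_new = u + phi * (phi_new - phi)
    return phi_new, u_new

phi, dt = 2.5, 0.5                 # deliberately large time step
u = 0.5 * (phi * phi - 1.0)
for n in range(12):
    phi, u = ieq_step(phi, u, dt)
    print(n, round(phi, 6), "modified energy U^2 =", round(u * u, 8))
# phi relaxes toward the well at phi = 1 and U^2 decreases monotonically.
```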

  4. The Determination of the Spectrum Energy on the model of DNA-protein interactions using WKB approximation method

    NASA Astrophysics Data System (ADS)

    Syahroni, Edy; Suparmi, A.; Cari, C.

    2017-01-01

    The energy spectrum equation for the Killingbeck potential in the model of DNA-protein interactions was obtained using the WKB approximation method. The Killingbeck potential was substituted into the general equation of the WKB approximation method to determine the energy. The general equation requires the values of the classical turning points to complete its form. In this work, the general form of the Killingbeck potential causes the turning-point equation to become a cubic equation, and we take only the real values of the turning points. Mathematically, this is satisfied when the discriminant D is less than or equal to 0: if D = 0, the equation gives two values of the turning point, and if D < 0, it gives three. We present both of these cases to complete the general equation of the energy.
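
    The "general equation" referred to above is the WKB quantization condition: the action integral of sqrt(2m(E - V)) between the classical turning points equals (n + 1/2)*pi*hbar. The sketch below implements that condition numerically and validates it on the harmonic oscillator, for which WKB happens to be exact; the cubic turning-point algebra of the Killingbeck potential itself is not reproduced here.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

HBAR = M = OMEGA = 1.0

def V(x):
    return 0.5 * M * OMEGA**2 * x**2          # harmonic test potential

def action(E):
    """Integral of sqrt(2m(E - V(x))) between the turning points."""
    xt = np.sqrt(2.0 * E / (M * OMEGA**2))    # turning points at +-xt
    integrand = lambda x: np.sqrt(max(2.0 * M * (E - V(x)), 0.0))
    value, _ = quad(integrand, -xt, xt)
    return value

# Solve action(E_n) = (n + 1/2)*pi*hbar for the first few levels.
for n in range(4):
    target = (n + 0.5) * np.pi * HBAR
    E_n = brentq(lambda E: action(E) - target, 1e-6, 50.0)
    print(n, round(E_n, 8), "exact:", (n + 0.5) * HBAR * OMEGA)
```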

  5. Communication: Two-component ring-coupled-cluster computation of the correlation energy in the random-phase approximation

    NASA Astrophysics Data System (ADS)

    Krause, Katharina; Klopper, Wim

    2013-11-01

    Within the framework of density-functional theory, the correlation energy is computed in the random-phase approximation (RPA) using spinors obtained from a two-component relativistic Kohn-Sham calculation accounting for spin-orbit interactions. Ring-coupled-cluster equations are solved to obtain the two-component RPA correlation energy. Results are presented for the hydrides of the halogens Br, I, and At as well as of the coinage metals Cu, Ag, and Au, based on two-component relativistic exact-decoupling Kohn-Sham calculations.
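
    The ring-coupled-cluster route to the direct-RPA correlation energy solves the Riccati equation B + A T + T A + T B T = 0 and evaluates E_c = (1/2) Tr(B T). The sketch below iterates this with random matrices standing in for the two-component spinor integrals and cross-checks against the plasmon formula E_c = (1/2)(sum_i w_i - Tr A); it illustrates the ring-CCD equations only, not the relativistic exact-decoupling machinery of the paper.

```python
import numpy as np
from scipy.linalg import solve_sylvester, sqrtm

rng = np.random.default_rng(7)
n = 8
# RPA-like random stand-ins: A = D + K, B = K, with D a positive diagonal
# (excitation energies) and K symmetric positive semidefinite (coupling).
D = np.diag(1.0 + rng.random(n))
W = rng.normal(scale=0.1, size=(n, n))
K = W @ W.T
A, B = D + K, K.copy()

# Ring-CCD iteration: solve A T + T A = -(B + T B T) until self-consistent.
T = np.zeros((n, n))
for _ in range(200):
    T_new = solve_sylvester(A, A, -(B + T @ B @ T))
    if np.max(np.abs(T_new - T)) < 1e-12:
        T = T_new
        break
    T = T_new
ec_rccd = 0.5 * np.trace(B @ T)

# Plasmon-formula cross-check: w_i are the positive RPA frequencies, i.e.
# square roots of the eigenvalues of (A-B)^(1/2) (A+B) (A-B)^(1/2).
S = np.real(sqrtm(A - B))
w = np.sqrt(np.linalg.eigvalsh(S @ (A + B) @ S))
ec_plasmon = 0.5 * (w.sum() - np.trace(A))
print(ec_rccd, ec_plasmon)   # the two estimates agree to tight tolerance
```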

  6. Communication: Two-component ring-coupled-cluster computation of the correlation energy in the random-phase approximation

    SciTech Connect

    Krause, Katharina; Klopper, Wim

    2013-11-21

    Within the framework of density-functional theory, the correlation energy is computed in the random-phase approximation (RPA) using spinors obtained from a two-component relativistic Kohn–Sham calculation accounting for spin–orbit interactions. Ring-coupled-cluster equations are solved to obtain the two-component RPA correlation energy. Results are presented for the hydrides of the halogens Br, I, and At as well as of the coinage metals Cu, Ag, and Au, based on two-component relativistic exact-decoupling Kohn–Sham calculations.

  7. Minimum-Energy Flight Paths for UAVs Using Mesoscale Wind Forecasts and Approximate Dynamic Programming

    DTIC Science & Technology

    2007-12-01

    24) ( , ) ( ) ij arc i j D x dx DX=∫ . All of the processes that lead to loss of energy, including drag, will eventually be translated to heating of...the wind field as a function of altitude, up to 10 km, according to a 160 km x 160 km COAMPS forecast over Yucca, Nevada test site, for (a) wind...for two cases: (a) | | 90ijβ γ− ≤ and (b) | | 90ijβ γ− ≥ ..................................................27  x Figure 15.  A circular turn from

  8. Iterative and direct methods employing distributed approximating functionals for the reconstruction of a potential energy surface from its sampled values

    NASA Astrophysics Data System (ADS)

    Szalay, Viktor

    1999-11-01

    The reconstruction of a function from knowing only its values on a finite set of grid points, that is, the construction of an analytical approximation reproducing the function with good accuracy everywhere within the sampled volume, is an important problem in all branches of science. One such problem in chemical physics is the determination of an analytical representation of Born-Oppenheimer potential energy surfaces by ab initio calculations, which give the value of the potential at a finite set of grid points in configuration space. This article describes the rudiments of iterative and direct methods of potential surface reconstruction. The major new results are the derivation, numerical demonstration, and interpretation of a reconstruction formula. The reconstruction formula derived approximates the unknown function, say V, by a linear combination of functions obtained by discretizing the continuous distributed approximating functional (DAF) approximation of V over the grid of sampling. The simplest of the contracted and ordinary Hermite-DAFs are shown to be sufficient for reconstruction. The linear combination coefficients can be obtained either iteratively or directly by finding the minimal-norm least-squares solution of a linear system of equations. Several numerical examples of reconstructing functions of one and two variables, and of very different shapes, are given. The examples demonstrate the robustness and high accuracy, as well as the caveats, of the proposed method. As to the mathematical foundation of the method, it is shown that the reconstruction formula can be interpreted as, and in fact is, a frame expansion. By recognizing the relevance of frames in determining analytical approximations to potential energy surfaces, an extremely rich and beautiful toolbox of mathematics is put at our disposal. Thus, the simple reconstruction method derived in this paper can be refined, extended, and improved in numerous ways.
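
    The direct route described above, finding the minimal-norm least-squares coefficients of a grid-centered basis, fits in a few lines: np.linalg.lstsq returns exactly the minimal-norm solution when the system is rank deficient. The Gaussian basis below is a crude stand-in for the Hermite-DAF functions of the paper, and the target function is arbitrary.

```python
import numpy as np

def target(x):
    """Function playing the role of a sampled 1D potential surface."""
    return np.exp(-x**2) * np.cos(3.0 * x)

grid = np.linspace(-3.0, 3.0, 25)     # sampling grid, one basis per point
sigma = 0.5                           # width of the Gaussian basis

def design(x, centers, width):
    return np.exp(-((x[:, None] - centers[None, :]) / width) ** 2)

# Minimal-norm least-squares coefficients (rcond handles rank deficiency).
coeff, *_ = np.linalg.lstsq(design(grid, grid, sigma), target(grid),
                            rcond=1e-10)

# Evaluate the reconstruction off-grid and report the worst-case error.
x_test = np.linspace(-3.0, 3.0, 400)
recon = design(x_test, grid, sigma) @ coeff
print("max off-grid error:", np.max(np.abs(recon - target(x_test))))
```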

  9. Free Energy Contribution Analysis Using Response Kernel Approximation: Insights into the Acylation Reaction of a Beta-Lactamase.

    PubMed

    Asada, Toshio; Ando, Kanta; Bandyopadhyay, Pradipta; Koseki, Shiro

    2016-09-08

    A widely applicable free energy contribution analysis (FECA) method based on the quantum mechanical/molecular mechanical (QM/MM) approximation using response kernel approaches has been proposed to investigate the influences of environmental residues and/or atoms in the QM region on the free energy profile. This method can evaluate atomic contributions to the free energy along the reaction path including polarization effects on the QM region within a dramatically reduced computational time. The rate-limiting step in the deactivation of the β-lactam antibiotic cefalotin (CLS) by β-lactamase was studied using this method. The experimentally observed activation barrier was successfully reproduced by free energy perturbation calculations along the optimized reaction path that involved activation by the carboxylate moiety in CLS. It was found that the free energy profile in the QM region was slightly higher than the isolated energy and that two residues, Lys67 and Lys315, as well as water molecules deeply influenced the QM atoms associated with the bond alternation reaction in the acyl-enzyme intermediate. These facts suggested that the surrounding residues are favorable for the reactant complex and prevent the intermediate from being too stabilized to proceed to the following deacylation reaction. We have demonstrated that the free energy contribution analysis should be a useful method to investigate enzyme catalysis and to facilitate intelligent molecular design.

  10. Brownian motors in the low-energy approximation: Classification and properties

    SciTech Connect

    Rozenbaum, V. M.

    2010-04-15

    We classify Brownian motors based on the expansion of their velocity in terms of the reciprocal friction coefficient. The two main classes of motors (with dichotomic fluctuations in homogeneous force and periodic potential energy) are characterized by different analytical dependences of their mean velocity on the spatial and temporal asymmetry coefficients and by different adiabatic limits. The competition between the spatial and temporal asymmetries gives rise to stopping points. The transition through these points can be achieved by varying the asymmetry coefficients, temperature, and other motor parameters, which can be used, for example, for nanoparticle segregation. The proposed classification separates out a new type of motors based on synchronous fluctuations in symmetric potential and applied homogeneous force. As an example of this type of motors, we consider a near-surface motor whose two-dimensional motion (parallel and perpendicular to the substrate plane) results from fluctuations in external force inclined to the surface.

  11. Energy-loss function in the two-pair approximation for the electron liquid

    NASA Astrophysics Data System (ADS)

    Bachlechner, M. E.; Holas, A.; Böhm, H. M.; Schinner, A.

    1996-07-01

    The imaginary part of the proper polarizability, Im Π, arising due to excitations of two electron-hole pairs, is studied in detail for electron systems of arbitrary dimensionality, and taking into account arbitrary degeneracy of the electron bands. This allows an application to semiconductors with degenerate valleys, and to ferromagnetic metals. The results obtained not only confirm expressions already known for paramagnetic systems in the high-frequency region, but are also rigorously shown to be valid for all frequencies outside the particle-hole continuum. For a sufficiently high momentum transfer a cutoff frequency (below which Im Π=0) is established for not only two-pair but also any n-pair processes. In contrast, there is no upper cutoff for n ≳ 1. The energy-loss function, including the discussed two-pair contributions, is calculated. The effects of screening are investigated. Numerical results, illustrating various aspects and properties of this function, especially showing finite-width plasmon peaks, are obtained for a two-dimensional electron gas.

  12. Thermal decay analysis of fiber Bragg gratings at different temperature annealing rates using demarcation energy approximation

    NASA Astrophysics Data System (ADS)

    Gunawardena, Dinusha Serandi; Lai, Man-Hong; Lim, Kok-Sing; Ahmad, Harith

    2017-03-01

    In this study, the thermal degradation of gratings inscribed in three types of fiber, namely PS 1250/1500, SM 1500, and zero-water-peak single-mode fiber, is demonstrated. A comparative investigation is carried out on the aging characteristics of the gratings at three temperature ramping rates: 3 °C/min, 6 °C/min, and 9 °C/min. During the thermal annealing treatment, a significant enhancement in the grating reflectivity is observed for the PS 1250/1500 fiber from ∼1.2 eV to 1.4 eV, which indicates a thermally induced reversible effect. Higher temperature ramping rates lead to a higher regeneration temperature. In addition, the investigation shows that, regardless of the temperature ramping rate, the thermal decay behavior of a specific fiber can be successfully characterized when represented in the demarcation energy domain. This technique can therefore be used to predict the thermal decay characteristics of a specific fiber.
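
    The demarcation-energy mapping used in such analyses is the first-order Arrhenius result E_d = k_B * T * ln(nu_0 * t): decay curves recorded under different thermal histories collapse when re-plotted against E_d. A minimal sketch follows, with a typical attempt frequency of 10^13 s^-1 assumed; for a temperature ramp it simply evaluates the formula along the schedule, whereas a rigorous treatment would integrate over the full thermal history.

```python
import numpy as np

K_B = 8.617e-5      # Boltzmann constant, eV/K
NU0 = 1.0e13        # assumed attempt-to-escape frequency, 1/s

def demarcation_energy(T_kelvin, t_seconds):
    """E_d = kB*T*ln(nu0*t): traps shallower than E_d have already decayed."""
    return K_B * T_kelvin * np.log(NU0 * t_seconds)

# Map an annealing ramp (6 deg C/min from 300 K, illustrative) onto the
# demarcation-energy axis minute by minute.
beta, T0 = 6.0, 300.0                      # K/min, starting temperature
t_min = np.arange(10.0, 121.0, 10.0)       # elapsed minutes
T = T0 + beta * t_min
print(np.round(demarcation_energy(T, t_min * 60.0), 3))   # eV
```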

  13. Vibration-translation energy transfer in anharmonic diatomic molecules. 1: A critical evaluation of the semiclassical approximation

    NASA Technical Reports Server (NTRS)

    Mckenzie, R. L.

    1974-01-01

    The semiclassical approximation is applied to anharmonic diatomic oscillators in excited initial states. Multistate numerical solutions giving the vibrational transition probabilities for collinear collisions with an inert atom are compared with equivalent, exact quantum-mechanical calculations. Several symmetrization methods are shown to accurately correlate the predictions of both theories for all initial states, transitions, and molecular types tested, but only if the coupling of the oscillator motion to the classical trajectory of the incident particle is considered. In anharmonic heteronuclear molecules, the customary semiclassical method of computing the classical trajectory independently leads to transition probabilities with anomalous low-energy resonances. Proper accounting of the effects of oscillator compression and recoil on the incident-particle trajectory removes the anomalies and restores the applicability of the semiclassical approximation.

  14. High-energy scattering amplitudes of Yang-Mills theories in generalized leading-term approximation and eikonal formulas

    SciTech Connect

    Lo, C.Y.

    1981-01-15

    In this paper, we study the apparent discrepancy between Feynman diagrams and the eikonal formulas, and the apparent paradox between the eikonal formulas and the s-u crossing symmetry. We analyze the generalized leading-term approximation (GLA), which generates the terms of the eikonal formulas from Feynman diagrams. This analysis is carried out using techniques for diagrammatically decomposing the isospin factors (or group-theoretical weights in general) of Feynman diagrams. As a result, we modify the GLA into a generalized complex leading-term approximation. We calculate, with this new formalism, the high-energy limit (s → ∞ with t fixed) of the vector-meson–vector-meson elastic amplitude of a Yang-Mills theory with SU(2) symmetry through tenth perturbative order. With this new method, we resolve the apparent discrepancy and paradox mentioned above. This method is generalizable to other non-Abelian gauge theories.

  15. On the approximate albedo boundary conditions for two-energy group X,Y-geometry discrete ordinates eigenvalue problems

    SciTech Connect

    Nunes, C. E. A.; Alves Filho, H.; Barros, R. C.

    2012-07-01

    We discuss in this paper the computational efficiency of approximate discrete ordinates (SN) albedo boundary conditions for two-energy-group eigenvalue problems in X,Y-geometry. The non-standard SN albedo substitutes approximately for the reflector system around the active domain, as we neglect the transverse leakage terms within the non-multiplying reflector region. Should the problem have no transverse leakage terms, i.e., one-dimensional slab geometry, the albedo boundary conditions offered here are exact. By computational efficiency we mean analyzing the accuracy of the numerical results versus the CPU execution time of each run for a given model problem. Numerical results for a typical test problem are shown to illustrate this efficiency analysis. (authors)

  16. Anharmonic free energies and phonon dispersions from the stochastic self-consistent harmonic approximation: Application to platinum and palladium hydrides

    NASA Astrophysics Data System (ADS)

    Errea, Ion; Calandra, Matteo; Mauri, Francesco

    2014-02-01

    Harmonic calculations based on density-functional theory are generally the method of choice for the description of phonon spectra of metals and insulators. The inclusion of anharmonic effects is, however, delicate as it relies on perturbation theory requiring a considerable amount of computer time, fast increasing with the cell size. Furthermore, perturbation theory breaks down when the harmonic solution is dynamically unstable or the anharmonic correction of the phonon energies is larger than the harmonic frequencies themselves. We present here a stochastic implementation of the self-consistent harmonic approximation valid to treat anharmonicity at any temperature in the nonperturbative regime. The method is based on the minimization of the free energy with respect to a trial density matrix described by an arbitrary harmonic Hamiltonian. The minimization is performed with respect to all the free parameters in the trial harmonic Hamiltonian, namely, equilibrium positions, phonon frequencies, and polarization vectors. The gradient of the free energy is calculated following a stochastic procedure. The method can be used to calculate thermodynamic properties, dynamical properties, and even anharmonic corrections to the Eliashberg function of the electron-phonon coupling. The scaling with the system size is greatly improved with respect to perturbation theory. The validity of the method is demonstrated in the strongly anharmonic palladium and platinum hydrides. In both cases, we predict a strong anharmonic correction to the harmonic phonon spectra, far beyond the perturbative limit. In palladium hydrides, we calculate thermodynamic properties beyond the quasiharmonic approximation, while in PtH, we demonstrate that the high superconducting critical temperatures at 100 GPa predicted in previous calculations based on the harmonic approximation are strongly suppressed when anharmonic effects are included.
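
    The variational principle behind the self-consistent harmonic approximation can be stated compactly (standard Gibbs-Bogoliubov form in our own notation, not text from the record): for a trial harmonic Hamiltonian H with density matrix ρ_H and free energy F_H,

    ```latex
    F \;\le\; \mathcal{F}[\mathcal{H}] \;=\; F_{\mathcal{H}} + \big\langle V - V_{\mathcal{H}} \big\rangle_{\rho_{\mathcal{H}}},
    ```

    where V is the true Born-Oppenheimer potential and V_H the trial harmonic one. The stochastic element is the Monte Carlo evaluation of the expectation value by sampling ionic configurations from ρ_H, while the minimization runs over the centroid positions, frequencies, and polarization vectors that parametrize H.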

  17. Lateral distribution of high energy muons in EAS of sizes N_e ≈ 10^5 and N_e ≈ 10^6

    NASA Technical Reports Server (NTRS)

    Bazhutov, Y. N.; Ermakov, G. G.; Fomin, G. G.; Isaev, V. I.; Jarochkina, Z. V.; Kalmykov, N. N.; Khrenov, B. A.; Khristiansen, G. B.; Kulikov, G. V.; Motova, M. V.

    1985-01-01

    Muon energy spectra and muon lateral distributions in EAS were investigated with an underground magnetic spectrometer operating as part of the extensive air shower (EAS) array. For every registered muon the EAS data are analyzed and the following EAS parameters are obtained: the size N_e, the distance r from the shower axis to the muon, and the age parameter s. The number I_reg of muons with energy above a threshold E associated with EAS of fixed parameters is measured. To obtain the traditional characteristics, the muon flux densities as functions of the distance r and the muon energy E, muon lateral distributions and energy spectra are discussed for a hadron-nucleus interaction model and a composition of primary cosmic rays.

  18. Approximate constants of motion for classically chaotic vibrational dynamics - Vague tori, semiclassical quantization, and classical intramolecular energy flow

    NASA Technical Reports Server (NTRS)

    Shirts, R. B.; Reinhardt, W. P.

    1982-01-01

    Substantial short-time regularity, even in the chaotic regions of phase space, is found for what appears to be a large class of systems. This regularity manifests itself through the behavior of approximate constants of motion calculated by Pade summation of the Birkhoff-Gustavson normal form expansion; it is attributed to remnants of destroyed invariant tori in phase space. The remnant torus-like manifold structures are used to justify Einstein-Brillouin-Keller semiclassical quantization procedures for obtaining quantum energy levels, even in the absence of complete tori. They also provide a theoretical basis for the calculation of rate constants for intramolecular mode-mode energy transfer. These results are illustrated by means of a thorough analysis of the Henon-Heiles oscillator problem. The possible generality of the analysis is demonstrated by brief consideration of the classical dynamics of the Barbanis Hamiltonian, the Zeeman effect in hydrogen, and the results of Wolf and Hase (1980) for the H-C-C fragment.

  19. Potential-Energy Surfaces, the Born-Oppenheimer Approximations, and the Franck-Condon Principle: Back to the Roots.

    PubMed

    Mustroph, Heinz

    2016-09-05

    The concept of a potential-energy surface (PES) is central to our understanding of spectroscopy, photochemistry, and chemical kinetics. However, the terminology used in connection with the basic approximations is variously, and somewhat confusingly, represented with such phrases as "adiabatic", "Born-Oppenheimer", or "Born-Oppenheimer adiabatic" approximation. Concerning the closely relevant and important Franck-Condon principle (FCP), the IUPAC definition differentiates between a classical and quantum mechanical formulation. Consequently, in many publications we find terms such as "Franck-Condon (excited) state", or a vertical transition to the "Franck-Condon point" with the "Franck-Condon geometry" that relaxes to the excited-state equilibrium geometry. The Born-Oppenheimer approximation and the "classical" model of the Franck-Condon principle are typical examples of misused terms and lax interpretations of the original theories. In this essay, we revisit the original publications of pioneers of the PES concept and the FCP to help stimulate a lively discussion and clearer thinking around these important concepts.

  20. Accurate and efficient representation of intramolecular energy in ab initio generation of crystal structures. I. Adaptive local approximate models.

    PubMed

    Sugden, Isaac; Adjiman, Claire S; Pantelides, Constantinos C

    2016-12-01

    The global search stage of crystal structure prediction (CSP) methods requires a fine balance between accuracy and computational cost, particularly for the study of large flexible molecules. A major improvement in the accuracy and cost of the intramolecular energy function used in the CrystalPredictor II [Habgood et al. (2015). J. Chem. Theory Comput. 11, 1957-1969] program is presented, where the most efficient use of computational effort is ensured via the use of adaptive local approximate model (LAM) placement. The entire search space of the relevant molecule's conformations is initially evaluated using a coarse, low accuracy grid. Additional LAM points are then placed at appropriate points determined via an automated process, aiming to minimize the computational effort expended in high-energy regions whilst maximizing the accuracy in low-energy regions. As the size, complexity and flexibility of molecules increase, the reduction in computational cost becomes marked. This improvement is illustrated with energy calculations for benzoic acid and the ROY molecule, and a CSP study of molecule (XXVI) from the sixth blind test [Reilly et al. (2016). Acta Cryst. B72, 439-459], which is challenging due to its size and flexibility. Its known experimental form is successfully predicted as the global minimum. The computational cost of the study is tractable without the need to make unphysical simplifying assumptions.
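
    As a rough illustration of the adaptive-placement idea (a minimal one-dimensional sketch under our own assumptions, not the CrystalPredictor II algorithm itself), one can start from a coarse grid over a conformational coordinate and keep bisecting only those intervals whose endpoints lie in low-energy regions:

    ```python
    import numpy as np

    def adaptive_refine(energy, lo, hi, coarse_n=9, e_cut=5.0, max_pts=50):
        """Toy 1D analogue of adaptive local-model placement: begin with a
        coarse grid, then add points only where the energy is below a
        cutoff, i.e. in the low-energy regions that matter for CSP."""
        pts = list(np.linspace(lo, hi, coarse_n))
        while len(pts) < max_pts:
            pts.sort()
            # widest remaining gap whose endpoints are both low in energy
            gaps = [(b - a, a, b) for a, b in zip(pts, pts[1:])
                    if energy(a) < e_cut and energy(b) < e_cut]
            if not gaps:
                break
            _, a, b = max(gaps)
            pts.append(0.5 * (a + b))  # refine at the midpoint
        return np.array(sorted(pts))

    # hypothetical torsional energy profile in kJ/mol
    E = lambda phi: 10.0 * (1.0 - np.cos(2.0 * phi))
    grid = adaptive_refine(E, 0.0, np.pi)
    ```

    The resulting grid clusters points near the low-energy wells and leaves the high-energy barrier sparsely sampled, mirroring the balance of effort described above.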

  1. Post-mortem interval estimation of human skeletal remains by micro-computed tomography, mid-infrared microscopic imaging and energy dispersive X-ray mapping

    PubMed Central

    Hatzer-Grubwieser, P.; Bauer, C.; Parson, W.; Unterberger, S. H.; Kuhn, V.; Pemberger, N.; Pallua, Anton K.; Recheis, W.; Lackner, R.; Stalder, R.; Pallua, J. D.

    2015-01-01

    In this study different state-of-the-art visualization methods such as micro-computed tomography (micro-CT), mid-infrared (MIR) microscopic imaging and energy dispersive X-ray (EDS) mapping were evaluated to study human skeletal remains for the determination of the post-mortem interval (PMI). PMI specific features were identified and visualized by overlaying molecular imaging data and morphological tissue structures generated by radiological techniques and microscopic images gained from confocal microscopy (Infinite Focus (IFM)). In this way, a more distinct picture concerning processes during the PMI as well as a more realistic approximation of the PMI were achieved. It could be demonstrated that the gained result in combination with multivariate data analysis can be used to predict the Ca/C ratio and bone volume (BV) over total volume (TV) for PMI estimation. Statistical limitation of this study is the small sample size, and future work will be based on more specimens to develop a screening tool for PMI based on the outcome of this multidimensional approach. PMID:25878731

  2. Post-mortem interval estimation of human skeletal remains by micro-computed tomography, mid-infrared microscopic imaging and energy dispersive X-ray mapping.

    PubMed

    Longato, S; Wöss, C; Hatzer-Grubwieser, P; Bauer, C; Parson, W; Unterberger, S H; Kuhn, V; Pemberger, N; Pallua, Anton K; Recheis, W; Lackner, R; Stalder, R; Pallua, J D

    2015-04-07

    In this study different state-of-the-art visualization methods such as micro-computed tomography (micro-CT), mid-infrared (MIR) microscopic imaging and energy dispersive X-ray (EDS) mapping were evaluated to study human skeletal remains for the determination of the post-mortem interval (PMI). PMI specific features were identified and visualized by overlaying molecular imaging data and morphological tissue structures generated by radiological techniques and microscopic images gained from confocal microscopy (Infinite Focus (IFM)). In this way, a more distinct picture concerning processes during the PMI as well as a more realistic approximation of the PMI were achieved. It could be demonstrated that the gained result in combination with multivariate data analysis can be used to predict the Ca/C ratio and bone volume (BV) over total volume (TV) for PMI estimation. Statistical limitation of this study is the small sample size, and future work will be based on more specimens to develop a screening tool for PMI based on the outcome of this multidimensional approach.

  3. Minimax rational approximation of the Fermi-Dirac distribution

    DOE PAGES

    Moussa, Jonathan E.

    2016-10-27

    Accurate rational approximations of the Fermi-Dirac distribution are a useful component in many numerical algorithms for electronic structure calculations. The best known approximations use O(log(βΔ) log(ϵ⁻¹)) poles to achieve an error tolerance ϵ at temperature β⁻¹ over an energy interval Δ. We apply minimax approximation to reduce the number of poles by a factor of four and replace Δ with Δ_occ, the occupied energy interval. This is particularly beneficial when Δ >> Δ_occ, such as in electronic structure calculations that use a large basis set.
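
    Rational approximations of this kind are evaluated as a short sum of simple poles; schematically (our notation, with poles z_i and weights w_i occurring in complex-conjugate pairs),

    ```latex
    f_\beta(E) = \frac{1}{1 + e^{\beta (E - \mu)}}
    \;\approx\; \sum_{i=1}^{P} \operatorname{Re}\,\frac{w_i}{E - z_i},
    \qquad
    P = O\!\big(\log(\beta\Delta_{\mathrm{occ}})\,\log(\epsilon^{-1})\big),
    ```

    so each reduction in the pole count P, or the replacement of Δ by Δ_occ, directly reduces the number of shifted linear systems an electronic-structure code has to solve.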

  4. Minimax rational approximation of the Fermi-Dirac distribution

    SciTech Connect

    Moussa, Jonathan E.

    2016-10-27

    Accurate rational approximations of the Fermi-Dirac distribution are a useful component in many numerical algorithms for electronic structure calculations. The best known approximations use O(log(βΔ) log(ϵ⁻¹)) poles to achieve an error tolerance ϵ at temperature β⁻¹ over an energy interval Δ. We apply minimax approximation to reduce the number of poles by a factor of four and replace Δ with Δ_occ, the occupied energy interval. This is particularly beneficial when Δ >> Δ_occ, such as in electronic structure calculations that use a large basis set.

  5. Spin-unrestricted random-phase approximation with range separation: Benchmark on atomization energies and reaction barrier heights

    SciTech Connect

    Mussard, Bastien; Reinhardt, Peter; Toulouse, Julien; Ángyán, János G.

    2015-04-21

    We consider several spin-unrestricted random-phase approximation (RPA) variants for calculating correlation energies, with and without range separation, and test them on datasets of atomization energies and reaction barrier heights. We show that range separation greatly improves the accuracy of all RPA variants for these properties. Moreover, we show that an RPA variant with exchange, hereafter referred to as RPAx-SO2, first proposed by Szabo and Ostlund [J. Chem. Phys. 67, 4351 (1977)] in a spin-restricted closed-shell formalism, and extended here to a spin-unrestricted formalism, provides on average the most accurate range-separated RPA variant for atomization energies and reaction barrier heights. Since this range-separated RPAx-SO2 method had already been shown to be among the most accurate range-separated RPA variants for weak intermolecular interactions [J. Toulouse et al., J. Chem. Phys. 135, 084119 (2011)], this work confirms range-separated RPAx-SO2 as a promising method for general chemical applications.

  6. Validity of the relativistic impulse approximation for elastic proton-nucleus scattering at energies lower than 200 MeV

    SciTech Connect

    Li, Z. P.; Hillhouse, G. C.; Meng, J.

    2008-07-15

    We present the first study to examine the validity of the relativistic impulse approximation (RIA) for describing elastic proton-nucleus scattering at incident laboratory kinetic energies lower than 200 MeV. For simplicity we choose a ²⁰⁸Pb target, which is a spin-saturated spherical nucleus for which reliable nuclear structure models exist. Microscopic scalar and vector optical potentials are generated by folding invariant scalar and vector nucleon-nucleon (NN) scattering amplitudes, based on our recently developed relativistic meson-exchange model, with Lorentz scalar and vector densities resulting from the accurately calibrated PK1 relativistic mean field model of nuclear structure. It is seen that phenomenological Pauli blocking (PB) effects and density-dependent corrections to the σN and ωN meson-nucleon coupling constants modify the RIA microscopic scalar and vector optical potentials so as to provide a consistent and quantitative description of all elastic scattering observables, namely, total reaction cross sections, differential cross sections, analyzing powers and spin rotation functions. In particular, the effect of PB becomes more significant at energies lower than 200 MeV, whereas phenomenological density-dependent corrections to the NN interaction also play an increasingly important role at energies lower than 100 MeV.

  7. Analytical model for ion stopping power and range in the therapeutic energy interval for beams of hydrogen and heavier ions

    NASA Astrophysics Data System (ADS)

    Donahue, William; Newhauser, Wayne D.; Ziegler, James F.

    2016-09-01

    Many different approaches exist to calculate stopping power and range of protons and heavy charged particles. These methods may be broadly categorized as physically complete theories (widely applicable and complex) or semi-empirical approaches (narrowly applicable and simple). However, little attention has been paid in the literature to approaches that are both widely applicable and simple. We developed simple analytical models of stopping power and range for ions of hydrogen, carbon, iron, and uranium that spanned intervals of ion energy from 351 keV u⁻¹ to 450 MeV u⁻¹ or wider. The analytical models typically reproduced the best-available evaluated stopping powers within 1% and ranges within 0.1 mm. The computational speed of the analytical stopping power model was 28% faster than a full-theoretical approach. The calculation of range using the analytic range model was 945 times faster than a widely-used numerical integration technique. The results of this study revealed that the new, simple analytical models are accurate, fast, and broadly applicable. The new models require just 6 parameters to calculate stopping power and range for a given ion and absorber. The proposed model may be useful as an alternative to traditional approaches, especially in applications that demand fast computation speed, small memory footprint, and simplicity.
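
    For context, the numerical-integration baseline mentioned above is, in the continuous-slowing-down approximation, the range integral R(E₀) = ∫ dE/S(E). A minimal sketch, with a hypothetical Bragg-Kleeman-like stopping power standing in for the evaluated data:

    ```python
    import numpy as np

    def csda_range(stopping_power, e_min, e_max, n=5000):
        """Continuous-slowing-down range R = integral of dE / S(E), the
        kind of numerical integration the analytic range model replaces."""
        e = np.linspace(e_min, e_max, n)
        return np.trapz(1.0 / stopping_power(e), e)

    # Hypothetical power-law stopping power for protons in water,
    # S(E) = E**(1 - p) / (alpha * p), alpha = 0.0022 cm/MeV**p, p = 1.77.
    S = lambda e: e ** (1.0 - 1.77) / (0.0022 * 1.77)  # MeV/cm
    print(csda_range(S, 0.1, 100.0))  # ~7.6 cm for a 100 MeV proton
    ```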

  8. Approximating Optimal Behavioural Strategies Down to Rules-of-Thumb: Energy Reserve Changes in Pairs of Social Foragers

    PubMed Central

    Rands, Sean A.

    2011-01-01

    Functional explanations of behaviour often propose optimal strategies for organisms to follow. These ‘best’ strategies could be difficult to perform given biological constraints such as neural architecture and physiology. Instead, simple heuristics or ‘rules-of-thumb’ that approximate these optimal strategies may be performed. From a modelling perspective, rules-of-thumb are also useful tools for considering how group behaviour is shaped by the behaviours of individuals. Using simple rules-of-thumb reduces the complexity of these models, but care needs to be taken to use rules that are biologically relevant. Here, we investigate the similarity between the outputs of a two-player dynamic foraging game (which generated optimal but complex solutions) and a computational simulation of the behaviours of the two members of a foraging pair, who instead followed a rule-of-thumb approximation of the game's output. The original game generated complex results, and we demonstrate here that the simulations following the much-simplified rules-of-thumb also generate complex results, suggesting that the rule-of-thumb was sufficient to make some of the model outcomes unpredictable. There was some agreement between both modelling techniques, but some differences arose – particularly when pair members were not identical in how they gained and lost energy. We argue that exploring how rules-of-thumb perform in comparison to their optimal counterparts is an important exercise for biologically validating the output of agent-based models of group behaviour. PMID:21765938

  9. Accurate and efficient calculation of excitation energies with the active-space particle-particle random phase approximation

    DOE PAGES

    Zhang, Du; Yang, Weitao

    2016-10-13

    An efficient method for calculating excitation energies based on the particle-particle random phase approximation (ppRPA) is presented. Neglecting the contributions from the high-lying virtual states and the low-lying core states leads to the significantly smaller active-space ppRPA matrix while keeping the error to within 0.05 eV from the corresponding full ppRPA excitation energies. The resulting computational cost is significantly reduced and becomes less than the construction of the non-local Fock exchange potential matrix in the self-consistent-field (SCF) procedure. With only a modest number of active orbitals, the original ppRPA singlet-triplet (ST) gaps as well as the low-lying single and double excitation energies can be accurately reproduced at much reduced computational costs, up to 100 times faster than the iterative Davidson diagonalization of the original full ppRPA matrix. For high-lying Rydberg excitations where the Davidson algorithm fails, the computational savings of active-space ppRPA with respect to the direct diagonalization is even more dramatic. The virtues of the underlying full ppRPA combined with the significantly lower computational cost of the active-space approach will significantly expand the applicability of the ppRPA method to calculate excitation energies at a cost of O(K^{4}), with a prefactor much smaller than a single SCF Hartree-Fock (HF)/hybrid functional calculation, thus opening up new possibilities for the quantum mechanical study of excited state electronic structure of large systems.

  10. Accurate and efficient calculation of excitation energies with the active-space particle-particle random phase approximation

    SciTech Connect

    Zhang, Du; Yang, Weitao

    2016-10-13

    An efficient method for calculating excitation energies based on the particle-particle random phase approximation (ppRPA) is presented. Neglecting the contributions from the high-lying virtual states and the low-lying core states leads to the significantly smaller active-space ppRPA matrix while keeping the error to within 0.05 eV from the corresponding full ppRPA excitation energies. The resulting computational cost is significantly reduced and becomes less than the construction of the non-local Fock exchange potential matrix in the self-consistent-field (SCF) procedure. With only a modest number of active orbitals, the original ppRPA singlet-triplet (ST) gaps as well as the low-lying single and double excitation energies can be accurately reproduced at much reduced computational costs, up to 100 times faster than the iterative Davidson diagonalization of the original full ppRPA matrix. For high-lying Rydberg excitations where the Davidson algorithm fails, the computational savings of active-space ppRPA with respect to the direct diagonalization is even more dramatic. The virtues of the underlying full ppRPA combined with the significantly lower computational cost of the active-space approach will significantly expand the applicability of the ppRPA method to calculate excitation energies at a cost of O(K^{4}), with a prefactor much smaller than a single SCF Hartree-Fock (HF)/hybrid functional calculation, thus opening up new possibilities for the quantum mechanical study of excited state electronic structure of large systems.

  11. Accurate and efficient calculation of excitation energies with the active-space particle-particle random phase approximation

    NASA Astrophysics Data System (ADS)

    Zhang, Du; Yang, Weitao

    2016-10-01

    An efficient method for calculating excitation energies based on the particle-particle random phase approximation (ppRPA) is presented. Neglecting the contributions from the high-lying virtual states and the low-lying core states leads to the significantly smaller active-space ppRPA matrix while keeping the error to within 0.05 eV from the corresponding full ppRPA excitation energies. The resulting computational cost is significantly reduced and becomes less than the construction of the non-local Fock exchange potential matrix in the self-consistent-field (SCF) procedure. With only a modest number of active orbitals, the original ppRPA singlet-triplet (ST) gaps as well as the low-lying single and double excitation energies can be accurately reproduced at much reduced computational costs, up to 100 times faster than the iterative Davidson diagonalization of the original full ppRPA matrix. For high-lying Rydberg excitations where the Davidson algorithm fails, the computational savings of active-space ppRPA with respect to the direct diagonalization is even more dramatic. The virtues of the underlying full ppRPA combined with the significantly lower computational cost of the active-space approach will significantly expand the applicability of the ppRPA method to calculate excitation energies at a cost of O(K^4), with a prefactor much smaller than a single SCF Hartree-Fock (HF)/hybrid functional calculation, thus opening up new possibilities for the quantum mechanical study of excited state electronic structure of large systems.

  12. Electron-Phonon Coupling and Energy Flow in a Simple Metal beyond the Two-Temperature Approximation

    NASA Astrophysics Data System (ADS)

    Waldecker, Lutz; Bertoni, Roman; Ernstorfer, Ralph; Vorberger, Jan

    2016-04-01

    The electron-phonon coupling and the corresponding energy exchange are investigated experimentally and by ab initio theory in nonequilibrium states of the free-electron metal aluminium. The temporal evolution of the atomic mean-squared displacement in laser-excited thin freestanding films is monitored by femtosecond electron diffraction. The electron-phonon coupling strength is obtained for a range of electronic and lattice temperatures from density functional theory molecular dynamics simulations. The electron-phonon coupling parameter extracted from the experimental data in the framework of a two-temperature model (TTM) deviates significantly from the ab initio values. We introduce a nonthermal lattice model (NLM) for describing nonthermal phonon distributions as a sum of thermal distributions of the three phonon branches. The contributions of individual phonon branches to the electron-phonon coupling are considered independently and found to be dominated by longitudinal acoustic phonons. Using all material parameters from first-principles calculations except the phonon-phonon coupling strength, the prediction of the energy transfer from electrons to phonons by the NLM is in excellent agreement with time-resolved diffraction data. Our results suggest that the TTM is insufficient for describing the microscopic energy flow even for simple metals like aluminium and that the determination of the electron-phonon coupling constant from time-resolved experiments by means of the TTM leads to incorrect values. In contrast, the NLM describing transient phonon populations by three parameters appears to be a sufficient model for quantitatively describing electron-lattice equilibration in aluminium. We discuss the general applicability of the NLM and provide a criterion for the suitability of the two-temperature approximation for other metals.
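
    For reference, the two-temperature model that the nonthermal lattice model refines evolves one electron and one lattice temperature coupled by a single constant g; a minimal explicit-Euler sketch with illustrative parameters (not the ab initio values for aluminium used in the study):

    ```python
    # Two-temperature model (TTM): the approximation argued above to be
    # insufficient. All parameter values below are illustrative only.
    Ce, Cl = 3.0e4, 2.4e6     # electron / lattice heat capacities, J m^-3 K^-1
    g = 3.0e17                # electron-phonon coupling, W m^-3 K^-1
    Te, Tl = 2000.0, 300.0    # K, just after laser excitation
    dt, steps = 1e-15, 20000  # 1 fs time step, 20 ps total

    for _ in range(steps):
        q = g * (Te - Tl)     # energy flow from electrons to the lattice
        Te -= q / Ce * dt
        Tl += q / Cl * dt
    print(Te, Tl)             # the two temperatures have nearly equilibrated
    ```

    The nonthermal lattice model replaces the single lattice temperature by separate thermal populations for the three phonon branches, each with its own coupling to the electrons.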

  13. Interval Training

    MedlinePlus

    ... before trying any type of interval training. Recent studies suggest, however, that interval training can be used safely for short periods even in individuals with heart disease. Also keep the risk of overuse injury in mind. If you rush into a strenuous workout before ...

  14. An interval-possibilistic basic-flexible programming method for air quality management of municipal energy system through introducing electric vehicles.

    PubMed

    Yu, L; Li, Y P; Huang, G H; Shan, B G

    2017-03-25

    Contradictions between sustainable transportation development and environmental concerns have been aggravated significantly and have become one of the major issues for energy systems planning and management. A heavy emphasis is placed on the stimulation of electric vehicles (EVs) to handle these problems, which are associated with various complexities and uncertainties in a municipal energy system (MES). In this study, an interval-possibilistic basic-flexible programming (IPBFP) method is proposed for planning the MES of Qingdao, where uncertainties expressed as interval-flexible variables and interval-possibilistic parameters can be effectively reflected. Support vector regression (SVR) is used for predicting the electricity demand of the city under various scenarios. Solutions for EV stimulation levels and satisfaction levels in association with flexible constraints and predetermined necessity degrees are analyzed, which can help identify optimized energy-supply patterns that support improved air quality and hedge against violation of soft constraints. Results disclose that large-scale development of EVs can help the city's energy system evolve in an environmentally effective way. However, compared to the rapid growth of transportation, the EVs' contribution to improving the city's air quality is limited. It is desired that, to achieve an environmentally sustainable MES, more attention should be focused on integrating additional renewable energy resources and stimulating EVs, as well as improving energy transmission, transport and storage.

  15. Fourth-grade children's dietary recall accuracy for energy intake at school meals differs by social desirability and body mass index percentile in a study concerning retention interval.

    PubMed

    Guinn, Caroline H; Baxter, Suzanne D; Royer, Julie A; Hardin, James W; Mackelprang, Alyssa J; Smith, Albert F

    2010-05-01

    Data from a study concerning retention interval and school-meal observation on children's dietary recalls were used to investigate relationships of social desirability score (SDS) and body mass index percentile (BMI%) to recall accuracy for energy for observed (n = 327) children, and to reported energy for observed and unobserved (n = 152) children. Report rates (reported/observed) correlated negatively with SDS and BMI%. Correspondence rates (correctly reported/observed) correlated negatively with SDS. Inflation ratios (overreported/observed) correlated negatively with BMI%. The relationship between reported energy and each of SDS and BMI% did not depend on observation status. Studies utilizing children's dietary recalls should assess SDS and BMI%.
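
    The three accuracy measures named in parentheses above are simple ratios; with hypothetical kcal values for a single child they would be computed as follows:

    ```python
    # Hypothetical energy amounts (kcal) for one observed child.
    observed   = 650.0  # energy observed eaten at school meals
    reported   = 520.0  # energy reported in the dietary recall
    correct    = 470.0  # reported energy that matches observation
    overreport = 50.0   # reported energy that was not observed

    report_rate     = reported / observed    # reported/observed
    correspondence  = correct / observed     # correctly reported/observed
    inflation_ratio = overreport / observed  # overreported/observed
    ```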

  16. Estimating the Gibbs energy of hydration from molecular dynamics trajectories obtained by integral equations of the theory of liquids in the RISM approximation

    NASA Astrophysics Data System (ADS)

    Tikhonov, D. A.; Sobolev, E. V.

    2011-04-01

    A method of integral equations of the theory of liquids in the reference interaction site model (RISM) approximation is used to estimate the Gibbs energy averaged over equilibrium trajectories computed by molecular mechanics. The peptide oxytocin is selected as the object of interest. The Gibbs energy is calculated using all of the chemical potential formulas introduced in the RISM approach for the excess chemical potential of solvation and is compared with estimates from the generalized Born model. Some formulas are shown to give the wrong sign for the Gibbs energy change when the peptide passes from the gas phase into an aqueous environment; the other formulas give overestimated Gibbs energy changes with the right sign. Note that allowing for the repulsive correction in the approximate analytical expressions for the Gibbs energy derived by thermodynamic perturbation theory is not a remedy.

  17. Energy-averaged electron-ion momentum transport cross section in the Born approximation and Debye-Hückel potential: Comparison with the cut-off theory

    NASA Astrophysics Data System (ADS)

    Zaghloul, Mofreh R.; Bourham, Mohamed A.; Doster, J. Michael

    2000-04-01

    An exact analytical expression for the energy-averaged electron-ion momentum transport cross section in the Born approximation and Debye-Hückel exponentially screened potential has been derived and compared with the formulae given by other authors. A quantitative comparison between cut-off theory and quantum mechanical perturbation theory has been presented. Based on results from the Born approximation and Spitzer's formula, a new approximate formula for the quantum Coulomb logarithm has been derived and shown to be more accurate than previous expressions.
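
    The exponentially screened (Debye-Hückel) potential on which both treatments rest has the standard form (SI units, our transcription):

    ```latex
    V(r) = \frac{Z e^{2}}{4\pi\varepsilon_{0}\, r}\, e^{-r/\lambda_D},
    ```

    where λ_D is the Debye screening length; the Coulomb logarithm then arises from the energy average of the momentum-transfer cross section computed with this potential, either quantum mechanically (Born approximation) or by cutting off the classical Coulomb integral at λ_D.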

  18. Energy-averaged electron-ion momentum transport cross section in the Born Approximation and Debye-Hückel potential: Comparison with the cut-off theory

    NASA Astrophysics Data System (ADS)

    Zaghloul, Mofreh R.; Bourham, Mohamed A.; Doster, J. Michael

    2000-02-01

    An exact analytical expression for the energy-averaged electron-ion momentum transport cross section in the Born approximation and Debye-Hückel exponentially screened potential has been derived and compared with the formulae given by other authors. A quantitative comparison between cut-off theory and quantum mechanical perturbation theory has been presented. Based on results from the Born approximation and Spitzer's formula, a new approximate formula for the quantum Coulomb logarithm has been derived and shown to be more accurate than previous expressions.

  19. Discussion on the energy content of the galactic dark matter Bose-Einstein condensate halo in the Thomas-Fermi approximation

    SciTech Connect

    De Souza, J.C.C.; Pires, M.O.C.

    2014-03-01

    We show that the galactic dark matter halo, considered to be composed of a Bose-Einstein condensate of axionlike particles [6] trapped by a self-gravitating potential [5], may be stable in the Thomas-Fermi approximation, provided appropriate choices for the dark matter particle mass and scattering length are made. The demonstration is performed by means of the calculation of the potential, kinetic and self-interaction energy terms of a galactic halo described by a Boehmer-Harko density profile. We discuss the validity of the Thomas-Fermi approximation for the halo system, and show that the kinetic energy contribution is indeed negligible.

  20. Approximate expression for the potential energy of the double-layer interaction between two parallel ion-penetrable membranes at small separations in an electrolyte solution.

    PubMed

    Ohshima, Hiroyuki

    2010-10-01

    An approximate expression for the potential energy of the double-layer interaction between two parallel similar ion-penetrable membranes in a symmetrical electrolyte solution is derived via a linearization method, in which the nonlinear Poisson-Boltzmann equations in the regions inside and outside the membranes are linearized with respect to the deviation of the electric potential from the Donnan potential. This approximation works quite well for small membrane separations h for all values of the density of fixed charges in the membranes (or the Donnan potential) and gives a correct limiting form of the interaction energy (or the interaction force) as h → 0.
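
    The linearization step has, schematically, the following form (a standard small-deviation expansion; the detailed matching conditions are in the paper): writing ψ = ψ_DON + δψ inside the membranes and expanding the Poisson-Boltzmann equation to first order in δψ gives

    ```latex
    \frac{d^{2}\delta\psi}{dx^{2}} = \kappa_m^{2}\,\delta\psi,
    \qquad
    \kappa_m = \kappa \left[\cosh\!\left(\frac{ze\psi_{\mathrm{DON}}}{kT}\right)\right]^{1/2},
    ```

    i.e., a Debye parameter renormalized by the Donnan potential, which is why the approximation remains accurate for all fixed-charge densities once the separation h is small.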

  1. Approximate forms of the pair-density-functional kinetic energy on the basis of a rigorous expression with coupling-constant integration

    NASA Astrophysics Data System (ADS)

    Higuchi, Katsuhiko; Higuchi, Masahiko

    2014-12-01

    We propose approximate kinetic energy (KE) functionals of pair-density (PD)-functional theory on the basis of the rigorous expression with coupling-constant integration (RECCI) that has recently been derived [Phys. Rev. A 85, 062508 (2012), 10.1103/PhysRevA.85.062508]. These approximate functionals consist of noninteracting KE and correlation energy terms. The Thomas-Fermi-Weizsäcker functional is found to perform better as the noninteracting KE term than the Thomas-Fermi and Gaussian model functionals. It is also shown that the correlation energy term is indispensable for reducing the KE error, i.e., for reducing both the inappropriateness of the approximate functional and the error of the resultant PD. Concerning the correlation energy term, we further propose an approximate functional in addition to using the existing familiar functionals. This functional satisfies the scaling property of the KE functional and yields a reasonable PD in the sense that the KE, electron-electron interaction, and potential energies tend to be improved while satisfying the virial theorem. The present results not only suggest the usefulness of the RECCI but also provide a guideline for further improvement of RECCI-based KE functionals.

  2. Dual quantum electrodynamics: Dyon-dyon and charge-monopole scattering in a high-energy approximation

    SciTech Connect

    Gamberg, Leonard; Milton, Kimball A.

    2000-04-01

    We develop the quantum field theory of electron-point magnetic monopole interactions and, more generally, dyon-dyon interactions, based on the original string-dependent "nonlocal" action of Dirac and Schwinger. We demonstrate that a viable nonperturbative quantum field theoretic formulation can be constructed that results in a string-independent cross section for monopole-electron and dyon-dyon scattering. Such calculations can be done only by using nonperturbative approximations such as the eikonal approximation and not by some mutilation of lowest-order perturbation theory.

  3. Calculation of the Energy-Band Structure of the Kronig-Penney Model Using the Nearly-Free and Tightly-Bound-Electron Approximations

    ERIC Educational Resources Information Center

    Wetsel, Grover C., Jr.

    1978-01-01

    Calculates the energy-band structure of noninteracting electrons in a one-dimensional crystal using exact and approximate methods for a rectangular-well atomic potential. A comparison of the two solutions as a function of potential-well depth and ratio of lattice spacing to well width is presented. (Author/GA)
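
    For orientation, in the delta-function limit of the rectangular wells the Kronig-Penney bands follow from the textbook dispersion relation (a standard result, not quoted from the article):

    ```latex
    \cos(ka) = \cos(\alpha a) + P\,\frac{\sin(\alpha a)}{\alpha a},
    \qquad
    \alpha = \frac{\sqrt{2mE}}{\hbar},
    ```

    with lattice spacing a and dimensionless well-strength P; energies for which the right-hand side exceeds unity in magnitude are forbidden, producing the band gaps whose dependence on well depth and spacing the article examines.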

  4. Single differential electron impact ionization cross sections in the binary-encounter-Bethe approximation for the low binding energy regime

    NASA Astrophysics Data System (ADS)

    Guerra, M.; Amaro, P.; Machado, J.; Santos, J. P.

    2015-09-01

    An analytical expression based on the binary-encounter-Bethe model for energy differential cross sections in the low binding energy regime is presented. Both the binary-encounter-Bethe model and its modified counterpart are extended to shells with very low binding energy by removing the constraints in the interference term of the Mott cross section, originally introduced by Kim et al. The influence of the ionic factor is also studied for such targets. All the binary-encounter-Bethe based models presented here are checked against experimental results of low binding energy targets, such as the total ionization cross sections of alkali metals. The energy differential cross sections for H and He, at several incident energies, are also compared to available experimental and theoretical values.
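
    For reference, the per-orbital binary-encounter-Bethe cross section has the well-known Kim-Rudd form (quoted from the general literature, not the modified version derived here):

    ```latex
    \sigma_{\mathrm{BEB}} = \frac{S}{t+u+1}
    \left[\frac{\ln t}{2}\left(1-\frac{1}{t^{2}}\right) + 1 - \frac{1}{t} - \frac{\ln t}{t+1}\right],
    \qquad
    t = \frac{T}{B},\quad u = \frac{U}{B},\quad S = 4\pi a_{0}^{2} N \left(\frac{R}{B}\right)^{2},
    ```

    with incident energy T, orbital binding energy B, orbital kinetic energy U, occupation number N, Bohr radius a₀, and Rydberg energy R. It is the last (interference) term whose constraints are relaxed in the present work for shells with very low B.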

  5. Detectability of auditory signals presented without defined observation intervals

    NASA Technical Reports Server (NTRS)

    Watson, C. S.; Nichols, T. L.

    1976-01-01

    Ability to detect tones in noise was measured without defined observation intervals. Latency density functions were estimated for the first response following a signal and, separately, for the first response following randomly distributed instances of background noise. Detection performance was measured by the maximum separation between the cumulative latency density functions for signal-plus-noise and for noise alone. Values of the index of detectability, estimated by this procedure, were approximately those obtained with a 2-dB weaker signal and defined observation intervals. Simulation of defined- and non-defined-interval tasks with an energy detector showed that this device performs very similarly to the human listener in both cases.
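
    The performance measure described, the maximum separation between the two cumulative latency distributions, is straightforward to compute; a sketch with purely synthetic latency data:

    ```python
    import numpy as np

    def detection_index(signal_latencies, noise_latencies):
        """Maximum separation between the cumulative latency distributions
        for signal-plus-noise and for noise alone."""
        t = np.sort(np.concatenate([signal_latencies, noise_latencies]))
        f_s = np.searchsorted(np.sort(signal_latencies), t, side="right") / len(signal_latencies)
        f_n = np.searchsorted(np.sort(noise_latencies), t, side="right") / len(noise_latencies)
        return np.max(f_s - f_n)

    rng = np.random.default_rng(0)
    sig = rng.gamma(2.0, 0.15, 500)  # hypothetical response latencies after signals (s)
    noi = rng.gamma(2.0, 0.30, 500)  # after random background-noise instants (s)
    print(detection_index(sig, noi))
    ```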

  6. Random-phase approximation correlation energies from Lanczos chains and an optimal basis set: theory and applications to the benzene dimer.

    PubMed

    Rocca, Dario

    2014-05-14

    A new ab initio approach is introduced to compute the correlation energy within the adiabatic-connection fluctuation-dissipation theorem in the random phase approximation. First, an optimally small basis set to represent the response functions is obtained by diagonalizing an approximate dielectric matrix containing the kinetic energy contribution only. Then, the Lanczos algorithm is used to compute the full dynamical dielectric matrix and the correlation energy. Convergence issues with respect to the number of empty states or the dimension of the basis set are avoided and dynamical effects are easily taken into account. To demonstrate the accuracy and efficiency of this approach, the binding curves for three different configurations of the benzene dimer are computed: T-shaped, sandwich, and slipped parallel.
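
    The Lanczos step reduces a Hermitian (dielectric-like) operator to a small tridiagonal chain using only operator-vector products, which is what removes the explicit sums over empty states; a generic sketch (not the actual implementation of the paper):

    ```python
    import numpy as np

    def lanczos_chain(apply_a, v0, n_iter):
        """Tridiagonalize a Hermitian operator from matvecs alone.
        Returns the diagonal (alphas) and off-diagonal (betas) elements."""
        alphas, betas = [], []
        v_prev = np.zeros_like(v0)
        v = v0 / np.linalg.norm(v0)
        beta = 0.0
        for _ in range(n_iter):
            w = apply_a(v) - beta * v_prev
            alpha = np.vdot(v, w).real
            w -= alpha * v
            beta = np.linalg.norm(w)
            alphas.append(alpha)
            betas.append(beta)
            if beta < 1e-12:
                break
            v_prev, v = v, w / beta
        return np.array(alphas), np.array(betas[:-1])

    # usage with an explicit symmetric matrix standing in for the operator
    A = np.diag(np.arange(1.0, 11.0))
    a, b = lanczos_chain(lambda x: A @ x, np.ones(10), 8)
    ```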

  7. Ensemble v-representable ab initio density-functional calculation of energy and spin in atoms: A test of exchange-correlation approximations

    SciTech Connect

    Kraisler, Eli; Makov, Guy; Kelson, Itzhak

    2010-10-15

    The total energies and the spin states for atoms and their first ions with Z=1-86 are calculated within the local spin-density approximation (LSDA) and the generalized-gradient approximation (GGA) to the exchange-correlation (xc) energy in density-functional theory. Atoms and ions for which the ground-state density is not pure-state v-representable are treated as ensemble v-representable with fractional occupations of the Kohn-Sham system. A recently developed algorithm which searches over ensemble v-representable densities [E. Kraisler et al., Phys. Rev. A 80, 032115 (2009)] is employed in the calculations. It is found that for many atoms, the ionization energies obtained with the GGA are only modestly improved with respect to experimental data, as compared to the LSDA. However, even in those groups of atoms where the improvement is systematic, there remains a non-negligible difference with respect to experiment. The ab initio electronic configuration in the Kohn-Sham reference system does not always equal the configuration obtained from the spectroscopic term within the independent-electron approximation. It is shown that use of the latter configuration can prevent the energy-minimization process from converging to the global minimum, e.g., in lanthanides. The spin values calculated ab initio fit the experiment for most atoms and are almost unaffected by the choice of the xc functional. Among the systems with incorrectly obtained spin, there exist some cases (e.g., V, Pt) for which the result is found to be stable with respect to small variations in the xc approximation. These findings suggest the necessity of a significant modification of the exchange-correlation functional, probably of a nonlocal nature, to accurately describe such systems.

  8. Composition of primary cosmic rays at energies 10^15 to approximately 10^16 eV

    NASA Technical Reports Server (NTRS)

    Amenomori, M.; Konishi, E.; Hotta, N.; Mizutani, K.; Kasahara, K.; Kobayashi, T.; Mikumo, E.; Sato, K.; Yuda, T.; Mito, I.

    1985-01-01

    The ΣEγ spectrum in the range (1–5) × 10^3 TeV observed at Mt. Fuji suggests that the flux of primary protons at 10^15–10^16 eV is lower by a factor of 2–3 than a simple extrapolation from lower energies: the integral proton spectrum tends to be steeper than E^-1.7 around 10^14 eV, and the spectral index becomes ≈ 2.0 around 10^15 eV. If the total flux of primary particles has no steepening up to ≈ 10^15 eV, then the fraction of primary protons in the total flux should be ≈ 20%, in contrast to ≈ 45% at lower energies.

  9. Few-particles generation channels in inelastic hadron-nuclear interactions at energy ≈ 400 GeV

    NASA Technical Reports Server (NTRS)

    Tsomaya, P. V.

    1985-01-01

    The behavior of the few-particle generation channels in interactions of hadrons with CH2, Al, Cu and Pb nuclei at a mean energy of 400 GeV was investigated. The values of the coherent production cross-sections β_coh for the investigated nuclei are given. The A-dependence of coherent and noncoherent events is investigated. The results are compared with simulations based on the additive quark model (AQM).

  10. Neutrino and antineutrino CCQE scattering in the SuperScaling Approximation from MiniBooNE to NOMAD energies

    NASA Astrophysics Data System (ADS)

    Megias, G. D.; Amaro, J. E.; Barbaro, M. B.; Caballero, J. A.; Donnelly, T. W.

    2013-08-01

    We compare the predictions of the SuperScaling model for charged-current quasielastic muonic neutrino and antineutrino scattering from 12C with experimental data spanning an energy range up to 100 GeV. We discuss the sensitivity of the results to different parametrizations of the nucleon vector and axial-vector form factors. Finally, we show the differences between electron and muon (anti)neutrino cross sections relevant for the νSTORM facility.

  11. Comparative assessment of density functional methods for evaluating essential parameters to simulate SERS spectra within the excited state energy gradient approximation

    NASA Astrophysics Data System (ADS)

    Mohammadpour, Mozhdeh; Jamshidi, Zahra

    2016-05-01

    The prospect of challenges in reproducing and interpreting resonance Raman properties of molecules interacting with metal clusters has prompted the present research initiative. Resonance Raman spectra based on the time-dependent gradient approximation are examined in the framework of density functional theory using different methods for representing the exchange-correlation functional. In this work the performance of different XC functionals in the prediction of ground-state properties, excited-state energies, and gradients is compared and discussed. Resonance Raman properties based on the time-dependent gradient approximation for low-lying charge-transfer states are calculated and compared for the different methods. We draw the following conclusions: (1) for calculating the binding energy and ground-state geometry, dispersion-corrected functionals give the best performance in comparison to ab initio calculations; (2) GGA and meta-GGA functionals give good accuracy in calculating vibrational frequencies; (3) excited-state energies determined by hybrid and range-separated hybrid functionals are in good agreement with EOM-CCSD calculations; and (4) in calculating resonance Raman properties, GGA functionals give good and reasonable performance in comparison to experiment; however, calculating the excited-state gradient by using the hybrid functional on the Hessian of the GGA improves the results of the hybrid functional significantly. Finally, we conclude that the agreement of charge-transfer surface-enhanced resonance Raman spectra with experiment is improved significantly by using the excited-state gradient approximation.

  12. ENERGY CONSERVATION AND GRAVITY WAVES IN SOUND-PROOF TREATMENTS OF STELLAR INTERIORS. PART I. ANELASTIC APPROXIMATIONS

    SciTech Connect

    Brown, Benjamin P.; Zweibel, Ellen G.; Vasil, Geoffrey M.

    2012-09-10

    Typical flows in stellar interiors are much slower than the speed of sound. To follow the slow evolution of subsonic motions, various sound-proof equations are in wide use, particularly in stellar astrophysical fluid dynamics. These low-Mach number equations include the anelastic equations. Generally, these equations are valid in nearly adiabatically stratified regions like stellar convection zones, but may not be valid in the sub-adiabatic, stably stratified stellar radiative interiors. Understanding the coupling between the convection zone and the radiative interior is a problem of crucial interest and may have strong implications for solar and stellar dynamo theories as the interface between the two, called the tachocline in the Sun, plays a crucial role in many solar dynamo theories. Here, we study the properties of gravity waves in stably stratified atmospheres. In particular, we explore how gravity waves are handled in various sound-proof equations. We find that some anelastic treatments fail to conserve energy in stably stratified atmospheres, instead conserving pseudo-energies that depend on the stratification, and we demonstrate this numerically. One anelastic equation set does conserve energy in all atmospheres and we provide recommendations for converting low-Mach number anelastic codes to this set of equations.
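
    The defining feature of the anelastic equations discussed here is that the fully compressible continuity equation is replaced by a constraint on the mass flux against the background stratification ρ̄(z), which filters sound waves:

    ```latex
    \nabla \cdot \big(\bar{\rho}\,\mathbf{u}\big) = 0,
    ```

    and it is the accompanying choices made in the momentum and energy equations that determine whether gravity waves conserve a true energy or only a stratification-dependent pseudo-energy.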

  13. Approximation algorithms

    PubMed Central

    Schulz, Andreas S.; Shmoys, David B.; Williamson, David P.

    1997-01-01

    Increasing global competition, rapidly changing markets, and greater consumer awareness have altered the way in which corporations do business. To become more efficient, many industries have sought to model some operational aspects by gigantic optimization problems. It is not atypical to encounter models that capture 10^6 separate “yes” or “no” decisions to be made. Although one could, in principle, try all 2^(10^6) possible solutions to find the optimal one, such a method would be impractically slow. Unfortunately, for most of these models, no algorithms are known that find optimal solutions with reasonable computation times. Typically, industry must rely on solutions of unguaranteed quality that are constructed in an ad hoc manner. Fortunately, for some of these models there are good approximation algorithms: algorithms that produce solutions quickly that are provably close to optimal. Over the past 6 years, there has been a sequence of major breakthroughs in our understanding of the design of approximation algorithms and of limits to obtaining such performance guarantees; this area has been one of the most flourishing areas of discrete mathematics and theoretical computer science. PMID:9370525
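
    As a concrete taste of such a performance guarantee (a textbook example, not one drawn from this article), the classic matching-based algorithm for vertex cover always returns a cover at most twice the optimal size:

    ```python
    def vertex_cover_2approx(edges):
        """Pick an uncovered edge, take both endpoints, repeat.
        The chosen edges form a matching, and any optimal cover must
        contain at least one endpoint of each, hence the factor of 2."""
        cover = set()
        for u, v in edges:
            if u not in cover and v not in cover:
                cover |= {u, v}
        return cover

    print(vertex_cover_2approx([(1, 2), (2, 3), (3, 4), (4, 1), (2, 4)]))
    ```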

  14. Minimax confidence intervals in geomagnetism

    NASA Technical Reports Server (NTRS)

    Stark, Philip B.

    1992-01-01

    The present paper uses the theory of Donoho (1989) to find lower bounds on the lengths of optimally short fixed-length confidence intervals (minimax confidence intervals) for Gauss coefficients of the field of degree 1-12 using the heat flow constraint. The bounds on optimal minimax intervals are about 40 percent shorter than Backus' intervals: no procedure for producing fixed-length confidence intervals, linear or nonlinear, can give intervals shorter than about 60 percent of the length of Backus' in this problem. While both methods rigorously account for the fact that core field models are infinite-dimensional, the application of the techniques to the geomagnetic problem involves approximations and counterfactual assumptions about the data errors, and so these results are likely to be extremely optimistic estimates of the actual uncertainty in Gauss coefficients.

  15. The Vertical-current Approximation Nonlinear Force-free Field Code—Description, Performance Tests, and Measurements of Magnetic Energies Dissipated in Solar Flares

    NASA Astrophysics Data System (ADS)

    Aschwanden, Markus J.

    2016-06-01

    In this work we provide an updated description of the Vertical-Current Approximation Nonlinear Force-Free Field (VCA-NLFFF) code, which is designed to measure the evolution of the potential, non-potential, and free energies, and the magnetic energies dissipated during solar flares. This code provides a complementary and alternative method to existing traditional NLFFF codes. The chief advantages of the VCA-NLFFF code over traditional NLFFF codes are the circumvention of the unrealistic assumption of a force-free photosphere in the magnetic field extrapolation method, the capability to minimize the misalignment angles between observed coronal loops (or chromospheric fibril structures) and theoretical model field lines, and computational speed. In performance tests of the VCA-NLFFF code against the NLFFF code of Wiegelmann, we find agreement in the potential, non-potential, and free energy within a factor of ≲ 1.3, but the Wiegelmann code yields on average a factor of 2 lower flare energies. The VCA-NLFFF code is found to detect decreases in flare energies in most X-, M-, and C-class flares. The successful detection of energy decreases during a variety of flares with the VCA-NLFFF code indicates that current-driven twisting and untwisting of the magnetic field is an adequate model to quantify the storage of magnetic energies in active regions and their dissipation during flares. The VCA-NLFFF code is also publicly available in the Solar SoftWare library.

  16. Distributed memory parallel implementation of energies and gradients for second-order Møller-Plesset perturbation theory with the resolution-of-the-identity approximation.

    PubMed

    Hättig, Christof; Hellweg, Arnim; Köhn, Andreas

    2006-03-14

    We present a parallel implementation of second-order Møller-Plesset perturbation theory with the resolution-of-the-identity approximation (RI-MP2). The implementation is based on a recent improved sequential implementation of RI-MP2 within the Turbomole program package and employs the message passing interface (MPI) standard for communication between distributed memory nodes. The parallel implementation extends the applicability of canonical MP2 to considerably larger systems. Examples are presented for full geometry optimizations with up to 60 atoms and 3300 basis functions and MP2 energy calculations with more than 200 atoms and 7000 basis functions.

  17. Estimation of the Breakup Cross-Sections in 6He + 12C Reaction Within High-Energy Approximation and Microscopic Optical Potential

    NASA Astrophysics Data System (ADS)

    Lukyanov, V. K.; Zemlyanaya, E. V.; Lukyanov, K. V.

    The breakup cross-sections in the reaction 6He + 12C are calculated at about 40 MeV/nucleon using the high-energy approximation (HEA) and microscopic optical potentials (OPs) for the interaction of the projectile fragments 4He and 2n with the target nucleus 12C. Treating the di-neutron h = 2n as a single particle, the relative-motion hα wave function is estimated so as to reproduce both the separation energy of h in 6He and the rms radius of the latter. The stripping and absorption total cross-sections are calculated and their sum is compared with the total reaction cross-section obtained within a double-folding microscopic OP for 6He + 12C scattering. It is concluded that the breakup cross-sections contribute about 50% of the total reaction cross-section.

  18. Modified feed-forward neural network structures and combined-function-derivative approximations incorporating exchange symmetry for potential energy surface fitting.

    PubMed

    Nguyen, Hieu T T; Le, Hung M

    2012-05-10

    The classical interchange (permutation) of atoms of similar identity does not have an effect on the overall potential energy. In this study, we present feed-forward neural network structures that provide permutation symmetry to the potential energy surfaces of molecules. The new feed-forward neural network structures are employed to fit the potential energy surfaces of two illustrative molecules, H(2)O and ClOOCl. Modifications are made to describe the symmetric interchange (permutation) of atoms of similar identity (or, mathematically, the permutation of symmetric input parameters). The combined-function-derivative approximation algorithm (J. Chem. Phys. 2009, 130, 134101) is also implemented to fit the neural-network potential energy surfaces accurately. The combination of our symmetric neural networks and the function-derivative fitting effectively produces PES fits using fewer training data points. For H(2)O, only 282 configurations are employed as the training set; the testing root-mean-squared and mean-absolute energy errors are 0.0103 eV (0.236 kcal/mol) and 0.0078 eV (0.179 kcal/mol), respectively. In the ClOOCl case, 1693 configurations are required to construct the training set; the root-mean-squared and mean-absolute energy errors for the ClOOCl testing set are 0.0409 eV (0.943 kcal/mol) and 0.0269 eV (0.620 kcal/mol), respectively. Overall, we find good agreement between ab initio and NN predictions in terms of energy and gradient errors, and conclude that the new feed-forward neural-network models advantageously describe the molecules with excellent accuracy.
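
    The core requirement, that swapping identical atoms must leave the network input unchanged, can be illustrated with a toy feature choice for H2O (our own construction, not the network structure actually proposed in the paper):

    ```python
    import numpy as np

    def symmetric_inputs(r1, r2, theta):
        """Feed the network functions of the two O-H distances that are
        invariant under exchanging the hydrogens."""
        return np.array([r1 + r2, (r1 - r2) ** 2, theta])

    # swapping the two hydrogens leaves the input vector unchanged
    assert np.allclose(symmetric_inputs(0.96, 1.02, 1.82),
                       symmetric_inputs(1.02, 0.96, 1.82))
    ```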

  19. On the piecewise convex or concave nature of ground state energy as a function of fractional number of electrons for approximate density functionals.

    PubMed

    Li, Chen; Yang, Weitao

    2017-02-21

    We provide a rigorous proof that the Hartree-Fock energy, as a function of the fractional electron number, E(N), is piecewise concave. Moreover, for semi-local density functionals, we show that the piecewise convexity of the E(N) curve, as stated in the literature, is not generally true for all fractions. By an analysis based on the exchange-only local density approximation and careful examination of the E(N) curve, we find that for some systems there exists a very small concave region, corresponding to adding a small fraction of an electron to the integer system, while the remaining E(N) curve is convex. Several numerical examples are provided as verification. Although the E(N) curve is not convex everywhere in these systems, the previous conclusions on the consequences of the delocalization error in commonly used density functional approximations, in particular the underestimation of ionization potentials, the overestimation of electron affinities, and other related issues, remain unchanged. This suggests that, instead of the term convexity, a modified and more rigorous description of the delocalization error is that the E(N) curve lies below the straight line segment across the neighboring integer points for these approximate functionals.
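
    The delocalization-error criterion restated here is easy to test numerically: for fractional N between two integers, check whether E(N) lies below the straight line segment joining the integer endpoints. The sketch below uses synthetic energies, not the paper's data.

      # Hedged illustration: test whether an E(N) curve lies below the
      # line segment joining the neighboring integer points.
      import numpy as np

      def below_segment(n_frac, e_frac, e_left, e_right):
          # Linear interpolation between the integer endpoints E(N0), E(N0+1).
          line = e_left + (e_right - e_left) * n_frac
          return e_frac < line

      # Synthetic E(N) between N0 and N0+1 (arbitrary units) that sags below.
      n = np.linspace(0.0, 1.0, 11)
      e = -0.5 * n * (1.0 - n) + (-1.0) * (1 - n) + (-1.3) * n
      print(all(below_segment(ni, ei, -1.0, -1.3)
                for ni, ei in zip(n[1:-1], e[1:-1])))   # True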

  20. Accurate and efficient representation of intramolecular energy in ab initio generation of crystal structures. I. Adaptive local approximate models

    PubMed Central

    Sugden, Isaac; Adjiman, Claire S.; Pantelides, Constantinos C.

    2016-01-01

    The global search stage of crystal structure prediction (CSP) methods requires a fine balance between accuracy and computational cost, particularly for the study of large flexible molecules. A major improvement in the accuracy and cost of the intramolecular energy function used in the CrystalPredictor II [Habgood et al. (2015). J. Chem. Theory Comput. 11, 1957-1969] program is presented, where the most efficient use of computational effort is ensured via the use of adaptive local approximate model (LAM) placement. The entire search space of the relevant molecule's conformations is initially evaluated using a coarse, low accuracy grid. Additional LAM points are then placed at appropriate points determined via an automated process, aiming to minimize the computational effort expended in high-energy regions whilst maximizing the accuracy in low-energy regions. As the size, complexity and flexibility of molecules increase, the reduction in computational cost becomes marked. This improvement is illustrated with energy calculations for benzoic acid and the ROY molecule, and a CSP study of molecule (XXVI) from the sixth blind test [Reilly et al. (2016). Acta Cryst. B72, 439-459], which is challenging due to its size and flexibility. Its known experimental form is successfully predicted as the global minimum. The computational cost of the study is tractable without the need to make unphysical simplifying assumptions. PMID:27910837
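
    The adaptive placement strategy can be sketched in one dimension: refine an interpolation grid only where the current model is inaccurate and the energy is low. The code below is a toy illustration under those assumptions; the torsional energy function, tolerance, and energy cutoff are all hypothetical.

      # Illustrative adaptive LAM-point placement on a 1D torsion coordinate.
      import numpy as np

      def true_energy(phi):          # stand-in for an ab initio torsional profile
          return 2.0 * (1 - np.cos(phi)) + 0.5 * np.sin(3 * phi) ** 2

      def refine(grid, tol=0.05, e_cut=3.0):
          grid = sorted(grid)
          new = []
          for a, b in zip(grid[:-1], grid[1:]):
              mid = 0.5 * (a + b)
              interp = 0.5 * (true_energy(a) + true_energy(b))
              err = abs(interp - true_energy(mid))
              # Refine only where the model is inaccurate AND the region is
              # low in energy (high-energy regions rarely matter for CSP minima).
              if err > tol and true_energy(mid) < e_cut:
                  new.append(mid)
          return grid + new

      grid = list(np.linspace(0, 2 * np.pi, 7))
      for _ in range(4):             # a few refinement sweeps
          grid = refine(grid)
      print(f"{len(grid)} LAM points after refinement")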

  1. Generalized Gradient Approximations of the Noninteracting Kinetic Energy from the Semiclassical Atom Theory: Rationalization of the Accuracy of the Frozen Density Embedding Theory for Nonbonded Interactions.

    PubMed

    Laricchia, S; Fabiano, E; Constantin, L A; Della Sala, F

    2011-08-09

    We present a new class of noninteracting kinetic energy (KE) functionals, derived from the semiclassical-atom theory. These functionals are constructed using the link between exchange and kinetic energies and employ a generalized gradient approximation (GGA) for the enhancement factor, namely, the Perdew-Burke-Ernzerhof (PBE) one. Two of them, named APBEK and revAPBEK, recover in the slowly varying density limit the modified second-order gradient (MGE2) expansion of the KE, which is valid for a neutral atom with a large number of electrons. APBEK contains no empirical parameters, while revAPBEK has one empirical parameter derived from exchange energies, which leads to a higher degree of nonlocality. The other two functionals, APBEKint and revAPBEKint, modify the APBEK and revAPBEK enhancement factors, respectively, to recover the second-order gradient expansion (GE2) of the homogeneous electron gas. We first benchmarked the total KE of atoms/ions and jellium spheres/surfaces: we found that functionals based on the MGE2 are as accurate as current state-of-the-art KE functionals that contain several empirical parameters. Then, we verified the accuracy of these new functionals in the context of the frozen density embedding (FDE) theory. We benchmarked 20 systems with nonbonded interactions, and we considered embedding errors in the energy and density. We found that all of the PBE-like functionals give accurate and similar embedded densities, but the revAPBEK and revAPBEKint functionals have significantly superior accuracy for the embedded energy, outperforming current state-of-the-art GGA approaches. While the revAPBEK functional is more accurate than revAPBEKint, APBEKint is better than APBEK. To rationalize this performance, we introduce the reduced-gradient decomposition of the nonadditive kinetic energy, and we discuss how systems with different interactions can be described with the same functional form.
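
    For concreteness, a GGA kinetic-energy functional of the kind described can be evaluated pointwise as the Thomas-Fermi energy density times a PBE-type enhancement factor. The sketch below assumes this form; the mu value shown is an MGE2-like coefficient quoted for illustration and should not be taken as the paper's exact parameterization.

      # Pointwise sketch of a PBE-like GGA kinetic-energy density (a.u.).
      import numpy as np

      C_TF = 0.3 * (3 * np.pi**2) ** (2 / 3)      # Thomas-Fermi constant

      def reduced_gradient(n, grad_n):
          return np.abs(grad_n) / (2 * (3 * np.pi**2) ** (1 / 3) * n ** (4 / 3))

      def tau_gga(n, grad_n, kappa=0.804, mu=0.23889):
          s = reduced_gradient(n, grad_n)
          f = 1 + kappa - kappa / (1 + mu * s**2 / kappa)  # PBE-type enhancement
          return C_TF * n ** (5 / 3) * f

      # Pointwise evaluation for a sample density value and gradient (a.u.)
      print(tau_gga(n=0.3, grad_n=0.2))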

  2. Water 16-mers and hexamers: assessment of the three-body and electrostatically embedded many-body approximations of the correlation energy or the nonlocal energy as ways to include cooperative effects.

    PubMed

    Qi, Helena W; Leverentz, Hannah R; Truhlar, Donald G

    2013-05-30

    This work presents a new fragment method, the electrostatically embedded many-body expansion of the nonlocal energy (EE-MB-NE), and shows that it, along with the previously proposed electrostatically embedded many-body expansion of the correlation energy (EE-MB-CE), produces accurate results for large systems at the level of CCSD(T) coupled cluster theory. We primarily study water 16-mers, but we also test the EE-MB-CE method on water hexamers. We analyze the distributions of two-body and three-body terms to show why the many-body expansion of the electrostatically embedded correlation energy converges faster than the many-body expansion of the entire electrostatically embedded interaction potential. The average magnitude of the dimer contributions to the pairwise additive (PA) term of the correlation energy (which neglects cooperative effects) is only one-half of that of the average dimer contribution to the PA term of the expansion of the total energy; this explains why the mean unsigned error (MUE) of the EE-PA-CE approximation is only one-half of that of the EE-PA approximation. Similarly, the average magnitude of the trimer contributions to the three-body (3B) term of the EE-3B-CE approximation is only one-fourth of that of the EE-3B approximation, and the MUE of the EE-3B-CE approximation is one-fourth that of the EE-3B approximation. Finally, we test the efficacy of two- and three-body density functional corrections. One such density functional correction method, the new EE-PA-NE method, with the OLYP or the OHLYP density functional (where the OHLYP functional is the OptX exchange functional combined with the LYP correlation functional multiplied by 0.5), has the best performance-to-price ratio of any method whose computational cost scales as the third power of the number of monomers and is competitive in accuracy in the tests presented here with even the electrostatically embedded three-body approximation.
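
    The many-body bookkeeping behind the PA and 3B terms can be sketched generically: monomer energies, then pairwise corrections, then trimer corrections. The code below is schematic; a toy pairwise energy stands in for the embedded fragment calculations, so this is not the EE-MB implementation itself.

      # Schematic many-body expansion: PA (two-body) and 3B (three-body) terms.
      from itertools import combinations

      def mb_energy(fragments, energy, order=3):
          e1 = {(i,): energy([f]) for i, f in enumerate(fragments)}
          total = sum(e1.values())
          d2 = {}
          for i, j in combinations(range(len(fragments)), 2):
              d2[(i, j)] = energy([fragments[i], fragments[j]]) - e1[(i,)] - e1[(j,)]
          total += sum(d2.values())                       # pairwise-additive level
          if order >= 3:
              for i, j, k in combinations(range(len(fragments)), 3):
                  e_ijk = energy([fragments[i], fragments[j], fragments[k]])
                  d3 = (e_ijk - e1[(i,)] - e1[(j,)] - e1[(k,)]
                        - d2[(i, j)] - d2[(i, k)] - d2[(j, k)])
                  total += d3                             # three-body level
          return total

      # Toy "energy": -0.1 per pair contact, so the expansion is exact at PA.
      energy = lambda frags: -0.1 * len(frags) * (len(frags) - 1) / 2
      print(mb_energy(["w1", "w2", "w3", "w4"], energy))  # -0.6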

  3. Rough Set Approximations in Formal Concept Analysis

    NASA Astrophysics Data System (ADS)

    Yamaguchi, Daisuke; Murata, Atsuo; Li, Guo-Dong; Nagai, Masatake

    Conventional set approximations are based on a set of attributes; however, these approximations cannot relate an object to the corresponding attribute. In this study, a new model for set approximation based on individual attributes is proposed for interval-valued data. Defining an indiscernibility relation is omitted, since each attribute value is itself a set of values. Two types of approximations, single- and multi-attribute approximations, are presented. A multi-attribute approximation has two solutions: a maximum and a minimum solution. A maximum solution is a set of objects that satisfy the condition of approximation for at least one attribute. A minimum solution is a set of objects that satisfy the condition for all attributes. The proposed set approximation is helpful in finding the features of objects related to the condition attributes when interval-valued data are given. The proposed model contributes to feature extraction in interval-valued information systems.
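
    A minimal sketch of the two solution types follows, under the assumption that the approximation condition is interval overlap with a target interval (the concrete condition in the paper may differ); all object and attribute names are hypothetical.

      # Maximum solution: condition holds for at least one attribute.
      # Minimum solution: condition holds for all attributes.
      def overlaps(iv, target):
          lo, hi = iv
          return not (hi < target[0] or lo > target[1])

      objects = {
          "x1": {"a": (1, 3), "b": (4, 6)},
          "x2": {"a": (2, 5), "b": (0, 1)},
          "x3": {"a": (7, 9), "b": (8, 9)},
      }
      condition = (2, 4)   # hypothetical target interval for every attribute

      maximum = {o for o, at in objects.items()
                 if any(overlaps(iv, condition) for iv in at.values())}
      minimum = {o for o, at in objects.items()
                 if all(overlaps(iv, condition) for iv in at.values())}
      print(maximum, minimum)   # the minimum solution is a subset of the maximum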

  4. Laplacian-Level Kinetic Energy Approximations Based on the Fourth-Order Gradient Expansion: Global Assessment and Application to the Subsystem Formulation of Density Functional Theory.

    PubMed

    Laricchia, Savio; Constantin, Lucian A; Fabiano, Eduardo; Della Sala, Fabio

    2014-01-14

    We tested Laplacian-level meta-generalized gradient approximation (meta-GGA) noninteracting kinetic energy functionals based on the fourth-order gradient expansion (GE4). We considered several well-known Laplacian-level meta-GGAs from the literature (bare GE4, modified GE4, and the MGGA functional of Perdew and Constantin (Phys. Rev. B 2007, 75, 155109)), as well as two newly designed Laplacian-level kinetic energy functionals (L0.4 and L0.6). First, a general assessment of the different functionals is performed to test them for model systems (one-electron densities, Hooke's atom, and different jellium systems) and atomic and molecular kinetic energies, as well as for their behavior with respect to density-scaling transformations. Finally, we assessed, for the first time, the performance of the different functionals for subsystem density functional theory (DFT) calculations on noncovalently interacting systems. We found that the different Laplacian-level meta-GGA kinetic functionals may improve the description of different properties of electronic systems, but no clear overall advantage is found over the best GGA functionals. Concerning the subsystem DFT calculations, the here-proposed L0.4 and L0.6 kinetic energy functionals are competitive with state-of-the-art GGAs, whereas all other Laplacian-level functionals fail badly. The performance of the Laplacian-level functionals is rationalized thanks to a two-dimensional reduced-gradient and reduced-Laplacian decomposition of the nonadditive kinetic energy density.

  5. Nonlinear amplitude approximation for bilinear systems

    NASA Astrophysics Data System (ADS)

    Jung, Chulwoo; D'Souza, Kiran; Epureanu, Bogdan I.

    2014-06-01

    An efficient method to predict vibration amplitudes at the resonant frequencies of dynamical systems with piecewise-linear nonlinearity is developed. This technique is referred to as bilinear amplitude approximation (BAA). BAA constructs a single vibration cycle at each resonant frequency to approximate the periodic steady-state response of the system. It is postulated that the steady-state response is piecewise linear and can be approximated by analyzing the response over two time intervals during which the system behaves linearly. Overall, the dynamics is nonlinear, but the system is in a distinct linear state during each of the two time intervals. Thus, the approximated vibration cycle is constructed using linear analyses. The equation of motion for analyzing the vibration of each state is projected along the overlapping space spanned by the linear mode shapes active in each of the states. This overlapping space is where the vibratory energy is transferred from one state to the other when the system switches states. The overlapping space can be obtained using singular value decomposition. The space where the energy is transferred is used together with transition conditions of displacement and velocity compatibility to construct a single vibration cycle and to compute the amplitude of the dynamics. Since the BAA method does not require numerical integration of nonlinear models, computational costs are very low. In this paper, the BAA method is first applied to a single-degree-of-freedom system. Then, a three-degree-of-freedom system is introduced to demonstrate a more general application of BAA. Finally, the BAA method is applied to a full bladed disk with a crack. Results comparing numerical solutions from full-order nonlinear analysis with results obtained using BAA are presented for all systems.
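
    The overlapping space at the heart of BAA can be sketched with standard linear algebra: orthonormalize the mode shapes of the two linear states and take the SVD of the product of the bases, keeping directions whose singular values (cosines of principal angles) are close to one. The code below is a minimal illustration with random mode shapes, not the authors' implementation.

      # Minimal sketch: shared modal subspace of two linear states via SVD.
      import numpy as np

      rng = np.random.default_rng(1)
      phi_open = rng.normal(size=(10, 3))     # mode shapes, state 1 (crack open)
      phi_closed = rng.normal(size=(10, 3))   # mode shapes, state 2 (crack closed)
      phi_closed[:, 0] = phi_open[:, 0]       # force one genuinely shared direction

      q1, _ = np.linalg.qr(phi_open)          # orthonormal bases of each modal space
      q2, _ = np.linalg.qr(phi_closed)
      u, s, vt = np.linalg.svd(q1.T @ q2)
      shared = q1 @ u[:, s > 0.99]            # overlapping space (cosines near 1)
      print(s.round(3), shared.shape)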

  6. Energy and macronutrient content of familiar beverages interact with pre-meal intervals to determine later food intake, appetite and glycemic response in young adults.

    PubMed

    Panahi, Shirin; Luhovyy, Bohdan L; Liu, Ting Ting; Akhavan, Tina; El Khoury, Dalia; Goff, H Douglas; Anderson, G Harvey

    2013-01-01

    The objective was to compare the effects of pre-meal consumption of familiar beverages on appetite, food intake, and glycemic response in healthy young adults. Two short-term experiments compared the effect of consumption at 30 min (experiment 1) or 120 min (experiment 2) before a pizza meal of isovolumetric amounts (500 mL) of water (0 kcal), soy beverage (200 kcal), 2% milk (260 kcal), 1% chocolate milk (340 kcal), orange juice (229 kcal) and cow's milk-based infant formula (368 kcal) on food intake, subjective appetite, and blood glucose before and after the meal. Pre-meal ingestion of chocolate milk and infant formula reduced food intake compared to water at 30 min; however, beverage type did not affect food intake at 2 h. Pre-meal blood glucose was higher after chocolate milk than after the other caloric beverages from 0 to 30 min (experiment 1), and after chocolate milk and orange juice from 0 to 120 min (experiment 2). Only milk reduced post-meal blood glucose in both experiments, suggesting that its effects were independent of meal-time energy intake. Combined pre- and post-meal blood glucose was lower after milk compared to chocolate milk and orange juice, but did not differ from the other beverages. Thus, beverage calorie content and inter-meal intervals are primary determinants of food intake in the short term, but macronutrient composition, especially protein content and composition, may play the greater role in glycemic control.

  7. Interaction between respiratory and RR interval oscillations at low frequencies.

    PubMed

    Aguirre, A; Wodicka, G R; Maayan, C; Shannon, D C

    1990-03-01

    Oscillations in RR interval between 0.02 and 1.00 cycles per second (Hz) have been related to the action of the autonomic nervous system. Respiration has been shown to influence RR interval at normal breathing frequencies between approximately 0.16 and 0.5 Hz in children and adults, a phenomenon known as respiratory sinus arrhythmia. In this study we investigated the effect of respiration on RR interval in a lower frequency range, between 0.02 and 0.12 Hz. Low-frequency oscillations in respiration were induced in healthy sleeping adult subjects via the administration of a bolus of CO2 during inhalation. Power spectra of RR interval and respiration were obtained before and after the CO2 pulse, and the frequency content in the low frequency range was quantitatively compared. An increase in the spectral energy in both respiration and RR interval was observed for the group. However, this increase was accounted for by six of 29 epochs. We conclude that respiration (tidal volume) can influence RR interval at frequencies below those usually associated with respiratory sinus arrhythmia. This influence may be mediated through a sympathetic reflex. This result is applicable to the measurement and interpretation of heart rate variability and to autonomic influences on low frequency fluctuations in RR interval.

  8. Practical approximation of the non-adiabatic coupling terms for same-symmetry interstate crossings by using adiabatic potential energies only

    NASA Astrophysics Data System (ADS)

    Baeck, Kyoung Koo; An, Heesun

    2017-02-01

    A very simple equation, F_ij^App = (1/2)[(∂²(V_i^a − V_j^a)/∂Q²)/(V_i^a − V_j^a)]^(1/2), giving a reliable magnitude of non-adiabatic coupling terms (NACTs, F_ij) based on adiabatic potential energies only (V_i^a and V_j^a), was discovered, and its reliability was tested for several prototypes of same-symmetry interstate crossings in LiF, C2, NH3Cl, and C6H5SH molecules. Our theoretical derivation starts from the analysis of the relationship between the Lorentzian dependence of NACTs along a diabatization coordinate and the well-established linear vibronic coupling scheme. This analysis results in a very simple equation, α = 2κ/Δc, enabling the evaluation of the Lorentz-function parameter α in terms of the coupling constant κ and the energy gap Δc = |V_i^a − V_j^a| between the adiabatic states at the crossing point Q_C. Subsequently, it was shown that Q_C corresponds to the point where F_ij^App exhibits its maximum value if we set the coupling parameter as κ = (1/2)[(V_i^a − V_j^a)·(∂²(V_i^a − V_j^a)/∂Q²)]^(1/2) evaluated at Q_C. Finally, we conjectured that this relation could give reasonable values of NACTs not only at the crossing point but also at other geometries near Q_C. In this final approximation, the pre-defined crossing point Q_C is not required. The results of our test demonstrate that the approximation works much better than initially expected. The present new method does not depend on the selection of an ab initio method for adiabatic electronic states but is currently limited to local non-adiabatic regions where only two electronic states are dominantly involved within a nuclear degree of freedom.
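
    The formula is straightforward to evaluate numerically from adiabatic energies alone. As a check, the sketch below applies a central finite difference to the gap of a two-state linear vibronic model H = [[q, c], [c, -q]] (an illustrative model, not one of the paper's molecules), for which the approximation reproduces the exact Lorentzian NACT c/(2(q² + c²)).

      # Estimate F_ij from adiabatic energies only, via finite differences.
      import numpy as np

      def gap(q, c=0.1):
          # Adiabatic gap of the 2x2 model H = [[q, c], [c, -q]].
          return 2.0 * np.sqrt(q**2 + c**2)

      def f_app(q, h=1e-4):
          d2 = (gap(q + h) - 2 * gap(q) + gap(q - h)) / h**2  # central difference
          return 0.5 * np.sqrt(d2 / gap(q))

      q = np.linspace(-1, 1, 5)
      exact = 0.1 / (2 * (q**2 + 0.1**2))   # Lorentzian NACT of this model
      print(np.allclose([f_app(x) for x in q], exact, rtol=1e-3))  # True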

  9. On the incorporation of the geometric phase in general single potential energy surface dynamics: A removable approximation to ab initio data.

    PubMed

    Malbon, Christopher L; Zhu, Xiaolei; Guo, Hua; Yarkony, David R

    2016-12-21

    For two electronic states coupled by conical intersections, the line integral of the derivative coupling can be used to construct a complex-valued multiplicative phase factor that makes the real-valued adiabatic electronic wave function single-valued, provided that the curl of the derivative coupling is zero. Unfortunately for ab initio determined wave functions, the curl is never rigorously zero. However, when the wave functions are determined from a coupled two diabatic state Hamiltonian H(d) (fit to ab initio data), the resulting derivative couplings are by construction curl free, except at points of conical intersection. In this work we focus on a recently introduced diabatization scheme that produces the H(d) by fitting ab initio determined energies, energy gradients, and derivative couplings to the corresponding H(d) determined quantities in a least squares sense, producing a removable approximation to the ab initio determined derivative coupling. This approach and related numerical issues associated with the nonremovable ab initio derivative couplings are illustrated using a full 33-dimensional representation of phenol photodissociation. The use of this approach to provide a general framework for treating the molecular Aharonov Bohm effect is demonstrated.

  10. On the incorporation of the geometric phase in general single potential energy surface dynamics: A removable approximation to ab initio data

    NASA Astrophysics Data System (ADS)

    Malbon, Christopher L.; Zhu, Xiaolei; Guo, Hua; Yarkony, David R.

    2016-12-01

    For two electronic states coupled by conical intersections, the line integral of the derivative coupling can be used to construct a complex-valued multiplicative phase factor that makes the real-valued adiabatic electronic wave function single-valued, provided that the curl of the derivative coupling is zero. Unfortunately for ab initio determined wave functions, the curl is never rigorously zero. However, when the wave functions are determined from a coupled two diabatic state Hamiltonian Hd (fit to ab initio data), the resulting derivative couplings are by construction curl free, except at points of conical intersection. In this work we focus on a recently introduced diabatization scheme that produces the Hd by fitting ab initio determined energies, energy gradients, and derivative couplings to the corresponding Hd determined quantities in a least squares sense, producing a removable approximation to the ab initio determined derivative coupling. This approach and related numerical issues associated with the nonremovable ab initio derivative couplings are illustrated using a full 33-dimensional representation of phenol photodissociation. The use of this approach to provide a general framework for treating the molecular Aharonov Bohm effect is demonstrated.

  11. Analytical approximations for X-ray cross sections 3

    NASA Astrophysics Data System (ADS)

    Biggs, Frank; Lighthill, Ruth

    1988-08-01

    This report updates our previous work that provided analytical approximations to cross sections for both photoelectric absorption of photons by atoms and incoherent scattering of photons by atoms. This representation is convenient for use in programmable calculators and in computer programs to evaluate these cross sections numerically. The results apply to atoms of atomic numbers between 1 and 100 and for photon energies greater than or equal to 10 eV. The photoelectric cross sections are again approximated by four-term polynomials in reciprocal powers of the photon energy. There are now more fitting intervals, however, than were used previously. The incoherent-scattering cross sections are based on the Klein-Nishina relation, but use simpler approximate equations for efficient computer evaluation. We describe the averaging scheme for applying these atomic results to any composite material. The fitting coefficients are included in tables, and the cross sections are shown graphically.
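
    The representation is simple to evaluate: pick the fitting interval containing the photon energy and sum four reciprocal powers. The sketch below assumes that structure; the interval edges and coefficients are placeholders, not values from the report's tables.

      # Four-term polynomial in reciprocal powers of photon energy,
      # with piecewise fitting intervals (placeholder numbers throughout).
      import bisect

      EDGES = [0.01, 1.0, 10.0, 100.0]        # keV, hypothetical interval boundaries
      COEFFS = [                              # one (a1..a4) set per interval
          (1.2e2, 3.4e1, 5.6e0, 7.8e-1),
          (2.3e2, 4.5e1, 6.7e0, 8.9e-1),
          (3.4e2, 5.6e1, 7.8e0, 9.1e-1),
      ]

      def photoelectric_xs(energy_kev):
          i = bisect.bisect_right(EDGES, energy_kev) - 1
          i = min(max(i, 0), len(COEFFS) - 1)
          a1, a2, a3, a4 = COEFFS[i]
          # sigma(E) = a1/E + a2/E^2 + a3/E^3 + a4/E^4 within the interval
          return sum(a / energy_kev**k for k, a in enumerate((a1, a2, a3, a4), start=1))

      print(photoelectric_xs(5.0))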

  12. Low-energy dipole excitations in neon isotopes and N=16 isotones within the quasiparticle random-phase approximation and the Gogny force

    SciTech Connect

    Martini, M.; Peru, S.; Dupuis, M.

    2011-03-15

    Low-energy dipole excitations in neon isotopes and N=16 isotones are calculated with a fully consistent axially-symmetric-deformed quasiparticle random phase approximation (QRPA) approach based on Hartree-Fock-Bogolyubov (HFB) states. The same Gogny D1S effective force has been used in both the HFB and QRPA calculations. The microscopic structure of these low-lying resonances, as well as the behavior of the proton and neutron transition densities, is investigated in order to determine the isoscalar or isovector nature of the excitations. It is found that the N=16 isotones 24O, 26Ne, 28Mg, and 30Si are characterized by a similar behavior. The occupation of the 2s1/2 neutron orbit turns out to be crucial, leading to nontrivial transition densities and to small but finite collectivity. Some low-lying dipole excitations of 28Ne and 30Ne, characterized by transitions involving the ν1d3/2 state, present a more collective behavior and isoscalar transition densities. A collective proton low-lying excitation is identified in the 18Ne nucleus.

  13. Generalized Vibrational Perturbation Theory for Rotovibrational Energies of Linear, Symmetric and Asymmetric Tops: Theory, Approximations, and Automated Approaches to Deal with Medium-to-Large Molecular Systems

    PubMed Central

    Piccardo, Matteo; Bloino, Julien; Barone, Vincenzo

    2015-01-01

    Models going beyond the rigid-rotor and the harmonic oscillator levels are mandatory for providing accurate theoretical predictions for several spectroscopic properties. Different strategies have been devised for this purpose. Among them, the treatment by perturbation theory of the molecular Hamiltonian after its expansion in power series of products of vibrational and rotational operators, also referred to as vibrational perturbation theory (VPT), is particularly appealing for its computational efficiency in treating medium-to-large systems. Moreover, generalized (GVPT) strategies combining the use of perturbative and variational formalisms can be adopted to further improve the accuracy of the results, with the first approach used for weakly coupled terms and the second one to handle tightly coupled ones. In this context, the GVPT formulation for asymmetric, symmetric, and linear tops is revisited and fully generalized to both minima and first-order saddle points of the molecular potential energy surface. The computational strategies and approximations that can be adopted in dealing with GVPT computations are pointed out, with particular attention devoted to the treatment of symmetry and degeneracies. A number of tests and applications are discussed, to show the possibilities of the developments, as regards both the variety of treatable systems and the eligible methods. PMID:26345131

  14. Laplacian-dependent models of the kinetic energy density: Applications in subsystem density functional theory with meta-generalized gradient approximation functionals.

    PubMed

    Śmiga, Szymon; Fabiano, Eduardo; Constantin, Lucian A; Della Sala, Fabio

    2017-02-14

    The development of semilocal models for the kinetic energy density (KED) is an important topic in density functional theory (DFT). This is especially true for subsystem DFT, where these models are necessary to construct the required non-additive embedding contributions. In particular, these models can also be efficiently employed to replace the exact KED in meta-Generalized Gradient Approximation (meta-GGA) exchange-correlation functionals, allowing the applicability of subsystem DFT to be extended to the meta-GGA level of theory. Here, we present a two-dimensional scan of semilocal KED models as linear functionals of the reduced gradient and of the reduced Laplacian, for atoms and weakly bound molecular systems. We find that several models can perform well, but in all cases the Laplacian contribution is extremely important for modeling the local features of the KED. Indeed, a simple model constructed as the sum of the Thomas-Fermi KED and 1/6 of the Laplacian of the density yields the best accuracy for atoms and weakly bound molecular systems. These KED models are tested within subsystem DFT with various meta-GGA exchange-correlation functionals for non-bonded systems, showing good accuracy of the method.

  15. Laplacian-dependent models of the kinetic energy density: Applications in subsystem density functional theory with meta-generalized gradient approximation functionals

    NASA Astrophysics Data System (ADS)

    Śmiga, Szymon; Fabiano, Eduardo; Constantin, Lucian A.; Della Sala, Fabio

    2017-02-01

    The development of semilocal models for the kinetic energy density (KED) is an important topic in density functional theory (DFT). This is especially true for subsystem DFT, where these models are necessary to construct the required non-additive embedding contributions. In particular, these models can also be efficiently employed to replace the exact KED in meta-Generalized Gradient Approximation (meta-GGA) exchange-correlation functionals, allowing the applicability of subsystem DFT to be extended to the meta-GGA level of theory. Here, we present a two-dimensional scan of semilocal KED models as linear functionals of the reduced gradient and of the reduced Laplacian, for atoms and weakly bound molecular systems. We find that several models can perform well, but in all cases the Laplacian contribution is extremely important for modeling the local features of the KED. Indeed, a simple model constructed as the sum of the Thomas-Fermi KED and 1/6 of the Laplacian of the density yields the best accuracy for atoms and weakly bound molecular systems. These KED models are tested within subsystem DFT with various meta-GGA exchange-correlation functionals for non-bonded systems, showing good accuracy of the method.
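
    The best-performing model quoted above is simple enough to state in a few lines: the Thomas-Fermi kinetic energy density plus one sixth of the density Laplacian. The sketch below evaluates it (in atomic units) for a hydrogenic 1s density, whose Laplacian is known in closed form; this is an illustration, not the subsystem-DFT code.

      # tau(r) ~ C_TF * n^(5/3) + lap(n)/6, evaluated pointwise (a.u.).
      import numpy as np

      C_TF = 0.3 * (3 * np.pi**2) ** (2 / 3)

      def ked_tf_lap(n, lap_n):
          return C_TF * n ** (5 / 3) + lap_n / 6.0

      # Hydrogenic 1s density n(r) = exp(-2r)/pi, with Laplacian
      # lap(n) = (4/pi) * exp(-2r) * (1 - 1/r).
      r = np.linspace(0.2, 4.0, 5)
      n = np.exp(-2 * r) / np.pi
      lap_n = 4 * np.exp(-2 * r) * (1 - 1 / r) / np.pi
      print(ked_tf_lap(n, lap_n))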

  16. Generalized Vibrational Perturbation Theory for Rotovibrational Energies of Linear, Symmetric and Asymmetric Tops: Theory, Approximations, and Automated Approaches to Deal with Medium-to-Large Molecular Systems.

    PubMed

    Piccardo, Matteo; Bloino, Julien; Barone, Vincenzo

    2015-08-05

    Models going beyond the rigid-rotor and the harmonic oscillator levels are mandatory for providing accurate theoretical predictions for several spectroscopic properties. Different strategies have been devised for this purpose. Among them, the treatment by perturbation theory of the molecular Hamiltonian after its expansion in power series of products of vibrational and rotational operators, also referred to as vibrational perturbation theory (VPT), is particularly appealing for its computational efficiency in treating medium-to-large systems. Moreover, generalized (GVPT) strategies combining the use of perturbative and variational formalisms can be adopted to further improve the accuracy of the results, with the first approach used for weakly coupled terms and the second one to handle tightly coupled ones. In this context, the GVPT formulation for asymmetric, symmetric, and linear tops is revisited and fully generalized to both minima and first-order saddle points of the molecular potential energy surface. The computational strategies and approximations that can be adopted in dealing with GVPT computations are pointed out, with particular attention devoted to the treatment of symmetry and degeneracies. A number of tests and applications are discussed, to show the possibilities of the developments, as regards both the variety of treatable systems and the eligible methods.

  17. A fast parallel code for calculating energies and oscillator strengths of many-electron atoms at neutron star magnetic field strengths in adiabatic approximation

    NASA Astrophysics Data System (ADS)

    Engel, D.; Klews, M.; Wunner, G.

    2009-02-01

    We have developed a new method for the fast computation of wavelengths and oscillator strengths for medium-Z atoms and ions, up to iron, at neutron star magnetic field strengths. The method is a parallelized Hartree-Fock approach in adiabatic approximation based on finite-element and B-spline techniques. It turns out that typically 15-20 finite elements are sufficient to calculate energies to within a relative accuracy of 10^-5 in 4 or 5 iteration steps using B-splines of 6th order, with parallelization speed-ups of 20 on a 26-processor machine. Results have been obtained for the energies of the ground states and excited levels and for the transition strengths of astrophysically relevant atoms and ions in the range Z=2…26 in different ionization stages.
    Catalogue identifier: AECC_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AECC_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 3845
    No. of bytes in distributed program, including test data, etc.: 27 989
    Distribution format: tar.gz
    Programming language: MPI/Fortran 95 and Python
    Computer: Cluster of 1-26 HP Compaq dc5750
    Operating system: Fedora 7
    Has the code been vectorised or parallelized?: Yes
    RAM: 1 GByte
    Classification: 2.1
    External routines: MPI/GFortran, LAPACK, PyLab/Matplotlib
    Nature of problem: Calculations of synthetic spectra [1] of strongly magnetized neutron stars are bedevilled by the lack of data for atoms in intense magnetic fields. While the behaviour of hydrogen and helium has been investigated in detail (see, e.g., [2]), complete and reliable data for heavier elements, in particular iron, are still missing. Since neutron stars are formed by the collapse of the iron cores of massive stars, it may be assumed that their atmospheres contain an iron plasma. Our objective is to fill the gap

  18. Biomathematics and Interval Analysis: A Prosperous Marriage

    NASA Astrophysics Data System (ADS)

    Markov, S. M.

    2010-11-01

    In this survey paper we focus our attention on dynamical bio-systems involving uncertainties and the use of interval methods for the modelling study of such systems. The kind of uncertain systems envisioned are those described by a dynamical model with parameters bounded in intervals. We point to a fruitful symbiosis between dynamical modelling in biology and computational methods of interval analysis. Both fields are presently in a stage of rapid development and can benefit from each other. We highlight recent studies in the field of interval arithmetic from a new perspective: the midpoint-radius arithmetic, which explores the properties of error bounds and approximate numbers. The midpoint-radius approach provides a bridge between interval methods and the "uncertain but bounded" approach used for model estimation and identification. We briefly discuss certain recently obtained algebraic properties of errors and approximate numbers.
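
    Midpoint-radius arithmetic itself is compact enough to sketch: an interval is stored as an approximate number m with an error bound r, and operations propagate both. The class below is a minimal illustration of the standard addition and product bounds, not a library implementation.

      # Midpoint-radius interval arithmetic: x = (m, r) represents [m-r, m+r].
      from dataclasses import dataclass

      @dataclass
      class MR:
          m: float   # midpoint (the approximate number)
          r: float   # radius   (the error bound)

          def __add__(self, other):
              return MR(self.m + other.m, self.r + other.r)

          def __mul__(self, other):
              # Standard product bound:
              # |xy - mx*my| <= |mx|*ry + |my|*rx + rx*ry
              return MR(self.m * other.m,
                        abs(self.m) * other.r + abs(other.m) * self.r
                        + self.r * other.r)

      x, y = MR(2.0, 0.1), MR(3.0, 0.2)
      print(x + y, x * y)   # MR(m=5.0, r=0.30...) MR(m=6.0, r=0.72)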

  19. Phenomenological applications of rational approximants

    NASA Astrophysics Data System (ADS)

    Gonzàlez-Solís, Sergi; Masjuan, Pere

    2016-08-01

    We illustrate the power of Padé approximants (PAs) as a summation method and explore one of their extensions, the so-called quadratic approximants (QAs), to access both the space-like and the (low-energy) time-like (TL) regions. As an introductory and pedagogical exercise, the function (1/z)ln(1 + z) is approximated by both kinds of approximants. Then, PAs are applied to predict pseudoscalar meson Dalitz decays and to extract Vub from semileptonic B → πℓνℓ decays. Finally, the π vector form factor in the TL region is explored using QAs.
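
    The pedagogical example is easy to reproduce: build a [2/2] Padé approximant of f(z) = (1/z)ln(1 + z) from its Taylor coefficients c_n = (−1)^n/(n + 1) and evaluate it beyond the series' radius of convergence. The sketch below solves the standard Padé linear system directly.

      # [2/2] Pade approximant of ln(1+z)/z from Taylor coefficients.
      import numpy as np

      L = M = 2
      c = np.array([(-1) ** n / (n + 1) for n in range(L + M + 1)])

      # Denominator: sum_j q_j c_{k-j} = 0 for k = L+1 .. L+M, with q_0 = 1.
      A = np.array([[c[k - j] for j in range(1, M + 1)]
                    for k in range(L + 1, L + M + 1)])
      q = np.concatenate(([1.0], np.linalg.solve(A, -c[L + 1:L + M + 1])))
      # Numerator from the low-order matching conditions.
      p = np.array([sum(q[j] * c[k - j] for j in range(min(k, M) + 1))
                    for k in range(L + 1)])

      def pade(z):
          return np.polyval(p[::-1], z) / np.polyval(q[::-1], z)

      for z in (0.5, 2.0, 5.0):   # the Taylor series itself diverges for z > 1
          print(z, pade(z), np.log(1 + z) / z)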

  20. Semiphenomenological approximation of the sums of experimental radiative strength functions for dipole gamma transitions of energy Eγ below the neutron binding energy Bn for mass numbers in the range 40 ≤ A ≤ 200

    NASA Astrophysics Data System (ADS)

    Sukhovoj, A. M.; Furman, W. I.; Khitrov, V. A.

    2008-06-01

    The sums of radiative strength functions for primary dipole gamma transitions, k(E1) + k(M1), are approximated to a high precision by a superposition of two functional dependences in the energy range 0.5 < E1 < Bn − 0.5 MeV for the 40K, 60Co, 71,74Ge, 80Br, 114Cd, 118Sn, 124,125Te, 128I, 137,138,139Ba, 140La, 150Sm, 156,158Gd, 160Tb, 163,164,165Dy, 166Ho, 168Er, 170Tm, 174Yb, 176,177Lu, 181Hf, 182Ta, 183,184,185,187W, 188,190,191,193Os, 192Ir, 196Pt, 198Au, and 200Hg nuclei. It is shown that, in any nucleus, the radiative strength functions are a dynamical quantity and that the values of k(E1) + k(M1) for specific energies of gamma transitions and specific nuclei are determined by the structure of the decaying and excited levels, at least up to the neutron binding energy Bn.

  1. Semiphenomenological approximation of the sums of experimental radiative strength functions for dipole gamma transitions of energy Eγ below the neutron binding energy Bn for mass numbers in the range 40 ≤ A ≤ 200

    SciTech Connect

    Sukhovoj, A. M. Furman, W. I. Khitrov, V. A.

    2008-06-15

    The sums of radiative strength functions for primary dipole gamma transitions, k(E1) + k(M1), are approximated to a high precision by a superposition of two functional dependences in the energy range 0.5 < E1 < Bn − 0.5 MeV for the 40K, 60Co, 71,74Ge, 80Br, 114Cd, 118Sn, 124,125Te, 128I, 137,138,139Ba, 140La, 150Sm, 156,158Gd, 160Tb, 163,164,165Dy, 166Ho, 168Er, 170Tm, 174Yb, 176,177Lu, 181Hf, 182Ta, 183,184,185,187W, 188,190,191,193Os, 192Ir, 196Pt, 198Au, and 200Hg nuclei. It is shown that, in any nucleus, the radiative strength functions are a dynamical quantity and that the values of k(E1) + k(M1) for specific energies of gamma transitions and specific nuclei are determined by the structure of the decaying and excited levels, at least up to the neutron binding energy Bn.

  2. Musical intervals in speech.

    PubMed

    Ross, Deborah; Choi, Jonathan; Purves, Dale

    2007-06-05

    Throughout history and across cultures, humans have created music using pitch intervals that divide octaves into the 12 tones of the chromatic scale. Why these specific intervals in music are preferred, however, is not known. In the present study, we analyzed a database of individually spoken English vowel phones to examine the hypothesis that musical intervals arise from the relationships of the formants in speech spectra that determine the perceptions of distinct vowels. Expressed as ratios, the frequency relationships of the first two formants in vowel phones represent all 12 intervals of the chromatic scale. Were the formants to fall outside the ranges found in the human voice, their relationships would generate either a less complete or a more dilute representation of these specific intervals. These results imply that human preference for the intervals of the chromatic scale arises from experience with the way speech formants modulate laryngeal harmonics to create different phonemes.

  3. Interpolation and Approximation Theory.

    ERIC Educational Resources Information Center

    Kaijser, Sten

    1991-01-01

    Introduced are the basic ideas of interpolation and approximation theory through a combination of theory and exercises written for extramural education at the university level. Topics treated are spline methods, Lagrange interpolation, trigonometric approximation, Fourier series, and polynomial approximation. (MDH)

  4. Programming with Intervals

    NASA Astrophysics Data System (ADS)

    Matsakis, Nicholas D.; Gross, Thomas R.

    Intervals are a new, higher-level primitive for parallel programming with which programmers directly construct the program schedule. Programs using intervals can be statically analyzed to ensure that they do not deadlock or contain data races. In this paper, we demonstrate the flexibility of intervals by showing how to use them to emulate common parallel control-flow constructs like barriers and signals, as well as higher-level patterns such as bounded-buffer producer-consumer. We have implemented intervals as a publicly available library for Java and Scala.

  5. Interval hypoxic training.

    PubMed

    Bernardi, L

    2001-01-01

    Interval hypoxic training (IHT) is a technique developed in the former Soviet Union that consists of repeated exposures to 5-7 minutes of steady or progressive hypoxia, interrupted by equal periods of recovery. It has been proposed for training in sports, for acclimatizing to high altitude, and for treating a variety of clinical conditions, ranging from coronary heart disease to Cesarean delivery. Some of these results may originate from the different effects of continuous vs. intermittent hypoxia (IH), which can be obtained by manipulating the repetition rate, the duration and the intensity of the hypoxic stimulus. The present article will attempt to examine some of the effects of IH and, whenever possible, compare them to those of typical IHT. IH can modify oxygen transport and energy utilization, alter respiratory and blood pressure control mechanisms, and induce permanent modifications in the cardiovascular system. IHT increases the hypoxic ventilatory response, increases red blood cell count and increases aerobic capacity. Some of these effects might be potentially beneficial in specific physiologic or pathologic conditions. At this stage, this technique appears interesting for its possible applications, but its mechanisms, potentials and limitations remain largely to be explored.

  6. Interval estimations in metrology

    NASA Astrophysics Data System (ADS)

    Mana, G.; Palmisano, C.

    2014-06-01

    This paper investigates interval estimation for a measurand that is known to be positive. Both the Neyman and Bayesian procedures are considered, and the difference between the two, not always perceived, is discussed in detail. A solution is proposed to a paradox arising from the frequentist assessment of the long-run success rate of Bayesian intervals.

  7. Analysis of regression confidence intervals and Bayesian credible intervals for uncertainty quantification

    NASA Astrophysics Data System (ADS)

    Lu, Dan; Ye, Ming; Hill, Mary C.

    2012-09-01

    Confidence intervals based on classical regression theories augmented to include prior information and credible intervals based on Bayesian theories are conceptually different ways to quantify parametric and predictive uncertainties. Because both confidence and credible intervals are used in environmental modeling, we seek to understand their differences and similarities. This is of interest in part because calculating confidence intervals typically requires tens to thousands of model runs, while Bayesian credible intervals typically require tens of thousands to millions of model runs. Given multi-Gaussian distributed observation errors, our theoretical analysis shows that, for linear or linearized-nonlinear models, confidence and credible intervals are always numerically identical when consistent prior information is used. For nonlinear models, nonlinear confidence and credible intervals can be numerically identical if parameter confidence regions defined using the approximate likelihood method and parameter credible regions estimated using Markov chain Monte Carlo realizations are numerically identical and predictions are a smooth, monotonic function of the parameters. Both occur if intrinsic model nonlinearity is small. While the conditions of Gaussian errors and small intrinsic model nonlinearity are violated by many environmental models, heuristic tests using analytical and numerical models suggest that linear and nonlinear confidence intervals can be useful approximations of uncertainty even under significantly nonideal conditions. In the context of epistemic model error for a complex synthetic nonlinear groundwater problem, the linear and nonlinear confidence and credible intervals for individual models performed similarly enough to indicate that the computationally frugal confidence intervals can be useful in many circumstances. Experiences with these groundwater models are expected to be broadly applicable to many environmental models. We suggest that for
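
    The linear-model statement can be verified with a toy example: for a Gaussian mean with known sigma and a flat (noninformative) prior, the classical confidence interval and the central Bayesian credible interval coincide exactly. The numbers below are synthetic.

      # Classical vs. Bayesian interval for a Gaussian mean with known sigma.
      import numpy as np

      rng = np.random.default_rng(42)
      sigma, n = 1.0, 25
      y = rng.normal(3.0, sigma, size=n)

      half = 1.96 * sigma / np.sqrt(n)
      ci = (y.mean() - half, y.mean() + half)      # classical (Wald) interval

      # Posterior with flat prior: theta | y ~ N(ybar, sigma^2 / n), so the
      # central 95% credible interval is numerically identical.
      credible = (y.mean() - 1.96 * sigma / np.sqrt(n),
                  y.mean() + 1.96 * sigma / np.sqrt(n))
      print(np.allclose(ci, credible))             # True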

  8. Direct interval volume visualization.

    PubMed

    Ament, Marco; Weiskopf, Daniel; Carr, Hamish

    2010-01-01

    We extend direct volume rendering with a unified model for generalized isosurfaces, also called interval volumes, allowing a wider spectrum of visual classification. We generalize the concept of scale-invariant opacity—typical for isosurface rendering—to semi-transparent interval volumes. Scale-invariant rendering is independent of physical space dimensions and therefore directly facilitates the analysis of data characteristics. Our model represents sharp isosurfaces as limits of interval volumes and combines them with features of direct volume rendering. Our objective is accurate rendering, guaranteeing that all isosurfaces and interval volumes are visualized in a crack-free way with correct spatial ordering. We achieve simultaneous direct and interval volume rendering by extending preintegration and explicit peak finding with data-driven splitting of ray integration and hybrid computation in physical and data domains. Our algorithm is suitable for efficient parallel processing for interactive applications as demonstrated by our CUDA implementation.

  9. Kramer-Pesch approximation for analyzing field-angle-resolved measurements made in unconventional superconductors: a calculation of the zero-energy density of states.

    PubMed

    Nagai, Yuki; Hayashi, Nobuhiko

    2008-08-29

    By measuring the angular-oscillations behavior of the heat capacity with respect to the applied field direction, one can detect the details of the gap structure. We introduce the Kramer-Pesch approximation as a new method to analyze the field-angle-dependent experiments, which improves the previous Doppler-shift technique. We show that the Fermi-surface anisotropy is an indispensable factor for identifying the superconducting gap symmetry.

  10. Kramer-Pesch Approximation for Analyzing Field-Angle-Resolved Measurements Made in Unconventional Superconductors: A Calculation of the Zero-Energy Density of States

    NASA Astrophysics Data System (ADS)

    Nagai, Yuki; Hayashi, Nobuhiko

    2008-08-01

    By measuring the angular-oscillations behavior of the heat capacity with respect to the applied field direction, one can detect the details of the gap structure. We introduce the Kramer-Pesch approximation as a new method to analyze the field-angle-dependent experiments, which improves the previous Doppler-shift technique. We show that the Fermi-surface anisotropy is an indispensable factor for identifying the superconducting gap symmetry.

  11. Application of the Mean Spherical Approximation to Describe the Gibbs Solvation Energies of Monovalent Monoatomic Ions in Non-Aqueous Solvents

    DTIC Science & Technology

    1991-11-09

    …through hydrogen bonding. A plot of ΔG°(theor) against ΔG°(ex) is shown for the Cl− ion data in Figure 4, comparing the theoretical estimates of the Gibbs solvation energies with experiment. Prepared for publication in the Journal of Physical Chemistry by L. Blum and W. R. Fawcett (Department of Physics). The feature that the excess ionic properties depend on a single scaling, Debye-like parameter is still retained by this approximation. The equations for the most…

  12. Finding the Best Quadratic Approximation of a Function

    ERIC Educational Resources Information Center

    Yang, Yajun; Gordon, Sheldon P.

    2011-01-01

    This article examines the question of finding the best quadratic function to approximate a given function on an interval. The prototypical function considered is f(x) = e^x. Two approaches are considered, one based on Taylor polynomial approximations at various points in the interval under consideration, the other based on the fact…
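
    A quick numerical version of the comparison follows, with f(x) = e^x on [0, 1] as in the article; using a discrete least-squares fit as the second approach is an assumption, since the record's text is truncated.

      # Taylor quadratic at the midpoint vs. least-squares quadratic for e^x.
      import numpy as np

      x = np.linspace(0.0, 1.0, 1001)
      f = np.exp(x)

      a = 0.5                                   # expand Taylor at the midpoint
      taylor = np.exp(a) * (1 + (x - a) + (x - a) ** 2 / 2)

      coeffs = np.polyfit(x, f, 2)              # discrete least-squares quadratic
      lsq = np.polyval(coeffs, x)

      print("Taylor max error:", np.abs(f - taylor).max())
      print("Least-squares max error:", np.abs(f - lsq).max())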

  13. Dependence of the specific energy of the β/α interface in the VT6 titanium alloy on the heating temperature in the interval 600-975°C

    NASA Astrophysics Data System (ADS)

    Murzinova, M. A.; Zherebtsov, S. V.; Salishchev, G. A.

    2016-04-01

    The specific energy of interphase boundaries is an important characteristic of multiphase alloys, because it determines in many respects their microstructural stability and properties during processing and service. We analyze the variation of the specific energy of the β/α interface in the VT6 titanium alloy at temperatures from 600 to 975°C. The analysis is based on the model of a ledged interphase boundary and the method for computing its energy developed by van der Merwe and Shiflet [33, 34]. The calculations use the available measurements of the lattice parameters of the phases in the indicated temperature interval and their chemical composition. In addition, we take into account the experimental data and the results of simulations of the effect of temperature and phase composition on the elastic moduli of the α and β phases in titanium alloys. It is shown that when the temperature decreases from 975 to 600°C, the specific energy of the β/α interface increases from 0.15 to 0.24 J/m². The main contribution to the interfacial energy (about 85%) comes from edge dislocations accommodating the misfit in the direction [0001]α || [110]β. The energy associated with the accommodation of the misfit in the directions [−2110]α || [1−11]β and [0−110]α || [−112]β due to the formation of "ledges" and tilt misfit dislocations is low and increases slightly upon cooling.

  14. Approximating random quantum optimization problems

    NASA Astrophysics Data System (ADS)

    Hsu, B.; Laumann, C. R.; Läuchli, A. M.; Moessner, R.; Sondhi, S. L.

    2013-06-01

    We report a cluster of results regarding the difficulty of finding approximate ground states to typical instances of the quantum satisfiability problem k-body quantum satisfiability (k-QSAT) on large random graphs. As an approximation strategy, we optimize the solution space over “classical” product states, which in turn introduces a novel autonomous classical optimization problem, PSAT, over a space of continuous degrees of freedom rather than discrete bits. Our central results are (i) the derivation of a set of bounds and approximations in various limits of the problem, several of which we believe may be amenable to a rigorous treatment; (ii) a demonstration that an approximation based on a greedy algorithm borrowed from the study of frustrated magnetism performs well over a wide range in parameter space, and its performance reflects the structure of the solution space of random k-QSAT. Simulated annealing exhibits metastability in similar “hard” regions of parameter space; and (iii) a generalization of belief propagation algorithms introduced for classical problems to the case of continuous spins. This yields both approximate solutions, as well as insights into the free energy “landscape” of the approximation problem, including a so-called dynamical transition near the satisfiability threshold. Taken together, these results allow us to elucidate the phase diagram of random k-QSAT in a two-dimensional energy-density-clause-density space.

  15. Approximate flavor symmetries

    SciTech Connect

    Rasin, A.

    1994-04-01

    We discuss the idea of approximate flavor symmetries. Relations between approximate flavor symmetries and natural flavor conservation and democracy models are explored. Implications for neutrino physics are also discussed.

  16. Analysis of experimental data on doublet neutron-deuteron scattering at energies below the deuteron-breakup threshold on the basis of the pole approximation of the effective-range function

    SciTech Connect

    Babenko, V. A.; Petrov, N. M.

    2008-01-15

    On the basis of the Bargmann representation of the S matrix, the pole approximation is obtained for the effective-range function k cot δ. This approximation is optimal for describing the neutron-deuteron system in the doublet spin state. The values of r0 = 412.469 fm and v2 = −35 495.62 fm³ for the doublet low-energy parameters of neutron-deuteron scattering and the value of D = 172.678 fm² for the respective pole parameter are deduced by using experimental results for the triton binding energy ET, the doublet neutron-deuteron scattering length a2, and van Oers-Seagrave phase shifts at energies below the deuteron-breakup threshold. With these parameters, the pole approximation of the effective-range function provides a highly precise description (the relative error does not exceed 1%) of the doublet phase shift for neutron-deuteron scattering at energies below the deuteron-breakup threshold. Physical properties of the triton in the ground (T) and virtual (v) states are calculated. The results are Bv = 0.608 MeV for the virtual-level position and CT² = 2.866 and Cv² = 0.0586 for the dimensionless asymptotic normalization constants. It is shown that, in the Whiting-Fuda approximation, the values of physical quantities characterizing the triton virtual state are determined to a high precision by one parameter, the doublet neutron-deuteron scattering length a2. The effective triton radii in the ground (ρT = 1.711 fm) and virtual (ρv = 74.184 fm) states are calculated for the first time.

  17. Approximation of Laws

    NASA Astrophysics Data System (ADS)

    Niiniluoto, Ilkka

    2014-03-01

    Approximation of laws is an important theme in the philosophy of science. If we can make sense of the idea that two scientific laws are "close" to each other, then we can also analyze such methodological notions as approximate explanation of laws, approximate reduction of theories, approximate empirical success of theories, and approximate truth of laws. Proposals for measuring the distance between quantitative scientific laws were given in Niiniluoto (1982, 1987). In this paper, these definitions are reconsidered as a response to the interesting critical remarks by Liu (1999).

  18. Exclusive experiment on nuclei with backward emitted particles by electron-nucleus collision in the ~10 GeV energy range

    SciTech Connect

    Saito, T.; Takagi, F.

    1994-04-01

    Since the evidence of a strong cross section in proton-nucleus backward scattering was presented in the early 1970s, this phenomenon has attracted interest because of its possible relation to the short-range correlation between nucleons and to high-momentum components of the nuclear wave function. In the analysis of the first experiment on protons from a carbon target under bombardment by 1.5-5.7 GeV protons, indications were found of an effect analogous to scaling in high-energy interactions of elementary particles with protons. Moreover, it was found that the function f(p²)/σtot, which describes the spectra of the protons and deuterons emitted backward from nuclei in the laboratory system, does not depend on the energy and the type of the incident particle or on the atomic number of the target nucleus. In subsequent experiments, the spectra of the protons emitted from the nuclei C, Al, Ti, Cu, Cd and Pb were measured in inclusive reactions with incident negative pions (1.55-6.2 GeV/c) and protons (6.2-9.0 GeV/c). The cross section f is described by f = E/p² d²σ/dp dΩ = C exp(−Bp²), where p is the momentum of the hadron. The function f depends linearly on the atomic weight A of the target nuclei. The slope parameter B is independent of the target nucleus and of the sort and energy of the bombarding particles. The invariant cross section ρ = f/σtot is also described by the exponential A0 exp(−A1p²), where ρ becomes independent of energy at initial particle energies ≥ 1.5 GeV for the C nucleus and ≥ 5 GeV for the heaviest of the investigated nuclei, Pb.

  19. Analytic Energy Gradients and Spin Multiplicities for Orbital-Optimized Second-Order Perturbation Theory with Density-Fitting Approximation: An Efficient Implementation.

    PubMed

    Bozkaya, Uğur

    2014-10-14

    An efficient implementation of analytic energy gradients and spin multiplicities for the density-fitted orbital-optimized second-order perturbation theory (DF-OMP2) [Bozkaya, U. J. Chem. Theory Comput. 2014, 10, 2371-2378] is presented. The DF-OMP2 method is applied to a set of alkanes, conjugated dienes, and noncovalent interaction complexes to compare the cost of single point analytic gradient computations with the orbital-optimized MP2 with the resolution of the identity approach (OO-RI-MP2) [Neese, F.; Schwabe, T.; Kossmann, S.; Schirmer, B.; Grimme, S. J. Chem. Theory Comput. 2009, 5, 3060-3073]. Our results demonstrate that the DF-OMP2 method provides substantially lower computational costs for analytic gradients than OO-RI-MP2. On average, the cost of DF-OMP2 analytic gradients is 9-11 times lower than that of OO-RI-MP2 for the systems considered. We also consider aromatic bond dissociation energies, for which MP2 provides poor reaction energies. The DF-OMP2 method exhibits a substantially better performance than MP2, providing a mean absolute error of 2.5 kcal mol⁻¹, which is more than 9 times lower than that of MP2 (22.6 kcal mol⁻¹). Overall, the DF-OMP2 method appears very helpful for electronically challenging chemical systems such as free radicals or other cases where standard MP2 proves unreliable. For such problematic systems, we recommend using DF-OMP2 instead of canonical MP2 as a more robust method with the same computational scaling.

  20. The calculation of ionization energies by perturbation, configuration interaction and approximate coupled pair techniques and comparisons with Green's function methods for Ne, H2O and N2

    NASA Astrophysics Data System (ADS)

    Bacskay, George B.

    1980-05-01

    The vertical valence ionization potentials of Ne, H2O and N2 have been calculated by Rayleigh-Schrödinger perturbation and configuration interaction methods. The calculations were carried out in the space of a single determinant reference state and its single and double excitations, using both the N and N - 1 electron Hartree-Fock orbitals as hole/particle bases. The perturbation series for the ion state were generally found to converge fairly slowly in the N electron Hartree-Fock (frozen) orbital basis, but considerably faster in the appropriate N - 1 electron RHF (relaxed) orbital basis. In certain cases, however, due to near-degeneracy effects, partial, and even complete, breakdown of the (non-degenerate) perturbation treatment was observed. The effects of higher excitations on the ionization potentials were estimated by the approximate coupled pair techniques CPA' and CPA″ as well as by a Davidson type correction formula. The final, fully converged CPA″ results are generally in good agreement with those from PNO-CEPA and Green's function calculations as well as experiment.

  1. Simple physics-based analytical formulas for the potentials of mean force for the interaction of amino acid side chains in water. 1. Approximate expression for the free energy of hydrophobic association based on a Gaussian-overlap model.

    PubMed

    Makowski, Mariusz; Liwo, Adam; Scheraga, Harold A

    2007-03-22

    A physics-based model is proposed to derive approximate analytical expressions for the cavity component of the free energy of hydrophobic association of spherical and spheroidal solutes in water. The model is based on the difference between the number and context of the water molecules in the hydration sphere of a hydrophobic dimer and of two isolated hydrophobic solutes. It is assumed that the water molecules touching the convex part of the molecular surface of the dimer and those in the hydration spheres of the monomers contribute equally to the free energy of solvation, and those touching the saddle part of the molecular surface of the dimer result in a more pronounced increase in free energy because of their more restricted mobility (entropy loss) and fewer favorable electrostatic interactions with other water molecules. The density of water in the hydration sphere around a single solute particle is approximated by the derivative of a Gaussian centered on the solute molecule with respect to its standard deviation. On the basis of this approximation, the number of water molecules in different parts of the hydration sphere of the dimer is expressed in terms of the first and the second mixed derivatives of the two Gaussians centered on the first and second solute molecules, respectively, with respect to the standard deviations of these Gaussians, and plausible analytical expressions for the cavity component of the hydrophobic-association energy of spherical and spheroidal solutes are introduced. As opposed to earlier hydration-shell models, our expressions reproduce the desolvation maxima in the potentials of mean force of pairs of nonpolar solutes in water, and their advantage over the models based on molecular-surface area is that they have continuous gradients in the coordinates of solute centers.
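
    The central device of the model, approximating the hydration-shell water density by the derivative of a Gaussian with respect to its standard deviation, is easy to inspect numerically. A minimal sketch (the width sigma and the radial grid are illustrative assumptions, not the paper's parameters):

```python
import numpy as np

def shell_density(r, sigma):
    """d/dsigma of exp(-r**2 / (2 sigma**2)) = (r**2 / sigma**3) * gaussian;
    as a function of r this vanishes at the origin and peaks at
    r = sqrt(2) * sigma, mimicking a hydration shell around the solute."""
    return (r**2 / sigma**3) * np.exp(-r**2 / (2.0 * sigma**2))

r = np.linspace(0.0, 10.0, 1001)
rho = shell_density(r, sigma=3.0)
print(r[np.argmax(rho)])   # ~4.24 = sqrt(2) * 3.0
```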

  2. New approach to identify negative and positive pions with a scintillator range telescope in the 15-90 MeV pion energy interval

    SciTech Connect

    Julien, J.; Bellini, V.; Bolore, M.; Charlot, X.; Girard, J.; Pappalardo, G.S.; Poitou, J.; Roussel, L.

    1984-02-01

    A scintillator range telescope was designed to detect pions in a very intense background of charged particles (ca 5000 ps) and to identify the pion charge in the 15-90 MeV range. Such a telescope has a solid angle of 20 msr and allows the simultaneous detection of a wide pion momentum range, on the order of 70 MeV/c to 200 MeV/c, for both positive and negative pions. Several angles can be studied simultaneously with three telescopes. The pion energy resolution of ca 3 MeV is poorer, however, than the corresponding 0.5 MeV of a magnetic spectrometer. The accuracy of the R ratio depends on the accuracy of the pion-plus identification method. This identification is based on the detection of particles generated by the pion-to-muon-to-electron decay sequence, with a mean life of 26 ns. One method relies on the fast recovery time of the associated electronics by using an appropriate delayed coincidence between pion-plus and muon-plus signals. The low efficiency of such a method does not permit the determination of the pion-minus contribution. In order to improve the charge identification of pions, the authors use a new approach in their experiments, based on the measurement of the charge of the particle pulses within different time gates. This paper presents the principles of this approach. Three gates--a prompt, a normal, and a delayed gate--and their respective charge analyzers are used in the discussion.
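
    The gate-charge principle can be illustrated with a toy simulation: a stopping pi+ produces a delayed mu+ pulse with a 26 ns mean life, while a stopping pi- is captured by a nucleus and produces no such pulse, so the charge integrated in a delayed gate separates the two statistically. A sketch with invented gate windows, pulse charges, and noise (none of these values come from the paper):

```python
import random

PI_MEAN_LIFE_NS = 26.0          # pi+ -> mu+ mean life
DELAYED_GATE = (20.0, 100.0)    # illustrative delayed-gate window, ns

def delayed_gate_charge(is_pi_plus: bool) -> float:
    """Charge collected in the delayed gate for one stopping pion."""
    q = 0.0
    if is_pi_plus:  # pi- undergo nuclear capture: no delayed mu+ pulse
        t_decay = random.expovariate(1.0 / PI_MEAN_LIFE_NS)
        if DELAYED_GATE[0] <= t_decay <= DELAYED_GATE[1]:
            q += 1.0              # nominal muon pulse charge (arbitrary units)
    q += random.gauss(0.0, 0.05)  # baseline noise
    return q

random.seed(1)
pi_plus = [delayed_gate_charge(True) for _ in range(10000)]
frac = sum(q > 0.5 for q in pi_plus) / len(pi_plus)
print(f"pi+ tagged by delayed gate: {frac:.2f}")  # ~exp(-20/26)-exp(-100/26) = 0.44
```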

  3. Approximate spatial reasoning

    NASA Technical Reports Server (NTRS)

    Dutta, Soumitra

    1988-01-01

    A model for approximate spatial reasoning using fuzzy logic to represent the uncertainty in the environment is presented. Algorithms are developed which can be used to reason about spatial information expressed in the form of approximate linguistic descriptions similar to the kind of spatial information processed by humans. Particular attention is given to static spatial reasoning.

  4. High resolution time interval counter

    DOEpatents

    Condreva, K.J.

    1994-07-26

    A high resolution counter circuit measures the time interval between the occurrence of an initial and a subsequent electrical pulse to two nanoseconds resolution using an eight megahertz clock. The circuit includes a main counter for receiving electrical pulses and generating a binary word--a measure of the number of eight megahertz clock pulses occurring between the signals. A pair of first and second pulse stretchers receive the signal and generate a pair of output signals whose widths are approximately sixty-four times the time between the receipt of the signals by the respective pulse stretchers and the receipt by the respective pulse stretchers of a second subsequent clock pulse. Output signals are thereafter supplied to a pair of start and stop counters operable to generate a pair of binary output words representative of the measure of the width of the pulses to a resolution of two nanoseconds. Errors associated with the pulse stretchers are corrected by providing calibration data to both stretcher circuits, and recording start and stop counter values. Stretched initial and subsequent signals are combined with autocalibration data and supplied to an arithmetic logic unit to determine the time interval in nanoseconds between the pair of electrical pulses being measured. 3 figs.
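
    The patent's arithmetic, a coarse count of 8 MHz clock periods refined by residuals recovered from the x64 pulse stretchers, can be sketched as follows. This is an illustrative reading of the text with invented counter values, and it omits the autocalibration correction the circuit applies:

```python
CLOCK_PERIOD_NS = 125.0   # 8 MHz clock
STRETCH_FACTOR = 64.0     # pulse stretchers widen residual times ~64x

def interval_ns(main_count, start_count, stop_count):
    """Combine the coarse count and the stretched residuals.

    start_count/stop_count tally 8 MHz clock pulses over the stretched
    start/stop residuals, so residual = count * 125 ns / 64, giving
    roughly 2 ns resolution (125/64 = 1.95 ns).
    """
    start_residual = start_count * CLOCK_PERIOD_NS / STRETCH_FACTOR
    stop_residual = stop_count * CLOCK_PERIOD_NS / STRETCH_FACTOR
    return main_count * CLOCK_PERIOD_NS + start_residual - stop_residual

# Example: 10 coarse periods, start pulse 30 ns before its clock edge,
# stop pulse 10 ns before its clock edge (counts rounded to integers).
print(interval_ns(10, round(30 / 125 * 64), round(10 / 125 * 64)))
```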

  5. High resolution time interval counter

    DOEpatents

    Condreva, Kenneth J.

    1994-01-01

    A high resolution counter circuit measures the time interval between the occurrence of an initial and a subsequent electrical pulse to two nanoseconds resolution using an eight megahertz clock. The circuit includes a main counter for receiving electrical pulses and generating a binary word--a measure of the number of eight megahertz clock pulses occurring between the signals. A pair of first and second pulse stretchers receive the signal and generate a pair of output signals whose widths are approximately sixty-four times the time between the receipt of the signals by the respective pulse stretchers and the receipt by the respective pulse stretchers of a second subsequent clock pulse. Output signals are thereafter supplied to a pair of start and stop counters operable to generate a pair of binary output words representative of the measure of the width of the pulses to a resolution of two nanoseconds. Errors associated with the pulse stretchers are corrected by providing calibration data to both stretcher circuits, and recording start and stop counter values. Stretched initial and subsequent signals are combined with autocalibration data and supplied to an arithmetic logic unit to determine the time interval in nanoseconds between the pair of electrical pulses being measured.

  6. Green Ampt approximations

    NASA Astrophysics Data System (ADS)

    Barry, D. A.; Parlange, J.-Y.; Li, L.; Jeng, D.-S.; Crapper, M.

    2005-10-01

    The solution to the Green and Ampt infiltration equation is expressible in terms of the Lambert W₋₁ function. Approximations for Green and Ampt infiltration are thus derivable from approximations for the W₋₁ function and vice versa. An infinite family of asymptotic expansions to W₋₁ is presented. Although these expansions do not converge near the branch point of the W function (corresponding to Green-Ampt infiltration with immediate ponding), a method is presented for approximating W₋₁ that is exact at the branch point and asymptotically, with interpolation between these limits. Some existing and several new simple and compact yet robust approximations applicable to Green-Ampt infiltration and flux are presented, the most accurate of which has a maximum relative error of 5 × 10⁻⁵%. This error is orders of magnitude lower than that of any existing analytical approximation.
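
    In the standard dimensionless form of the Green-Ampt equation with immediate ponding, t* = F* - ln(1 + F*), the exact solution reads F*(t*) = -1 - W_{-1}(-exp(-1 - t*)). A short sketch using SciPy's Lambert W implementation (variable names are mine; the nondimensionalization is the conventional one):

```python
import numpy as np
from scipy.special import lambertw

def green_ampt_F(t):
    """Dimensionless cumulative infiltration F*(t*) solving
    t* = F* - ln(1 + F*), via the k = -1 branch of Lambert W."""
    F = -1.0 - lambertw(-np.exp(-1.0 - t), k=-1)
    return F.real  # the branch value is real for t* >= 0

t = np.array([0.0, 0.1, 1.0, 10.0])
F = green_ampt_F(t)
print(F)                                  # F*(0) = 0
print(np.allclose(t, F - np.log1p(F)))    # residual check: True
```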

  7. Valence excitation energies of alkenes, carbonyl compounds, and azabenzenes by time-dependent density functional theory: linear response of the ground state compared to collinear and noncollinear spin-flip TDDFT with the Tamm-Dancoff approximation.

    PubMed

    Isegawa, Miho; Truhlar, Donald G

    2013-04-07

    Time-dependent density functional theory (TDDFT) holds great promise for studying photochemistry because of its affordable cost for large systems and for repeated calculations as required for direct dynamics. The chief obstacle is uncertain accuracy. There have been many validation studies, but there are also many formulations, and there have been few studies where several formulations were applied systematically to the same problems. Another issue, when TDDFT is applied with only a single exchange-correlation functional, is that errors in the functional may mask successes or failures of the formulation. Here, to try to sort out some of the issues, we apply eight formulations of adiabatic TDDFT to the first valence excitations of ten molecules with 18 density functionals of diverse types. The formulations examined are linear response from the ground state (LR-TDDFT), linear response from the ground state with the Tamm-Dancoff approximation (TDDFT-TDA), the original collinear spin-flip approximation with the Tamm-Dancoff (TD) approximation (SF1-TDDFT-TDA), the original noncollinear spin-flip approximation with the TDA approximation (SF1-NC-TDDFT-TDA), combined self-consistent-field (SCF) and collinear spin-flip calculations in the original spin-projected form (SF2-TDDFT-TDA) or non-spin-projected (NSF2-TDDFT-TDA), and combined SCF and noncollinear spin-flip calculations (SF2-NC-TDDFT-TDA and NSF2-NC-TDDFT-TDA). Comparing LR-TDDFT to TDDFT-TDA, we observed that the excitation energy is raised by the TDA; this brings the excitation energies underestimated by full linear response closer to experiment, but sometimes it makes the results worse. For ethylene and butadiene, the excitation energies are underestimated by LR-TDDFT, and the error becomes smaller upon making the TDA. Neither SF1-TDDFT-TDA nor SF2-TDDFT-TDA provides a lower mean unsigned error than LR-TDDFT or TDDFT-TDA. The comparison between collinear and noncollinear kernels shows that the noncollinear kernel

  8. Valence excitation energies of alkenes, carbonyl compounds, and azabenzenes by time-dependent density functional theory: Linear response of the ground state compared to collinear and noncollinear spin-flip TDDFT with the Tamm-Dancoff approximation

    NASA Astrophysics Data System (ADS)

    Isegawa, Miho; Truhlar, Donald G.

    2013-04-01

    Time-dependent density functional theory (TDDFT) holds great promise for studying photochemistry because of its affordable cost for large systems and for repeated calculations as required for direct dynamics. The chief obstacle is uncertain accuracy. There have been many validation studies, but there are also many formulations, and there have been few studies where several formulations were applied systematically to the same problems. Another issue, when TDDFT is applied with only a single exchange-correlation functional, is that errors in the functional may mask successes or failures of the formulation. Here, to try to sort out some of the issues, we apply eight formulations of adiabatic TDDFT to the first valence excitations of ten molecules with 18 density functionals of diverse types. The formulations examined are linear response from the ground state (LR-TDDFT), linear response from the ground state with the Tamm-Dancoff approximation (TDDFT-TDA), the original collinear spin-flip approximation with the Tamm-Dancoff (TD) approximation (SF1-TDDFT-TDA), the original noncollinear spin-flip approximation with the TDA approximation (SF1-NC-TDDFT-TDA), combined self-consistent-field (SCF) and collinear spin-flip calculations in the original spin-projected form (SF2-TDDFT-TDA) or non-spin-projected (NSF2-TDDFT-TDA), and combined SCF and noncollinear spin-flip calculations (SF2-NC-TDDFT-TDA and NSF2-NC-TDDFT-TDA). Comparing LR-TDDFT to TDDFT-TDA, we observed that the excitation energy is raised by the TDA; this brings the excitation energies underestimated by full linear response closer to experiment, but sometimes it makes the results worse. For ethylene and butadiene, the excitation energies are underestimated by LR-TDDFT, and the error becomes smaller upon making the TDA. Neither SF1-TDDFT-TDA nor SF2-TDDFT-TDA provides a lower mean unsigned error than LR-TDDFT or TDDFT-TDA. The comparison between collinear and noncollinear kernels shows that the noncollinear kernel

  9. Intrinsic Nilpotent Approximation.

    DTIC Science & Technology

    1985-06-01

    Technical report LIDS-R-1482, MIT Laboratory for Information and Decision Systems. Only OCR fragments of the abstract survive; the recoverable content concerns the approximation of certain infinite-dimensional filtered Lie algebras L by (finite-dimensional) graded nilpotent Lie algebras g, with x ∈ M and (x, ξ) ∈ T*M∖0.

  10. Anomalous diffraction approximation limits

    NASA Astrophysics Data System (ADS)

    Videen, Gorden; Chýlek, Petr

    It has been reported in a recent article [Liu, C., Jonas, P.R., Saunders, C.P.R., 1996. Accuracy of the anomalous diffraction approximation to light scattering by column-like ice crystals. Atmos. Res., 41, pp. 63-69] that the anomalous diffraction approximation (ADA) accuracy does not depend on particle refractive index, but instead is dependent on the particle size parameter. Since this is at odds with previous research, we thought these results warranted further discussion.

  11. Approximate spatial reasoning

    NASA Technical Reports Server (NTRS)

    Dutta, Soumitra

    1988-01-01

    Much of human reasoning is approximate in nature. Formal models of reasoning traditionally try to be precise and reject the fuzziness of concepts in natural use, replacing them with non-fuzzy scientific explicata by a process of precisiation. As an alternative to this approach, it has been suggested that rather than regard human reasoning processes as themselves approximating to some more refined and exact logical process that can be carried out with mathematical precision, the essence and power of human reasoning is in its capability to grasp and use inexact concepts directly. This view is supported by the widespread fuzziness of simple everyday terms (e.g., near, tall) and the complexity of ordinary tasks (e.g., cleaning a room). Spatial reasoning is an area where humans consistently reason approximately with demonstrably good results. Consider the case of crossing a traffic intersection. We have only an approximate idea of the locations and speeds of various obstacles (e.g., persons and vehicles), but we nevertheless manage to cross such traffic intersections without any harm. The details of the mental processes which enable us to carry out such intricate tasks in such an apparently simple manner are not well understood. However, it is desirable that we try to incorporate such approximate reasoning techniques in our computer systems. Approximate spatial reasoning is very important for intelligent mobile agents (e.g., robots), especially for those operating in uncertain, unknown, or dynamic domains.

  12. Approximate kernel competitive learning.

    PubMed

    Wu, Jian-Sheng; Zheng, Wei-Shi; Lai, Jian-Huang

    2015-03-01

    Kernel competitive learning has been successfully used to achieve robust clustering. However, kernel competitive learning (KCL) is not scalable for large scale data processing, because (1) it has to calculate and store the full kernel matrix, which is too large to be calculated and kept in memory, and (2) it cannot be computed in parallel. In this paper we develop a framework of approximate kernel competitive learning for processing large scale datasets. The proposed framework consists of two parts. First, it derives an approximate kernel competitive learning (AKCL) method, which performs kernel competitive learning in a subspace via sampling. We provide solid theoretical analysis on why the proposed approximation modelling would work for kernel competitive learning, and furthermore, we show that the computational complexity of AKCL is largely reduced. Second, we propose a pseudo-parallelled approximate kernel competitive learning (PAKCL) method based on a set-based kernel competitive learning strategy, which overcomes the obstacle of using parallel programming in kernel competitive learning and significantly accelerates the approximate kernel competitive learning for large scale clustering. The empirical evaluation on publicly available datasets shows that the proposed AKCL and PAKCL can perform comparably to KCL, with a large reduction in computational cost. Also, the proposed methods achieve more effective clustering performance in terms of clustering precision against related approximate clustering approaches.

  13. Metabolic response of different high-intensity aerobic interval exercise protocols.

    PubMed

    Gosselin, Luc E; Kozlowski, Karl F; DeVinney-Boymel, Lee; Hambridge, Caitlin

    2012-10-01

    Although high-intensity sprint interval training (SIT) employing the Wingate protocol results in significant physiological adaptations, it is conducted at supramaximal intensity and is potentially unsafe for sedentary middle-aged adults. We therefore evaluated the metabolic and cardiovascular response in healthy young individuals performing 4 high-intensity (~90% VO2max) aerobic interval training (HIT) protocols with similar total work output but different work-to-rest ratios. Eight young physically active subjects participated in 5 different bouts of exercise over a 3-week period. Protocol 1 consisted of 20-minute continuous exercise at approximately 70% of VO2max, whereas protocols 2-5 were interval based with work/active-rest durations (in seconds) of 30/30, 60/30, 90/30, and 60/60, respectively. Each interval protocol resulted in approximately 10 minutes of exercise at a workload corresponding to approximately 90% VO2max, but differed in the total rest duration. The 90/30 HIT protocol resulted in the highest VO2, HR, rating of perceived exertion, and blood lactate, whereas the 30/30 protocol resulted in the lowest of these parameters. The total caloric energy expenditure was lowest in the 90/30 and 60/30 protocols (~150 kcal), whereas the other 3 protocols did not differ (~195 kcal) from one another. The immediate postexercise blood pressure response was similar across all the protocols. These findings indicate that HIT performed at approximately 90% of VO2max is no more physiologically taxing than is steady-state exercise conducted at 70% VO2max, but the response during HIT is influenced by the work-to-rest ratio. This interval protocol may be used as an alternative approach to steady-state exercise training but with less time commitment.

  14. Consistent Yokoya-Chen Approximation to Beamstrahlung (LCC-0010)

    SciTech Connect

    Peskin, M

    2004-04-22

    I reconsider the Yokoya-Chen approximate evolution equation for beamstrahlung and modify it slightly to generate simple, consistent analytical approximations for the electron and photon energy spectra. I compare these approximations to previous ones, and to simulation data.

  15. Approximate Brueckner orbitals in electron propagator calculations

    SciTech Connect

    Ortiz, J.V.

    1999-12-01

    Orbitals and ground-state correlation amplitudes from the so-called Brueckner doubles approximation of coupled-cluster theory provide a useful reference state for electron propagator calculations. An operator manifold with hole, particle, two-hole-one-particle and two-particle-one-hole components is chosen. The resulting approximation is closely related to the two-particle-one-hole Tamm-Dancoff approximation (2ph-TDA), third-order algebraic diagrammatic construction [ADC(3)], and 3+ methods. The enhanced versatility of this approximation is demonstrated through calculations on valence ionization energies, core ionization energies, electron detachment energies of anions, and on a molecule with partial biradical character, ozone.

  16. Ab initio dynamical vertex approximation

    NASA Astrophysics Data System (ADS)

    Galler, Anna; Thunström, Patrik; Gunacker, Patrik; Tomczak, Jan M.; Held, Karsten

    2017-03-01

    Diagrammatic extensions of dynamical mean-field theory (DMFT) such as the dynamical vertex approximation (DΓA) allow us to include nonlocal correlations beyond DMFT on all length scales and have proved their worth for model calculations. Here, we develop and implement an ab initio DΓA approach (AbinitioDΓA) for electronic structure calculations of materials. The starting point is the two-particle irreducible vertex in the two particle-hole channels, which is approximated by the bare nonlocal Coulomb interaction and all local vertex corrections. From this, we calculate the full nonlocal vertex and the nonlocal self-energy through the Bethe-Salpeter equation. The AbinitioDΓA approach naturally generates all local DMFT correlations and all nonlocal GW contributions, but also further nonlocal correlations beyond: mixed terms of the former two and nonlocal spin fluctuations. We apply this new methodology to the prototypical correlated metal SrVO₃.

  17. Multicriteria approximation through decomposition

    SciTech Connect

    Burch, C.; Krumke, S.; Marathe, M.; Phillips, C.; Sundberg, E.

    1998-06-01

    The authors propose a general technique called solution decomposition to devise approximation algorithms with provable performance guarantees. The technique is applicable to a large class of combinatorial optimization problems that can be formulated as integer linear programs. Two key ingredients of their technique involve finding a decomposition of a fractional solution into a convex combination of feasible integral solutions and devising generic approximation algorithms based on calls to such decompositions as oracles. The technique is closely related to randomized rounding. Their method yields as corollaries unified solutions to a number of well studied problems and it provides the first approximation algorithms with provable guarantees for a number of new problems. The particular results obtained in this paper include the following: (1) the authors demonstrate how the technique can be used to provide more understanding of previous results and new algorithms for classical problems such as Multicriteria Spanning Trees, and Suitcase Packing; (2) they also show how the ideas can be extended to apply to multicriteria optimization problems, in which they wish to minimize a certain objective function subject to one or more budget constraints. As corollaries they obtain the first non-trivial multicriteria approximation algorithms for problems including the k-Hurdle and the Network Inhibition problems.

  18. Multicriteria approximation through decomposition

    SciTech Connect

    Burch, C.; Krumke, S.; Marathe, M.; Phillips, C.; Sundberg, E.

    1997-12-01

    The authors propose a general technique called solution decomposition to devise approximation algorithms with provable performance guarantees. The technique is applicable to a large class of combinatorial optimization problems that can be formulated as integer linear programs. Two key ingredients of the technique involve finding a decomposition of a fractional solution into a convex combination of feasible integral solutions and devising generic approximation algorithms based on calls to such decompositions as oracles. The technique is closely related to randomized rounding. The method yields as corollaries unified solutions to a number of well studied problems and it provides the first approximation algorithms with provable guarantees for a number of new problems. The particular results obtained in this paper include the following: (1) The authors demonstrate how the technique can be used to provide more understanding of previous results and new algorithms for classical problems such as Multicriteria Spanning Trees, and Suitcase Packing. (2) They show how the ideas can be extended to apply to multicriteria optimization problems, in which they wish to minimize a certain objective function subject to one or more budget constraints. As corollaries they obtain the first non-trivial multicriteria approximation algorithms for problems including the k-Hurdle and the Network Inhibition problems.

  19. On Stochastic Approximation.

    ERIC Educational Resources Information Center

    Wolff, Hans

    This paper deals with a stochastic process for the approximation of the root of a regression equation. This process was first suggested by Robbins and Monro. The main result here is a necessary and sufficient condition on the iteration coefficients for convergence of the process (convergence with probability one and convergence in the quadratic…
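
    The Robbins-Monro process referred to is the iteration x_{n+1} = x_n - a_n Y_n, where Y_n is a noisy observation of the regression function at x_n and the gains satisfy sum a_n = infinity and sum a_n^2 < infinity. A minimal sketch (the target function, noise level, and gain sequence are illustrative choices):

```python
import random

def noisy_g(x):
    """Noisy observation of g(x) = x - 2 (true root at x = 2)."""
    return (x - 2.0) + random.gauss(0.0, 0.5)

random.seed(0)
x = 0.0
for n in range(1, 10001):
    a_n = 1.0 / n             # satisfies sum a_n = inf, sum a_n^2 < inf
    x = x - a_n * noisy_g(x)  # Robbins-Monro update
print(x)  # drifts toward the root x = 2
```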

  20. Approximating Integrals Using Probability

    ERIC Educational Resources Information Center

    Maruszewski, Richard F., Jr.; Caudle, Kyle A.

    2005-01-01

    As part of a discussion on Monte Carlo methods, this article outlines how to use probability expectations to approximate the value of a definite integral. The purpose of this paper is to elaborate on this technique and then to show several examples using Visual Basic as a programming tool. It is an interesting method because it combines two branches of…
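
    The identity behind the technique is that the integral of f over [a, b] equals (b - a) E[f(U)] for U uniform on (a, b), so a sample average of f at random points estimates the integral. The article's examples use Visual Basic; the following is a minimal equivalent sketch in Python:

```python
import random

def mc_integral(f, a, b, n=100000):
    """Estimate the integral of f on [a, b] as (b - a) times the mean
    of f at uniform random points, i.e. (b - a) * E[f(U)]."""
    total = sum(f(random.uniform(a, b)) for _ in range(n))
    return (b - a) * total / n

random.seed(42)
print(mc_integral(lambda x: x * x, 0.0, 1.0))  # ~1/3
```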

  1. Optimizing the Zeldovich approximation

    NASA Technical Reports Server (NTRS)

    Melott, Adrian L.; Pellman, Todd F.; Shandarin, Sergei F.

    1994-01-01

    We have recently learned that the Zeldovich approximation can be successfully used for a far wider range of gravitational instability scenarios than formerly proposed; we study here how to extend this range. In previous work (Coles, Melott and Shandarin 1993, hereafter CMS) we studied the accuracy of several analytic approximations to gravitational clustering in the mildly nonlinear regime. We found that what we called the 'truncated Zeldovich approximation' (TZA) was better than any other (except in one case the ordinary Zeldovich approximation) over a wide range from linear to mildly nonlinear (sigma approximately 3) regimes. TZA was specified by setting Fourier amplitudes equal to zero for all wavenumbers greater than k_nl, where k_nl marks the transition to the nonlinear regime. Here, we study the cross correlation of generalized TZA with a group of n-body simulations for three shapes of window function: sharp k-truncation (as in CMS), a tophat in coordinate space, or a Gaussian. We also study the variation in the crosscorrelation as a function of initial truncation scale within each type. We find that k-truncation, which was so much better than other things tried in CMS, is the worst of these three window shapes. We find that a Gaussian window exp(-k²/2k_G²) applied to the initial Fourier amplitudes is the best choice. It produces a greatly improved crosscorrelation in those cases which most needed improvement, e.g. those with more small-scale power in the initial conditions. The optimum choice of k_G for the Gaussian window is (a somewhat spectrum-dependent) 1 to 1.5 times k_nl. Although all three windows produce similar power spectra and density distribution functions after application of the Zeldovich approximation, the agreement of the phases of the Fourier components with the n-body simulation is better for the Gaussian window. We therefore ascribe the success of the best-choice Gaussian window to its superior treatment
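
    The Gaussian windowing step itself is a one-liner in Fourier space: multiply the initial amplitudes by exp(-k²/2k_G²) before applying the Zeldovich displacement. A sketch on a 2-D random field (the grid size, the use of white noise as a stand-in for a proper initial power spectrum, and the value of k_G are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k_G = 128, 0.5                       # grid size; window scale ~ k_nl

delta = rng.standard_normal((n, n))     # stand-in for the initial density field
dk = np.fft.fft2(delta)

kx = np.fft.fftfreq(n, d=1.0) * 2 * np.pi
k2 = kx[:, None] ** 2 + kx[None, :] ** 2
dk *= np.exp(-k2 / (2.0 * k_G ** 2))    # Gaussian window on Fourier amplitudes

delta_smooth = np.fft.ifft2(dk).real    # input for the Zeldovich displacement
print(delta.std(), delta_smooth.std())  # small-scale power is suppressed
```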

  2. Fermion tunneling beyond semiclassical approximation

    SciTech Connect

    Majhi, Bibhas Ranjan

    2009-02-15

    Applying the Hamilton-Jacobi method beyond the semiclassical approximation prescribed in R. Banerjee and B. R. Majhi, J. High Energy Phys. 06 (2008) 095 for the scalar particle, Hawking radiation as tunneling of the Dirac particle through an event horizon is analyzed. We show that, as before, all quantum corrections in the single particle action are proportional to the usual semiclassical contribution. We also compute the modifications to the Hawking temperature and Bekenstein-Hawking entropy for the Schwarzschild black hole. Finally, the coefficient of the logarithmic correction to entropy is shown to be related to the trace anomaly.

  3. Generalized Gradient Approximation Made Simple

    SciTech Connect

    Perdew, J.P.; Burke, K.; Ernzerhof, M.

    1996-10-01

    Generalized gradient approximations (GGA's) for the exchange-correlation energy improve upon the local spin density (LSD) description of atoms, molecules, and solids. We present a simple derivation of a simple GGA, in which all parameters (other than those in LSD) are fundamental constants. Only general features of the detailed construction underlying the Perdew-Wang 1991 (PW91) GGA are invoked. Improvements over PW91 include an accurate description of the linear response of the uniform electron gas, correct behavior under uniform scaling, and a smoother potential. © 1996 The American Physical Society.

  4. Applied Routh approximation

    NASA Technical Reports Server (NTRS)

    Merrill, W. C.

    1978-01-01

    The Routh approximation technique for reducing the complexity of system models was applied in the frequency domain to a 16th-order state variable model of the F100 engine and to a 43rd-order transfer function model of a launch vehicle boost pump pressure regulator. The results motivate extending the frequency domain formulation of the Routh method to the time domain in order to handle the state variable formulation directly. The time domain formulation was derived, and a characterization that specifies all possible Routh similarity transformations was given. The characterization was computed by solving two eigenvalue-eigenvector problems. The application of the time domain Routh technique to the state variable engine model is described, and some results are given. Additional computational problems are discussed, including an optimization procedure that can improve the approximation accuracy by taking advantage of the transformation characterization.

  5. Topics in Metric Approximation

    NASA Astrophysics Data System (ADS)

    Leeb, William Edward

    This thesis develops effective approximations of certain metrics that occur frequently in pure and applied mathematics. We show that distances that often arise in applications, such as the Earth Mover's Distance between two probability measures, can be approximated by easily computed formulas for a wide variety of ground distances. We develop simple and easily computed characterizations both of norms measuring a function's regularity -- such as the Lipschitz norm -- and of their duals. We are particularly concerned with the tensor product of metric spaces, where the natural notion of regularity is not the Lipschitz condition but the mixed Lipschitz condition. A theme that runs throughout this thesis is that snowflake metrics (metrics raised to a power less than 1) are often better-behaved than ordinary metrics. For example, we show that snowflake metrics on finite spaces can be approximated by the average of tree metrics with a distortion bounded by intrinsic geometric characteristics of the space and not the number of points. Many of the metrics for which we characterize the Lipschitz space and its dual are snowflake metrics. We also present applications of the characterization of certain regularity norms to the problem of recovering a matrix that has been corrupted by noise. We are able to achieve an optimal rate of recovery for certain families of matrices by exploiting the relationship between mixed-variable regularity conditions and the decay of a function's coefficients in a certain orthonormal basis.

  6. Validity of the site-averaging approximation for modeling the dissociative chemisorption of H₂ on the Cu(111) surface: A quantum dynamics study on two potential energy surfaces

    SciTech Connect

    Liu, Tianhui; Fu, Bina E-mail: zhangdh@dicp.ac.cn; Zhang, Dong H. E-mail: zhangdh@dicp.ac.cn

    2014-11-21

    A new finding on the site-averaging approximation was recently reported for the dissociative chemisorption of HCl/DCl on the Au(111) surface [T. Liu, B. Fu, and D. H. Zhang, J. Chem. Phys. 139, 184705 (2013); T. Liu, B. Fu, and D. H. Zhang, J. Chem. Phys. 140, 144701 (2014)]. Here, in order to investigate the dependence of the new site-averaging approximation on the initial vibrational state of H₂, as well as on the PES, for the dissociative chemisorption of H₂ on the Cu(111) surface at normal incidence, we carried out six-dimensional quantum dynamics calculations using the initial state-selected time-dependent wave packet approach, with H₂ initially in its ground vibrational state and its first vibrationally excited state. The corresponding four-dimensional site-specific dissociation probabilities are also calculated with H₂ fixed at the bridge, center, and top sites. These calculations are all performed on two different potential energy surfaces (PESs). It is found that the site-averaged dissociation probability over 15 fixed sites obtained from four-dimensional quantum dynamics calculations can accurately reproduce the six-dimensional dissociation probability for H₂ (v = 0) and (v = 1) on both PESs.

  7. Approximate option pricing

    SciTech Connect

    Chalasani, P.; Saias, I.; Jha, S.

    1996-04-08

    As increasingly large volumes of sophisticated options (called derivative securities) are traded in world financial markets, determining a fair price for these options has become an important and difficult computational problem. Many valuation codes use the binomial pricing model, in which the stock price is driven by a random walk. In this model, the value of an n-period option on a stock is the expected time-discounted value of the future cash flow on an n-period stock price path. Path-dependent options are particularly difficult to value since the future cash flow depends on the entire stock price path rather than on just the final stock price. Currently such options are approximately priced by Monte Carlo methods with error bounds that hold only with high probability and which are reduced by increasing the number of simulation runs. In this paper the authors show that pricing an arbitrary path-dependent option is #P-hard. They show that certain types of path-dependent options can be valued exactly in polynomial time. Asian options are path-dependent options that are particularly hard to price, and for these they design deterministic polynomial-time approximate algorithms. They show that the value of a perpetual American put option (which can be computed in constant time) is in many cases a good approximation to the value of an otherwise identical n-period American put option. In contrast to Monte Carlo methods, the algorithms have guaranteed error bounds that are polynomially small (and in some cases exponentially small) in the maturity n. For the error analysis they derive large-deviation results for random walks that may be of independent interest.
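
    For reference, the binomial model the paper starts from values an n-period American put by backward induction over a recombining price tree, checking early exercise at every node. A compact sketch (Cox-Ross-Rubinstein parameterization; the numbers are illustrative):

```python
import math

def american_put_binomial(S0, K, r, sigma, T, n):
    """n-period CRR binomial value of an American put."""
    dt = T / n
    u = math.exp(sigma * math.sqrt(dt))
    d = 1.0 / u
    disc = math.exp(-r * dt)
    p = (math.exp(r * dt) - d) / (u - d)   # risk-neutral up probability

    # terminal payoffs at step n
    values = [max(K - S0 * u**j * d**(n - j), 0.0) for j in range(n + 1)]
    # backward induction with an early-exercise check at each node
    for i in range(n - 1, -1, -1):
        values = [
            max(disc * (p * values[j + 1] + (1 - p) * values[j]),
                K - S0 * u**j * d**(i - j))
            for j in range(i + 1)
        ]
    return values[0]

print(american_put_binomial(S0=100, K=100, r=0.05, sigma=0.2, T=1.0, n=500))
```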

  8. Approximate Qualitative Temporal Reasoning

    DTIC Science & Technology

    2001-01-01

    Only OCR fragments of this report survive. The recoverable content concerns approximate qualitative temporal reasoning in which temporal entities are approximated with respect to partitions of the time-line, so that their boundaries coincide with the cell boundaries of the appropriate partition (e.g., "I felt well on Saturday. When I measured my temperature I had a fever on Monday and on..."), together with a cited reference on temporal granularity (Goralwalla, Leontiev, Özsu, Szafron, and Combi).

  9. New approach to description of (d,xn) spectra at energies below 50 MeV in Monte Carlo simulation by intra-nuclear cascade code with Distorted Wave Born Approximation

    NASA Astrophysics Data System (ADS)

    Hashimoto, S.; Iwamoto, Y.; Sato, T.; Niita, K.; Boudard, A.; Cugnon, J.; David, J.-C.; Leray, S.; Mancusi, D.

    2014-08-01

    A new approach to describing neutron spectra of deuteron-induced reactions in the Monte Carlo simulation for particle transport has been developed by combining the Intra-Nuclear Cascade of Liège (INCL) and the Distorted Wave Born Approximation (DWBA) calculation. We incorporated this combined method into the Particle and Heavy Ion Transport code System (PHITS) and applied it to estimate (d,xn) spectra on natLi, 9Be, and natC targets at incident energies ranging from 10 to 40 MeV. Double differential cross sections obtained by INCL and DWBA successfully reproduced broad peaks and discrete peaks, respectively, at the same energies as those observed in experimental data. Furthermore, an excellent agreement was observed between experimental data and PHITS-derived results using the combined method in thick target neutron yields over a wide range of neutron emission angles in the reactions. We also applied the new method to estimate (d,xp) spectra in the reactions, and discussed the validity for the proton emission spectra.

  10. Intervality and coherence in complex networks.

    PubMed

    Domínguez-García, Virginia; Johnson, Samuel; Muñoz, Miguel A

    2016-06-01

    Food webs, networks of predators and prey, have long been known to exhibit "intervality": species can generally be ordered along a single axis in such a way that the prey of any given predator tend to lie on unbroken compact intervals. Although the meaning of this axis, usually identified with a "niche" dimension, has remained a mystery, it is assumed to lie at the basis of the highly non-trivial structure of food webs. With this in mind, most trophic network modelling has for decades been based on assigning species a niche value by hand. However, we argue here that intervality should not be considered the cause but rather a consequence of food-web structure. First, analysing a set of 46 empirical food webs, we find that they also exhibit predator intervality: the predators of any given species are as likely to be contiguous as the prey are, but in a different ordering. Furthermore, this property is not exclusive of trophic networks: several networks of genes, neurons, metabolites, cellular machines, airports, and words are found to be approximately as interval as food webs. We go on to show that a simple model of food-web assembly which does not make use of a niche axis can nevertheless generate significant intervality. Therefore, the niche dimension (in the sense used for food-web modelling) could in fact be the consequence of other, more fundamental structural traits. We conclude that a new approach to food-web modelling is required for a deeper understanding of ecosystem assembly, structure, and function, and propose that certain topological features thought to be specific of food webs are in fact common to many complex networks.
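
    Operationally, intervality under a fixed ordering is simple to test: every predator's prey must occupy consecutive positions. A minimal check for a binary food web follows (a full analysis would also search over orderings and quantify the degree of intervality; the toy web is invented):

```python
def is_interval(adj, order):
    """True if, with species arranged by `order`, the prey of every
    predator form one unbroken block of consecutive positions."""
    pos = {sp: i for i, sp in enumerate(order)}
    for predator, prey in adj.items():
        idx = sorted(pos[q] for q in prey)
        if idx and idx[-1] - idx[0] + 1 != len(idx):  # gap in the block
            return False
    return True

web = {"A": {"B", "C"}, "B": {"C", "D"}, "C": {"D"}}
print(is_interval(web, ["A", "B", "C", "D"]))  # True: all prey sets contiguous
```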

  11. Intervality and coherence in complex networks

    NASA Astrophysics Data System (ADS)

    Domínguez-García, Virginia; Johnson, Samuel; Muñoz, Miguel A.

    2016-06-01

    Food webs—networks of predators and prey—have long been known to exhibit "intervality": species can generally be ordered along a single axis in such a way that the prey of any given predator tend to lie on unbroken compact intervals. Although the meaning of this axis—usually identified with a "niche" dimension—has remained a mystery, it is assumed to lie at the basis of the highly non-trivial structure of food webs. With this in mind, most trophic network modelling has for decades been based on assigning species a niche value by hand. However, we argue here that intervality should not be considered the cause but rather a consequence of food-web structure. First, analysing a set of 46 empirical food webs, we find that they also exhibit predator intervality: the predators of any given species are as likely to be contiguous as the prey are, but in a different ordering. Furthermore, this property is not exclusive of trophic networks: several networks of genes, neurons, metabolites, cellular machines, airports, and words are found to be approximately as interval as food webs. We go on to show that a simple model of food-web assembly which does not make use of a niche axis can nevertheless generate significant intervality. Therefore, the niche dimension (in the sense used for food-web modelling) could in fact be the consequence of other, more fundamental structural traits. We conclude that a new approach to food-web modelling is required for a deeper understanding of ecosystem assembly, structure, and function, and propose that certain topological features thought to be specific of food webs are in fact common to many complex networks.

  12. Exponential Approximations Using Fourier Series Partial Sums

    NASA Technical Reports Server (NTRS)

    Banerjee, Nana S.; Geer, James F.

    1997-01-01

    The problem of accurately reconstructing a piecewise smooth, 2π-periodic function f and its first few derivatives, given only a truncated Fourier series representation of f, is studied and solved. The reconstruction process is divided into two steps. In the first step, the first 2N + 1 Fourier coefficients of f are used to approximate the locations and magnitudes of the discontinuities in f and its first M derivatives. This is accomplished by first finding initial estimates of these quantities based on certain properties of the Gibbs phenomenon, and then refining these estimates by fitting the asymptotic form of the Fourier coefficients to the given coefficients using a least-squares approach. It is conjectured that the locations of the singularities are approximated to within O(N^(-M-2)), and the associated jump of the kth derivative of f is approximated to within O(N^(-M-1+k)), as N approaches infinity, and the method is robust. These estimates are then used with a class of singular basis functions, which have certain 'built-in' singularities, to construct a new sequence of approximations to f. Each of these new approximations is the sum of a piecewise smooth function and a new Fourier series partial sum. When N is proportional to M, it is shown that these new approximations, and their derivatives, converge exponentially in the maximum norm to f, and its corresponding derivatives, except in the union of a finite number of small open intervals containing the points of singularity of f. The total measure of these intervals decreases exponentially to zero as M approaches infinity. The technique is illustrated with several examples.

  13. Hierarchical Approximate Bayesian Computation

    PubMed Central

    Turner, Brandon M.; Van Zandt, Trisha

    2013-01-01

    Approximate Bayesian computation (ABC) is a powerful technique for estimating the posterior distribution of a model’s parameters. It is especially important when the model to be fit has no explicit likelihood function, which happens for computational (or simulation-based) models such as those that are popular in cognitive neuroscience and other areas in psychology. However, ABC is usually applied only to models with few parameters. Extending ABC to hierarchical models has been difficult because high-dimensional hierarchical models add computational complexity that conventional ABC cannot accommodate. In this paper we summarize some current approaches for performing hierarchical ABC and introduce a new algorithm called Gibbs ABC. This new algorithm incorporates well-known Bayesian techniques to improve the accuracy and efficiency of the ABC approach for estimation of hierarchical models. We then use the Gibbs ABC algorithm to estimate the parameters of two models of signal detection, one with and one without a tractable likelihood function. PMID:24297436
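
    At its core, ABC replaces likelihood evaluation with simulation: draw parameters from the prior, simulate data, and keep the draws whose simulated summaries land within a tolerance of the observed ones. A minimal rejection-ABC sketch on a Gaussian toy model (the summary statistic, tolerance, and prior are illustrative; this is plain rejection ABC, not the paper's Gibbs ABC algorithm):

```python
import random
import statistics

random.seed(7)
observed = [random.gauss(3.0, 1.0) for _ in range(100)]  # data, true mu = 3
obs_mean = statistics.mean(observed)                     # summary statistic

accepted = []
while len(accepted) < 200:
    mu = random.uniform(-10.0, 10.0)                     # draw from the prior
    sim = [random.gauss(mu, 1.0) for _ in range(100)]    # simulate data
    if abs(statistics.mean(sim) - obs_mean) < 0.1:       # tolerance epsilon
        accepted.append(mu)                              # approximate posterior

print(statistics.mean(accepted))  # ~3: posterior mean estimate
```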

  14. Approximated integrability of the Dicke model

    NASA Astrophysics Data System (ADS)

    Relaño, A.; Bastarrachea-Magnani, M. A.; Lerma-Hernández, S.

    2016-12-01

    A very approximate second integral of motion of the Dicke model is identified within a broad energy region above the ground state, and for a wide range of values of the external parameters. This second integral, obtained from a Born-Oppenheimer approximation, classifies the whole regular part of the spectrum in bands, coming from different semi-classical energy surfaces, and labelled by its corresponding eigenvalues. Results obtained from this approximation are compared with exact numerical diagonalization for finite systems in the superradiant phase, obtaining a remarkable accord. The region of validity of our approach in the parameter space, which includes the resonant case, is unveiled. The energy range of validity goes from the ground state up to a certain upper energy where chaos sets in, and extends far beyond the range of applicability of a simple harmonic approximation around the minimal energy configuration. The upper energy validity limit increases for larger values of the coupling constant and the ratio between the level splitting and the frequency of the field. These results show that the Dicke model behaves like a two-degree-of-freedom integrable model for a wide range of energies and values of the external parameters.

  15. Countably QC-Approximating Posets

    PubMed Central

    Mao, Xuxin; Xu, Luoshan

    2014-01-01

    As a generalization of countably C-approximating posets, the concept of countably QC-approximating posets is introduced. With the countably QC-approximating property, some characterizations of generalized completely distributive lattices and generalized countably approximating posets are given. The main results are as follows: (1) a complete lattice is generalized completely distributive if and only if it is countably QC-approximating and weakly generalized countably approximating; (2) a poset L having countably directed joins is generalized countably approximating if and only if the lattice σ_c(L)^op of all σ-Scott-closed subsets of L is weakly generalized countably approximating. PMID:25165730

  16. Temporal binding of interval markers

    PubMed Central

    Derichs, Christina; Zimmermann, Eckart

    2016-01-01

    How we estimate the passage of time is an unsolved mystery in neuroscience. Illusions of subjective time provide an experimental access to this question. Here we show that time compression and expansion of visually marked intervals result from a binding of temporal interval markers. Interval markers whose onset signals were artificially weakened by briefly flashing a whole-field mask were bound in time towards markers with a strong onset signal. We explain temporal compression as the consequence of summing response distributions of weak and strong onset signals. Crucially, temporal binding occurred irrespective of the temporal order of weak and strong onset markers, thus ruling out processing latencies as an explanation for changes in interval duration judgments. If both interval markers were presented together with a mask or the mask was shown in the temporal interval center, no compression occurred. In a sequence of two intervals, masking the middle marker led to time compression for the first and time expansion for the second interval. All these results are consistent with a model view of temporal binding that serves a functional role by reducing uncertainty in the final estimate of interval duration. PMID:27958311

  17. Effect Sizes, Confidence Intervals, and Confidence Intervals for Effect Sizes

    ERIC Educational Resources Information Center

    Thompson, Bruce

    2007-01-01

    The present article provides a primer on (a) effect sizes, (b) confidence intervals, and (c) confidence intervals for effect sizes. Additionally, various admonitions for reformed statistical practice are presented. For example, a very important implication of the realization that there are dozens of effect size statistics is that "authors must…

  18. Approximating maximum clique with a Hopfield network.

    PubMed

    Jagota, A

    1995-01-01

    In a graph, a clique is a set of vertices such that every pair is connected by an edge. MAX-CLIQUE is the optimization problem of finding the largest clique in a given graph and is NP-hard, even to approximate well. Several real-world and theory problems can be modeled as MAX-CLIQUE. In this paper, we efficiently approximate MAX-CLIQUE in a special case of the Hopfield network whose stable states are maximal cliques. We present several energy-descent optimizing dynamics; both discrete (deterministic and stochastic) and continuous. One of these emulates, as special cases, two well-known greedy algorithms for approximating MAX-CLIQUE. We report on detailed empirical comparisons on random graphs and on harder ones. Mean-field annealing, an efficient approximation to simulated annealing, and a stochastic dynamics are the narrow but clear winners. All dynamics approximate much better than one which emulates a "naive" greedy heuristic.
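
    For orientation, one of the discrete dynamics is said to emulate well-known greedy algorithms; the following is a sketch of that style of heuristic (an illustrative greedy clique builder, not the Hopfield dynamics themselves):

```python
def greedy_clique(adj):
    """Naive greedy MAX-CLIQUE heuristic: grow a clique by repeatedly
    adding the highest-degree vertex still adjacent to every member."""
    clique = set()
    candidates = set(adj)
    while candidates:
        v = max(candidates, key=lambda u: len(adj[u]))   # highest degree
        clique.add(v)
        # keep only candidates adjacent to the newly added vertex
        candidates = {u for u in candidates - {v} if v in adj[u]}
    return clique

g = {1: {2, 3}, 2: {1, 3, 4}, 3: {1, 2, 4}, 4: {2, 3}}
print(greedy_clique(g))  # e.g. {2, 3, 4} or {1, 2, 3}
```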

  19. [Normal confidence interval for a summary measure].

    PubMed

    Bernard, P M

    2000-10-01

    This paper proposes an approach for calculating the normal confidence interval of a weighted summary measure which requires a particular continuous transformation for its variance estimation. By using the transformation properties and applying the delta method, the variance of the transformed measure is easily expressed in terms of the variances of the transformed stratum-specific measures and the squared weights. The confidence limits of the summary measure are easily deduced by inverse transformation of those of the transformed measure. The method is illustrated by applying it to some well-known epidemiological measures. It seems appropriate for application in a stratified analysis context where sample size allows the normal approximation.
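
    A standard instance of this recipe is a pooled relative risk: combine stratum-specific log RRs with inverse-variance weights, form the normal interval on the log scale (where the delta method gives the variance), and back-transform the limits. A sketch with invented stratum values:

```python
import math

# Stratum-specific relative risks and variances of log(RR) (illustrative).
rr = [1.8, 2.2, 1.5]
var_log = [0.10, 0.08, 0.15]

w = [1.0 / v for v in var_log]                 # inverse-variance weights
log_pooled = sum(wi * math.log(r) for wi, r in zip(w, rr)) / sum(w)
var_pooled = 1.0 / sum(w)                      # variance on the log scale

z = 1.96                                       # 95% normal quantile
lo = math.exp(log_pooled - z * math.sqrt(var_pooled))  # inverse transform
hi = math.exp(log_pooled + z * math.sqrt(var_pooled))
print(f"pooled RR = {math.exp(log_pooled):.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```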

  20. Automatic Error Analysis Using Intervals

    ERIC Educational Resources Information Center

    Rothwell, E. J.; Cloud, M. J.

    2012-01-01

    A technique for automatic error analysis using interval mathematics is introduced. A comparison to standard error propagation methods shows that in cases involving complicated formulas, the interval approach gives comparable error estimates with much less effort. Several examples are considered, and numerical errors are computed using the INTLAB…
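
    The approach translates directly into code: represent each measured quantity as an interval [x - dx, x + dx] and evaluate the formula in interval arithmetic, so the width of the result bounds the propagated error. INTLAB itself is a MATLAB toolbox; the following Python stand-in implements just enough interval arithmetic for a small demo:

```python
class Interval:
    """Closed interval [lo, hi] with the arithmetic needed for the demo."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)
    def __mul__(self, other):
        ps = [self.lo * other.lo, self.lo * other.hi,
              self.hi * other.lo, self.hi * other.hi]
        return Interval(min(ps), max(ps))
    def __repr__(self):
        return f"[{self.lo:.4f}, {self.hi:.4f}]"

# Electrical power P = V * I with 1% uncertainty on each measurement.
V = Interval(4.95, 5.05)   # 5 V +/- 1%
I = Interval(1.98, 2.02)   # 2 A +/- 1%
print(V * I)               # [9.8010, 10.2010]: the error bound, ~ +/- 2%
```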

  1. Children's Discrimination of Melodic Intervals.

    ERIC Educational Resources Information Center

    Schellenberg, E. Glenn; Trehub, Sandra E.

    1996-01-01

    Adults and children listened to tone sequences and were required to detect changes either from intervals with simple frequency ratios to intervals with complex ratios or vice versa. Adults performed better on changes from simple to complex ratios than on the reverse changes. Similar performance was observed for 6-year olds who had never taken…

  2. Interval Recognition in Minimal Context.

    ERIC Educational Resources Information Center

    Shatzkin, Merton

    1984-01-01

    Music majors were asked to identify interval when it was either preceded or followed by a tone moving in the same direction. Difficulties in interval recognition in context appear to be an effect not just of placement within the context or of tonality, but of particular combinations of these aspects. (RM)

  3. Teaching Confidence Intervals Using Simulation

    ERIC Educational Resources Information Center

    Hagtvedt, Reidar; Jones, Gregory Todd; Jones, Kari

    2008-01-01

    Confidence intervals are difficult to teach, in part because most students appear to believe they understand how to interpret them intuitively. They rarely do. To help them abandon their misconception and achieve understanding, we have developed a simulation tool that encourages experimentation with multiple confidence intervals derived from the…
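
    The simulation idea is easy to reproduce: draw many samples, compute an interval from each, and count how often the intervals cover the true parameter. A sketch for the known-sigma normal case (sample size and parameters are arbitrary):

```python
import math
import random

random.seed(3)
mu, sigma, n, z = 10.0, 2.0, 25, 1.96
trials, covered = 10000, 0
for _ in range(trials):
    sample = [random.gauss(mu, sigma) for _ in range(n)]
    xbar = sum(sample) / n
    half = z * sigma / math.sqrt(n)      # known-sigma 95% interval half-width
    if xbar - half <= mu <= xbar + half:
        covered += 1
print(covered / trials)  # ~0.95: long-run coverage, not a per-interval probability
```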

  4. Explorations in Statistics: Confidence Intervals

    ERIC Educational Resources Information Center

    Curran-Everett, Douglas

    2009-01-01

    Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This third installment of "Explorations in Statistics" investigates confidence intervals. A confidence interval is a range that we expect, with some level of confidence, to include the true value of a population parameter…

  5. VARIABLE TIME-INTERVAL GENERATOR

    DOEpatents

    Gross, J.E.

    1959-10-31

    This patent relates to a pulse generator and more particularly to a time interval generator wherein the time interval between pulses is precisely determined. The variable time generator comprises two oscillators with one having a variable frequency output and the other a fixed frequency output. A frequency divider is connected to the variable oscillator for dividing its frequency by a selected factor and a counter is used for counting the periods of the fixed oscillator occurring during a cycle of the divided frequency of the variable oscillator. This defines the period of the variable oscillator in terms of that of the fixed oscillator. A circuit is provided for selecting as a time interval a predetermined number of periods of the variable oscillator. The output of the generator consists of a first pulse produced by a trigger circuit at the start of the time interval and a second pulse marking the end of the time interval produced by the same trigger circuit.

  6. Confidence Intervals for a Mean and a Proportion in the Bounded Case.

    DTIC Science & Technology

    1986-11-01

    This paper describes a 100(1-α)% confidence interval for the mean of a bounded random variable which is shorter than the interval that Chebyshev's inequality induces for small α and which avoids the error of approximation that assuming normality induces. The paper also presents an analogous development for deriving a 100(1-α)% confidence interval for a proportion.

  7. Corrected profile likelihood confidence interval for binomial paired incomplete data.

    PubMed

    Pradhan, Vivek; Menon, Sandeep; Das, Ujjwal

    2013-01-01

    Clinical trials often use paired binomial data as their clinical endpoint. The confidence interval is frequently used to estimate the treatment performance. Tang et al. (2009) have proposed exact and approximate unconditional methods for constructing a confidence interval in the presence of incomplete paired binary data. The approach proposed by Tang et al. can be overly conservative, with a large expected confidence interval width (ECIW), in some situations. We propose a profile likelihood-based method with a Jeffreys' prior correction to construct the confidence interval. This approach generates confidence intervals with much better coverage probability and shorter ECIWs. The performance of the method along with the corrections is demonstrated through extensive simulation. Finally, three real-world data sets are analyzed by all the methods. Statistical Analysis System (SAS) codes to execute the profile likelihood-based methods are also presented.

  8. An approximation based global optimization strategy for structural synthesis

    NASA Technical Reports Server (NTRS)

    Sepulveda, A. E.; Schmit, L. A.

    1991-01-01

    A global optimization strategy for structural synthesis based on approximation concepts is presented. The methodology involves the solution of a sequence of highly accurate approximate problems using a global optimization algorithm. The global optimization algorithm implemented consists of a branch and bound strategy based on the interval evaluation of the objective function and constraint functions, combined with a local feasible directions algorithm. The approximate design optimization problems are constructed using first order approximations of selected intermediate response quantities in terms of intermediate design variables. Some numerical results for example problems are presented to illustrate the efficacy of the design procedure set forth.
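
    The global stage described here rests on interval branch and bound: bound the objective over a box using interval evaluation, discard boxes whose lower bound cannot beat the incumbent, and split the survivors. A 1-D sketch with a hand-rolled bound (the objective, domain, and tolerance are illustrative, and the local feasible-directions stage is omitted):

```python
import math

def f(x):                      # objective to minimize
    return (x - 2.0) ** 2 + math.sin(5.0 * x)

def f_lower(lo, hi):
    """Cheap interval lower bound on f over [lo, hi]: the minimum of
    (x - 2)^2 on the box plus the crude bound sin(.) >= -1."""
    sq = 0.0 if lo <= 2.0 <= hi else min((lo - 2.0) ** 2, (hi - 2.0) ** 2)
    return sq - 1.0

best_x, best_val = 0.0, f(0.0)
boxes = [(-4.0, 8.0)]
while boxes:
    lo, hi = boxes.pop()
    if f_lower(lo, hi) > best_val:      # bound: box cannot beat the incumbent
        continue
    mid = 0.5 * (lo + hi)
    if f(mid) < best_val:               # update incumbent at the midpoint
        best_x, best_val = mid, f(mid)
    if hi - lo > 1e-4:                  # branch: split the box
        boxes += [(lo, mid), (mid, hi)]
print(best_x, best_val)
```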

  9. Energy.

    ERIC Educational Resources Information Center

    Online-Offline, 1998

    1998-01-01

    This issue focuses on the theme of "Energy," and describes several educational resources (Web sites, CD-ROMs and software, videos, books, activities, and other resources). Sidebars offer features on alternative energy, animal energy, internal combustion engines, and energy from food. Subthemes include harnessing energy, human energy, and…

  10. Tuning for temporal interval in human apparent motion detection.

    PubMed

    Bours, Roger J E; Stuur, Sanne; Lankheet, Martin J M

    2007-01-08

    Detection of apparent motion in random dot patterns requires correlation across time and space. It has been difficult to study the temporal requirements for the correlation step because motion detection also depends on temporal filtering preceding correlation and on integration at the next levels. To specifically study tuning for temporal interval in the correlation step, we performed an experiment in which prefiltering and postintegration were held constant and in which we used a motion stimulus containing coherent motion for a single interval value only. The stimulus consisted of a sparse random dot pattern in which each dot was presented in two frames only, separated by a specified interval. On each frame, half of the dots were refreshed and the other half was a displaced reincarnation of the pattern generated one or several frames earlier. Motion energy statistics in such a stimulus do not vary from frame to frame, and the directional bias in spatiotemporal correlations is similar for different interval settings. We measured coherence thresholds for left-right direction discrimination by varying motion coherence levels in a Quest staircase procedure, as a function of both step size and interval. Results show that highest sensitivity was found for an interval of 17-42 ms, irrespective of viewing distance. The falloff at longer intervals was much sharper than previously described. Tuning for temporal interval was largely, but not completely, independent of step size. The optimal temporal interval slightly decreased with increasing step size. Similarly, the optimal step size decreased with increasing temporal interval.

  11. Rational-spline approximation with automatic tension adjustment

    NASA Technical Reports Server (NTRS)

    Schiess, J. R.; Kerr, P. A.

    1984-01-01

    An algorithm for weighted least-squares approximation with rational splines is presented. A rational spline is a cubic function containing a distinct tension parameter for each interval defined by two consecutive knots. For zero tension, the rational spline is identical to a cubic spline; for very large tension, the rational spline is a linear function. The approximation algorithm incorporates an algorithm which automatically adjusts the tension on each interval to fulfill a user-specified criterion. Finally, an example is presented comparing results of the rational spline with those of the cubic spline.

  12. The Role of Higher Harmonics In Musical Interval Perception

    NASA Astrophysics Data System (ADS)

    Krantz, Richard; Douthett, Jack

    2011-10-01

    Using an alternative parameterization of the roughness curve, we make direct use of critical band results to investigate the role of higher harmonics in the perception of tonal consonance. We scale the spectral amplitudes in the complex home tone and complex interval tone to simulate acoustic signals of constant energy. Our analysis reveals that even a relatively small addition of higher harmonics makes the perfect fifth emerge as a consonant interval, and that more of the musically important just intervals emerge as consonant as more and more energy is shifted into the higher frequencies.
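
    The paper's alternative parameterization is not reproduced in this record; the sketch below instead uses the commonly quoted Sethares parameterization of the Plomp-Levelt roughness curve (its constants are an assumption here) to total the pairwise roughness over the partials of a home tone and an interval tone. With six partials, the fifth should show much less total roughness than the tritone.

```python
import numpy as np

def roughness(f1, a1, f2, a2, s1=0.0207, s2=18.96, b1=3.5, b2=5.75):
    """Pairwise Plomp-Levelt roughness in Sethares' parameterization
    (an assumption; the paper uses its own parameterization)."""
    fmin, d = min(f1, f2), abs(f2 - f1)
    s = 0.24 / (s1 * fmin + s2)
    return a1 * a2 * (np.exp(-b1 * s * d) - np.exp(-b2 * s * d))

def dissonance(ratio, n_harmonics=6, rolloff=0.88):
    """Total roughness between a complex home tone at 440 Hz and an
    interval tone at 440*ratio Hz, each with n partials."""
    base = 440.0
    tones = [(base * k, rolloff**k) for k in range(1, n_harmonics + 1)] + \
            [(base * ratio * k, rolloff**k) for k in range(1, n_harmonics + 1)]
    return sum(roughness(f1, a1, f2, a2)
               for i, (f1, a1) in enumerate(tones)
               for (f2, a2) in tones[i+1:])

for name, r in [("unison", 1.0), ("fifth", 1.5), ("tritone", 45/32)]:
    print(name, round(dissonance(r), 3))
```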

  13. TIME-INTERVAL MEASURING DEVICE

    DOEpatents

    Gross, J.E.

    1958-04-15

    An electronic device for measuring the time interval between two control pulses is presented. The device incorporates part of a previous approach for time measurement, in that pulses from a constant-frequency oscillator are counted during the interval between the control pulses. To reduce the possible error in counting caused by the operation of the counter gating circuit at various points in the pulse cycle, the described device provides means for successively delaying the pulses by a fraction of the pulse period so that a final delay of one period is obtained, and means for counting the pulses before and after each stage of delay during the time interval, whereby a plurality of totals is obtained which may be averaged and multiplied by the pulse period to obtain an accurate time-interval measurement.
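
    A toy numeric illustration of the averaging principle (not the circuit): counting whole clock periods at several sub-period phase offsets and averaging the totals recovers the interval to a fraction of the clock period.

```python
import numpy as np

def averaged_count(interval, period, n_delays):
    """Count whole clock periods at n_delays phase offsets spanning one
    period, then average the totals and multiply by the pulse period."""
    offsets = np.arange(n_delays) * period / n_delays
    counts = np.floor((interval + offsets) / period)
    return counts.mean() * period

period, interval = 1.0, 7.37
print(np.floor(interval / period) * period)    # single count: 7.0 (error up to 1 period)
print(averaged_count(interval, period, 16))    # 7.3125 (resolution ~period/16)
```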

  14. DALI: Derivative Approximation for LIkelihoods

    NASA Astrophysics Data System (ADS)

    Sellentin, Elena

    2015-07-01

    DALI (Derivative Approximation for LIkelihoods) is a fast approximation of non-Gaussian likelihoods. It extends the Fisher Matrix in a straightforward way and allows for a wider range of posterior shapes. The code is written in C/C++.

  15. Eight-moment approximation solar wind models

    NASA Technical Reports Server (NTRS)

    Olsen, Espen Lyngdal; Leer, Egil

    1995-01-01

    Heat conduction from the corona is important in the solar wind energy budget. Until now all hydrodynamic solar wind models have been using the collisionally dominated gas approximation for the heat conductive flux. Observations of the solar wind show particle distribution functions which deviate significantly from a Maxwellian, and it is clear that the solar wind plasma is far from collisionally dominated. We have developed a numerical model for the solar wind which solves the full equation for the heat conductive flux together with the conservation equations for mass, momentum, and energy. The equations are obtained by taking moments of the Boltzmann equation, using an 8-moment approximation for the distribution function. For low-density solar winds the 8-moment approximation models give results which differ significantly from the results obtained in models assuming the gas to be collisionally dominated. The two models give more or less the same results in high density solar winds.

  16. Approximation methods in gravitational-radiation theory

    NASA Astrophysics Data System (ADS)

    Will, C. M.

    1986-02-01

    The observation of gravitational-radiation damping in the binary pulsar PSR 1913+16 and the ongoing experimental search for gravitational waves of extraterrestrial origin have made the theory of gravitational radiation an active branch of classical general relativity. In calculations of gravitational radiation, approximation methods play a crucial role. The author summarizes recent developments in two areas in which approximations are important: (1) the quadrupole approximation, which determines the energy flux and the radiation reaction forces in weak-field, slow-motion, source-within-the-near-zone systems such as the binary pulsar; and (2) the normal modes of oscillation of black holes, where the Wentzel-Kramers-Brillouin approximation gives accurate estimates of the complex frequencies of the modes.

  17. Taylor Approximations and Definite Integrals

    ERIC Educational Resources Information Center

    Gordon, Sheldon P.

    2007-01-01

    We investigate the possibility of approximating the value of a definite integral by approximating the integrand rather than using numerical methods to approximate the value of the definite integral. Particular cases considered include examples where the integral is improper, such as an elliptic integral. (Contains 4 tables and 2 figures.)
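
    A worked example of the approach: integrating the Taylor series of exp(-x^2) term by term approximates the definite integral without numerical quadrature (improper cases such as elliptic integrals follow the same pattern).

```python
import math

# I = int_0^1 exp(-x^2) dx, with exp(-x^2) = sum (-1)^k x^(2k) / k!,
# so integrating term by term gives I ~ sum (-1)^k / (k! * (2k+1)).
def taylor_integral(n_terms):
    return sum((-1)**k / (math.factorial(k) * (2*k + 1)) for k in range(n_terms))

for n in (2, 4, 6, 10):
    print(n, taylor_integral(n))   # converges quickly to 0.74682413...
```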

  18. Partitioned-Interval Quantum Optical Communications Receiver

    NASA Technical Reports Server (NTRS)

    Vilnrotter, Victor A.

    2013-01-01

    The proposed quantum receiver in this innovation partitions each binary signal interval into two unequal segments: a short "pre-measurement" segment in the beginning of the symbol interval used to make an initial guess with better probability than 50/50 guessing, and a much longer segment used to make the high-sensitivity signal detection via field-cancellation and photon-counting detection. It was found that by assigning as little as 10% of the total signal energy to the pre-measurement segment, the initial 50/50 guess can be improved to about 70/30, using the best available measurements such as classical coherent or "optimized Kennedy" detection.

  19. Confidence intervals for effect parameters common in cancer epidemiology.

    PubMed Central

    Sato, T

    1990-01-01

    This paper reviews approximate confidence intervals for some effect parameters common in cancer epidemiology. These methods have computational feasibility and give nearly nominal coverage rates. In the analysis of crude data, the simplest type of epidemiologic analysis, parameters of interest are the odds ratio in case-control studies and the rate ratio and difference in cohort studies. These parameters can estimate the instantaneous-incidence-rate ratio and difference that are the most meaningful effect measures in cancer epidemiology. Approximate confidence intervals for these parameters including the classical Cornfield's method are mainly based on efficient scores. When some confounding factors exist, stratified analysis and summary measures for effect parameters are needed. Since the Mantel-Haenszel estimators have been widely used by epidemiologists as summary measures, confidence intervals based on the Mantel-Haenszel estimators are described. The paper also discusses recent developments in these methods. PMID:2269246
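
    As a minimal concrete instance of such an approximate interval (the Woolf logit method, which is simpler than the score-based intervals the review emphasizes):

```python
import math

def woolf_or_ci(a, b, c, d, z=1.96):
    """Approximate 95% CI for the odds ratio of a 2x2 table [[a, b], [c, d]]
    via the normal approximation to the log odds ratio (Woolf's method)."""
    log_or = math.log((a * d) / (b * c))
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)
    return math.exp(log_or - z * se), math.exp(log_or + z * se)

# exposed cases=20, exposed controls=10, unexposed cases=15, unexposed controls=30
print(woolf_or_ci(20, 10, 15, 30))   # OR = 4.0, CI roughly (1.5, 10.7)
```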

  20. An approximate classical unimolecular reaction rate theory

    NASA Astrophysics Data System (ADS)

    Zhao, Meishan; Rice, Stuart A.

    1992-05-01

    We describe a classical theory of unimolecular reaction rate which is derived from the analysis of Davis and Gray by use of simplifying approximations. These approximations concern the calculation of the locations of, and the fluxes of phase points across, the bottlenecks to fragmentation and to intramolecular energy transfer. The bottleneck to fragment separation is represented as a vibration-rotation state dependent separatrix, which approximation is similar to but extends and improves the approximations for the separatrix introduced by Gray, Rice, and Davis and by Zhao and Rice. The novel feature in our analysis is the representation of the bottlenecks to intramolecular energy transfer as dividing surfaces in phase space; the locations of these dividing surfaces are determined by the same conditions as locate the remnants of robust tori with frequency ratios related to the golden mean (in a two degree of freedom system these are the cantori). The flux of phase points across each dividing surface is calculated with an analytic representation instead of a stroboscopic mapping. The rate of unimolecular reaction is identified with the net rate at which phase points escape from the region of quasiperiodic bounded motion to the region of free fragment motion by consecutively crossing the dividing surfaces for intramolecular energy exchange and the separatrix. This new theory generates predictions of the rates of predissociation of the van der Waals molecules HeI2, NeI2 and ArI2 which are in very good agreement with available experimental data.

  1. High resolution time interval meter

    DOEpatents

    Martin, A.D.

    1986-05-09

    Method and apparatus are provided for measuring the time interval between two events to a higher resolution than reliability available from conventional circuits and component. An internal clock pulse is provided at a frequency compatible with conventional component operating frequencies for reliable operation. Lumped constant delay circuits are provided for generating outputs at delay intervals corresponding to the desired high resolution. An initiation START pulse is input to generate first high resolution data. A termination STOP pulse is input to generate second high resolution data. Internal counters count at the low frequency internal clock pulse rate between the START and STOP pulses. The first and second high resolution data are logically combined to directly provide high resolution data to one counter and correct the count in the low resolution counter to obtain a high resolution time interval measurement.

  2. Finding Nested Common Intervals Efficiently

    NASA Astrophysics Data System (ADS)

    Blin, Guillaume; Stoye, Jens

    In this paper, we study the problem of efficiently finding gene clusters formalized by nested common intervals between two genomes represented either as permutations or as sequences. Considering permutations, we give several algorithms whose running time depends on the size of the actual output rather than the output in the worst case. Indeed, we first provide a straightforward O(n 3) time algorithm for finding all nested common intervals. We reduce this complexity by providing an O(n 2) time algorithm computing an irredundant output. Finally, we show, by providing a third algorithm, that finding only the maximal nested common intervals can be done in linear time. Considering sequences, we provide solutions (modifications of previously defined algorithms and a new algorithm) for different variants of the problem, depending on the treatment one wants to apply to duplicated genes.
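
    The nested-interval algorithms themselves are not reproduced in this record; the sketch below shows only the classic max-minus-min test on which quadratic-time common-interval detection in permutations rests:

```python
def common_intervals(p, q):
    """All common intervals of permutations p and q (0-based): positions
    i..j of p form a common interval iff max(pos) - min(pos) == j - i,
    where pos maps the values through their positions in q. O(n^2)
    candidate ranges with O(1) incremental max/min updates."""
    pos_in_q = {v: i for i, v in enumerate(q)}
    n, out = len(p), []
    for i in range(n):
        lo = hi = pos_in_q[p[i]]
        for j in range(i + 1, n):
            x = pos_in_q[p[j]]
            lo, hi = min(lo, x), max(hi, x)
            if hi - lo == j - i:
                out.append((i, j))
    return out

print(common_intervals([0, 1, 2, 3, 4], [1, 0, 2, 4, 3]))
# [(0, 1), (0, 2), (0, 4), (2, 4), (3, 4)]
```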

  3. An improved proximity force approximation for electrostatics

    SciTech Connect

    Fosco, Cesar D.; Lombardo, Fernando C.; Mazzitelli, Francisco D.

    2012-08-15

    A quite straightforward approximation for the electrostatic interaction between two perfectly conducting surfaces suggests itself when the distance between them is much smaller than the characteristic lengths associated with their shapes. Indeed, in the so-called 'proximity force approximation' the electrostatic force is evaluated by first dividing each surface into a set of small flat patches, and then adding up the forces due to opposite pairs, the contributions of which are approximated as due to pairs of parallel planes. This approximation has been widely and successfully applied in different contexts, ranging from nuclear physics to Casimir effect calculations. We present here an improvement on this approximation, based on a derivative expansion for the electrostatic energy contained between the surfaces. The results obtained could be useful for discussing the geometric dependence of the electrostatic force, and also as a convenient benchmark for numerical analyses of the tip-sample electrostatic interaction in atomic force microscopes. Highlights: (i) The proximity force approximation (PFA) has been widely used in different areas. (ii) The PFA can be improved using a derivative expansion in the shape of the surfaces. (iii) We use the improved PFA to compute electrostatic forces between conductors. (iv) The results can be used as an analytic benchmark for numerical calculations in AFM. (v) Insight is provided for people who use the PFA to compute nuclear and Casimir forces.
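
    A numeric sketch of the plain PFA for the sphere-plane geometry (the paper's derivative-expansion correction is not included): integrating the parallel-plate pressure over annular patches reproduces the closed-form PFA force pi*eps0*R*V^2/D.

```python
import numpy as np
from scipy.integrate import quad

EPS0 = 8.8541878128e-12  # F/m

def pfa_sphere_plane_force(R, D, V):
    """Plain PFA: tile the sphere into annular patches at local gap
    d(r) = D + r^2/(2R) and add up parallel-plate pressures eps0*V^2/(2 d^2)."""
    integrand = lambda r: 2*np.pi*r * EPS0 * V**2 / (2 * (D + r**2/(2*R))**2)
    F, _ = quad(integrand, 0, np.inf)
    return F

R, D, V = 1e-6, 1e-8, 1.0                 # AFM-like tip radius, gap, voltage
print(pfa_sphere_plane_force(R, D, V))    # numerical PFA integral
print(np.pi * EPS0 * R * V**2 / D)        # closed form: the two agree
```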

  4. Combining global and local approximations

    NASA Technical Reports Server (NTRS)

    Haftka, Raphael T.

    1991-01-01

    A method based on a linear approximation to a scaling factor, designated the 'global-local approximation' (GLA) method, is presented and shown capable of extending the range of usefulness of derivative-based approximations to a more refined model. The GLA approach refines the conventional scaling factor by means of a linearly varying, rather than constant, scaling factor. The capabilities of the method are demonstrated for a simple beam example with a crude and more refined FEM model.
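
    A minimal one-dimensional sketch of the scaling-factor idea, with hypothetical crude and refined models standing in for the two FEM models:

```python
# Toy "crude" and "refined" models of the same response (hypothetical).
f_crude = lambda x: 1.0 / x
f_refined = lambda x: 1.0 / x + 0.1 * x       # pretend this one is expensive

def gla(x0, h=1e-5):
    """Global-local approximation: scale the crude model by a scaling factor
    beta(x) = f_refined(x)/f_crude(x), linearized at x0 (finite difference)."""
    beta0 = f_refined(x0) / f_crude(x0)
    dbeta = (f_refined(x0 + h) / f_crude(x0 + h) - beta0) / h
    return lambda x: (beta0 + dbeta * (x - x0)) * f_crude(x)

approx = gla(x0=1.0)
for x in (1.0, 1.5, 2.0):
    # the linearly varying beta tracks the refined model much further
    # from x0 than a constant scaling factor would
    print(x, approx(x), f_refined(x))
```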

  5. Combining global and local approximations

    SciTech Connect

    Haftka, R.T. )

    1991-09-01

    A method based on a linear approximation to a scaling factor, designated the 'global-local approximation' (GLA) method, is presented and shown capable of extending the range of usefulness of derivative-based approximations to a more refined model. The GLA approach refines the conventional scaling factor by means of a linearly varying, rather than constant, scaling factor. The capabilities of the method are demonstrated for a simple beam example with a crude and more refined FEM model. 6 refs.

  6. Approximating Functions with Exponential Functions

    ERIC Educational Resources Information Center

    Gordon, Sheldon P.

    2005-01-01

    The possibility of approximating a function with a linear combination of exponential functions of the form e[superscript x], e[superscript 2x], ... is considered as a parallel development to the notion of Taylor polynomials which approximate a function with a linear combination of power function terms. The sinusoidal functions sin "x" and cos "x"…
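
    A small sketch of the idea (the fitting procedure is an assumption; the article's own development may differ): a least-squares fit of sin(x) on [0, 1] by a combination of e^x, e^(2x), e^(3x), paralleling Taylor fits by power functions.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 200)
basis = np.column_stack([np.exp(k * x) for k in (1, 2, 3)])   # e^x, e^2x, e^3x
coeffs, *_ = np.linalg.lstsq(basis, np.sin(x), rcond=None)
fit = basis @ coeffs
print(coeffs)                             # combination weights
print(np.max(np.abs(fit - np.sin(x))))    # max fit error on the interval
```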

  7. High resolution time interval counter

    NASA Technical Reports Server (NTRS)

    Zhang, Victor S.; Davis, Dick D.; Lombardi, Michael A.

    1995-01-01

    In recent years, we have developed two types of high resolution, multi-channel time interval counters. In the NIST two-way time transfer MODEM application, the counter is designed for operating primarily in the interrupt-driven mode, with 3 start channels and 3 stop channels. The intended start and stop signals are 1 PPS, although other frequencies can also be applied to start and stop the count. The time interval counters used in the NIST Frequency Measurement and Analysis System are implemented with 7 start channels and 7 stop channels. Four of the 7 start channels are devoted to the frequencies of 1 MHz, 5 MHz or 10 MHz, while triggering signals to all other start and stop channels can range from 1 PPS to 100 kHz. Time interval interpolation plays a key role in achieving the high resolution time interval measurements for both counters. With a 10 MHz time base, both counters demonstrate a single-shot resolution of better than 40 ps, and a stability of better than 5 x 10^-12 (sigma_x(tau)) after a self test of 1000 seconds. The maximum rate of time interval measurements (with no dead time) is 1.0 kHz for the counter used in the MODEM application and is 2.0 kHz for the counter used in the Frequency Measurement and Analysis System. The counters are implemented as plug-in units for an AT-compatible personal computer. This configuration provides an efficient way of using a computer not only to control and operate the counters, but also to store and process measured data.

  8. Approximation methods in gravitational-radiation theory

    NASA Technical Reports Server (NTRS)

    Will, C. M.

    1986-01-01

    The observation of gravitational-radiation damping in the binary pulsar PSR 1913 + 16 and the ongoing experimental search for gravitational waves of extraterrestrial origin have made the theory of gravitational radiation an active branch of classical general relativity. In calculations of gravitational radiation, approximation methods play a crucial role. Recent developments are summarized in two areas in which approximations are important: (a) the quadrupole approximation, which determines the energy flux and the radiation reaction forces in weak-field, slow-motion, source-within-the-near-zone systems such as the binary pulsar; and (b) the normal modes of oscillation of black holes, where the Wentzel-Kramers-Brillouin approximation gives accurate estimates of the complex frequencies of the modes.

  9. Approximate circuits for increased reliability

    DOEpatents

    Hamlet, Jason R.; Mayo, Jackson R.

    2015-12-22

    Embodiments of the invention describe a Boolean circuit having a voter circuit and a plurality of approximate circuits each based, at least in part, on a reference circuit. The approximate circuits are each to generate one or more output signals based on values of received input signals. The voter circuit is to receive the one or more output signals generated by each of the approximate circuits, and is to output one or more signals corresponding to a majority value of the received signals. At least some of the approximate circuits are to generate an output value different than the reference circuit for one or more input signal values; however, for each possible input signal value, the majority values of the one or more output signals generated by the approximate circuits and received by the voter circuit correspond to output signal result values of the reference circuit.
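
    A toy software model of the scheme, with three hypothetical approximate adders whose errors occur on disjoint input sets, so the bitwise majority always matches the reference circuit:

```python
# Hypothetical approximate 8-bit adders: each deviates from the reference
# (x + y) & 0xFF only on its own disjoint slice of the input space.
def adder_a(x, y): return (x + y + 1) & 0xFF if x == 7 else (x + y) & 0xFF
def adder_b(x, y): return (x + y - 1) & 0xFF if x == 200 else (x + y) & 0xFF
def adder_c(x, y): return (x + y) & 0xFF

def majority(a, b, c):
    """Bitwise voter: each output bit is the majority of the three inputs."""
    return (a & b) | (a & c) | (b & c)

ok = all(majority(adder_a(x, y), adder_b(x, y), adder_c(x, y)) == (x + y) & 0xFF
         for x in range(256) for y in range(256))
print(ok)   # True: at most one circuit is wrong per input, so every bit
            # still has a two-of-three correct majority
```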

  10. Approximate circuits for increased reliability

    SciTech Connect

    Hamlet, Jason R.; Mayo, Jackson R.

    2015-08-18

    Embodiments of the invention describe a Boolean circuit having a voter circuit and a plurality of approximate circuits each based, at least in part, on a reference circuit. The approximate circuits are each to generate one or more output signals based on values of received input signals. The voter circuit is to receive the one or more output signals generated by each of the approximate circuits, and is to output one or more signals corresponding to a majority value of the received signals. At least some of the approximate circuits are to generate an output value different than the reference circuit for one or more input signal values; however, for each possible input signal value, the majority values of the one or more output signals generated by the approximate circuits and received by the voter circuit correspond to output signal result values of the reference circuit.

  11. Photoelectron spectroscopy and the dipole approximation

    SciTech Connect

    Hemmers, O.; Hansen, D.L.; Wang, H.

    1997-04-01

    Photoelectron spectroscopy is a powerful technique because it directly probes, via the measurement of photoelectron kinetic energies, orbital and band structure in valence and core levels in a wide variety of samples. The technique becomes even more powerful when it is performed in an angle-resolved mode, where photoelectrons are distinguished not only by their kinetic energy, but by their direction of emission as well. Determining the probability of electron ejection as a function of angle probes the different quantum-mechanical channels available to a photoemission process, because it is sensitive to phase differences among the channels. As a result, angle-resolved photoemission has been used successfully for many years to provide stringent tests of the understanding of basic physical processes underlying gas-phase and solid-state interactions with radiation. One mainstay in the application of angle-resolved photoelectron spectroscopy is the well-known electric-dipole approximation for photon interactions. In this simplification, all higher-order terms, such as those due to electric-quadrupole and magnetic-dipole interactions, are neglected. As the photon energy increases, however, effects beyond the dipole approximation become important. To best determine the range of validity of the dipole approximation, photoemission measurements on a simple atomic system, neon, where extra-atomic effects cannot play a role, were performed at BL 8.0. The measurements show that deviations from "dipole" expectations in angle-resolved valence photoemission are observable for photon energies down to at least 0.25 keV, and are quite significant at energies around 1 keV. From these results, it is clear that non-dipole angular-distribution effects may need to be considered in any application of angle-resolved photoelectron spectroscopy that uses x-ray photons of energies as low as a few hundred eV.

  12. Nonequilibrium dynamical cluster approximation study of the Falicov-Kimball model

    NASA Astrophysics Data System (ADS)

    Herrmann, Andreas J.; Tsuji, Naoto; Eckstein, Martin; Werner, Philipp

    2016-12-01

    We use a nonequilibrium implementation of the dynamical cluster approximation (DCA) to study the effect of short-range correlations on the dynamics of the two-dimensional Falicov-Kimball model after an interaction quench. As in the case of single-site dynamical mean-field theory, thermalization is absent in DCA simulations, and for quenches across the metal-insulator boundary, nearest-neighbor charge correlations in the nonthermal steady state are found to be larger than in the thermal state with identical energy. We investigate to what extent it is possible to define an effective temperature of the trapped state after a quench. Based on the ratio between the lesser and retarded Green's function, we conclude that a roughly thermal distribution is reached within the energy intervals corresponding to the momentum-patch dependent subbands of the spectral function. The effectively different chemical potentials of these distributions, however, lead to a very hot, or even negative, effective temperature in the energy intervals between these subbands.

  13. Microscopic justification of the equal filling approximation

    SciTech Connect

    Perez-Martin, Sara; Robledo, L. M.

    2008-07-15

    The equal filling approximation, a procedure widely used in mean-field calculations to treat the dynamics of odd nuclei in a time-reversal invariant way, is justified as the consequence of a variational principle over an average energy functional. The ideas of statistical quantum mechanics are employed in the justification. As an illustration of the method, the ground and lowest-lying states of some octupole deformed radium isotopes are computed.

  14. Successive intervals analysis of preference measures in a health status index.

    PubMed Central

    Blischke, W R; Bush, J W; Kaplan, R M

    1975-01-01

    The method of successive intervals, a procedure for obtaining equal intervals from category data, is applied to social preference data for a health status index. Several innovations are employed, including an approximate analysis of variance test for determining whether the intervals are of equal width, a regression model for estimating the width of the end intervals in finite scales, and a transformation to equalize interval widths and estimate item locations on the new scale. A computer program has been developed to process large data sets with a larger number of categories than previous programs. PMID:1219005

  15. Simultaneous confidence intervals for a steady-state leaky aquifer groundwater flow model

    USGS Publications Warehouse

    Christensen, S.; Cooley, R.L.

    1996-01-01

    Using the optimization method of Vecchia & Cooley (1987), nonlinear Scheffe-type confidence intervals were calculated for the parameters and the simulated heads of a steady-state groundwater flow model covering 450 km2 of a leaky aquifer. The nonlinear confidence intervals are compared to corresponding linear intervals. As suggested by the significant nonlinearity of the regression model, linear confidence intervals are often not accurate. The commonly made assumption that widths of linear confidence intervals always underestimate the actual (nonlinear) widths was not correct for the head intervals. Results show that nonlinear effects can cause the nonlinear intervals to be offset from, and either larger or smaller than, the linear approximations. Prior information on some transmissivities helps reduce and stabilize the confidence intervals, with the most notable effects occurring for the parameters on which there is prior information and for head values in parameter zones for which there is prior information on the parameters.

  16. Estimation of distribution algorithms with Kikuchi approximations.

    PubMed

    Santana, Roberto

    2005-01-01

    The question of finding feasible ways for estimating probability distributions is one of the main challenges for Estimation of Distribution Algorithms (EDAs). To estimate the distribution of the selected solutions, EDAs use factorizations constructed according to graphical models. The class of factorizations that can be obtained from these probability models is highly constrained. Expanding the class of factorizations that could be employed for probability approximation is a necessary step for the conception of more robust EDAs. In this paper we introduce a method for learning a more general class of probability factorizations. The method combines a reformulation of a probability approximation procedure known in statistical physics as the Kikuchi approximation of energy, with a novel approach for finding graph decompositions. We present the Markov Network Estimation of Distribution Algorithm (MN-EDA), an EDA that uses Kikuchi approximations to estimate the distribution, and Gibbs Sampling (GS) to generate new points. A systematic empirical evaluation of MN-EDA is done in comparison with different Bayesian network based EDAs. From our experiments we conclude that the algorithm can outperform other EDAs that use traditional methods of probability approximation in the optimization of functions with strong interactions among their variables.

  17. Approximating subtree distances between phylogenies.

    PubMed

    Bonet, Maria Luisa; St John, Katherine; Mahindru, Ruchi; Amenta, Nina

    2006-10-01

    We give a 5-approximation algorithm to the rooted Subtree-Prune-and-Regraft (rSPR) distance between two phylogenies, which was recently shown to be NP-complete. This paper presents the first approximation result for this important tree distance. The algorithm follows a standard format for tree distances. The novel ideas are in the analysis. In the analysis, the cost of the algorithm uses a "cascading" scheme that accounts for possible wrong moves. This accounting is missing from previous analysis of tree distance approximation algorithms. Further, we show how all algorithms of this type can be implemented in linear time and give experimental results.

  18. Dual approximations in optimal control

    NASA Technical Reports Server (NTRS)

    Hager, W. W.; Ianculescu, G. D.

    1984-01-01

    A dual approximation for the solution to an optimal control problem is analyzed. The differential equation is handled with a Lagrange multiplier while other constraints are treated explicitly. An algorithm for solving the dual problem is presented.

  19. Mathematical algorithms for approximate reasoning

    NASA Technical Reports Server (NTRS)

    Murphy, John H.; Chay, Seung C.; Downs, Mary M.

    1988-01-01

    Most state of the art expert system environments contain a single and often ad hoc strategy for approximate reasoning. Some environments provide facilities to program the approximate reasoning algorithms. However, the next generation of expert systems should have an environment which contains a choice of several mathematical algorithms for approximate reasoning. To meet the need for validatable and verifiable coding, the expert system environment must no longer depend upon ad hoc reasoning techniques but instead must include mathematically rigorous techniques for approximate reasoning. Popular approximate reasoning techniques are reviewed, including: certainty factors, belief measures, Bayesian probabilities, fuzzy logic, and Shafer-Dempster techniques for reasoning. The focus is on a group of mathematically rigorous algorithms for approximate reasoning that could form the basis of a next-generation expert system environment. These algorithms are based upon the axioms of set theory and probability theory. To separate these algorithms for approximate reasoning, various conditions of mutual exclusivity and independence are imposed upon the assertions. Approximate reasoning algorithms presented include: reasoning with statistically independent assertions, reasoning with mutually exclusive assertions, reasoning with assertions that exhibit minimum overlay within the state space, reasoning with assertions that exhibit maximum overlay within the state space (i.e. fuzzy logic), pessimistic reasoning (i.e. worst case analysis), optimistic reasoning (i.e. best case analysis), and reasoning with assertions with absolutely no knowledge of the possible dependency among the assertions. A robust environment for expert system construction should include the two modes of inference: modus ponens and modus tollens. Modus ponens inference is based upon reasoning towards the conclusion in a statement of logical implication, whereas modus tollens inference is based upon reasoning away from the conclusion.
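
    One plausible reading of the limiting combination rules named above, for the conjunction of two assertions with probabilities p and q (the mapping to the abstract's terminology is an assumption):

```python
# Limiting rules for P(A and B) given only P(A) = p and P(B) = q.
def independent(p, q):        return p * q                   # statistical independence
def mutually_exclusive(p, q): return 0.0                     # assertions cannot co-occur
def fuzzy_and(p, q):          return min(p, q)               # maximum overlay (fuzzy logic)
def worst_case(p, q):         return max(p + q - 1.0, 0.0)   # Frechet lower bound

p, q = 0.8, 0.7
for rule in (independent, mutually_exclusive, fuzzy_and, worst_case):
    print(rule.__name__, rule(p, q))
```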

  20. Exponential approximations in optimal design

    NASA Technical Reports Server (NTRS)

    Belegundu, A. D.; Rajan, S. D.; Rajgopal, J.

    1990-01-01

    One-point and two-point exponential functions have been developed and proved to be very effective approximations of structural response. The exponential has been compared to the linear, reciprocal and quadratic fit methods. Four test problems in structural analysis have been selected. The use of such approximations is attractive in structural optimization to reduce the numbers of exact analyses which involve computationally expensive finite element analysis.
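
    A sketch of one common form of the two-point exponential approximation (the paper's exact formulation may differ): the exponent is chosen so that a power-law fit through one point also matches the response at a second.

```python
import numpy as np

def two_point_exponential(f, x0, x1):
    """Two-point exponential approximation f(x) ~ f(x0) * (x/x0)**p, with
    exponent p chosen so the approximation also matches f at x1."""
    p = np.log(f(x1) / f(x0)) / np.log(x1 / x0)
    return lambda x: f(x0) * (x / x0)**p, p

# Tip displacement of a beam scales like 1/I: the exponential form recovers
# the reciprocal behavior that a linear or quadratic fit would miss.
f = lambda I: 1.0 / I
approx, p = two_point_exponential(f, 1.0, 2.0)
print(p)                      # -1.0: the exact exponent is recovered
print(approx(1.5), f(1.5))    # matches everywhere for this response
```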

  1. Energy

    DTIC Science & Technology

    2003-01-01

    Canada, Britain, and Spain. We found that the energy industry is not in crisis; however, U.S. government policies, laws, dollars, and even public... CIEMAT (Centro de Investigaciones Energeticas, Medioambientales y Tecnologicas): research and development... procurement or storage of standard, common use fuels. NATURAL GAS: Natural gas, abundant globally and domestically, offers energy versatility among...

  2. Confidence Intervals for Error Rates Observed in Coded Communications Systems

    NASA Astrophysics Data System (ADS)

    Hamkins, J.

    2015-05-01

    We present methods to compute confidence intervals for the codeword error rate (CWER) and bit error rate (BER) of a coded communications link. We review several methods to compute exact and approximate confidence intervals for the CWER, and specifically consider the situation in which the true CWER is so low that only a handful, if any, codeword errors are able to be simulated. In doing so, we answer the question of how long an error-free simulation must be run in order to certify that a given CWER requirement is met with a given level of confidence, and discuss the bias introduced by aborting a simulation after observing the first codeword error. Next, we turn to the lesser studied problem of determining confidence intervals for the BER of coded systems. Since bit errors in systems that use coding or higher-order modulation do not occur independently, blind application of a method that assumes independence leads to inappropriately narrow confidence intervals. We present a new method to compute the confidence interval properly, using the first and second sample moments of the number of bit errors per codeword. This is the first method we know of to compute a confidence interval for the BER of a coded or higher-order modulation system.
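
    A minimal version of the exact (Clopper-Pearson) CWER interval and of the error-free run-length question; the moment-based BER method for dependent bit errors is not reproduced here:

```python
from scipy.stats import beta
import math

def cwer_ci(errors, trials, conf=0.95):
    """Exact (Clopper-Pearson) confidence interval for a codeword error
    rate, from `errors` observed in `trials` simulated codewords."""
    a = 1.0 - conf
    lo = beta.ppf(a/2, errors, trials - errors + 1) if errors > 0 else 0.0
    hi = beta.ppf(1 - a/2, errors + 1, trials - errors) if errors < trials else 1.0
    return lo, hi

print(cwer_ci(0, 3_000_000))   # error-free run: upper limit ~1.2e-6

# How many error-free codewords certify CWER <= p_req at confidence conf?
p_req, conf = 1e-6, 0.95
print(math.ceil(math.log(1 - conf) / math.log(1 - p_req)))   # ~3.0e6 ("rule of 3")
```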

  3. An Event Restriction Interval Theory of Tense

    ERIC Educational Resources Information Center

    Beamer, Brandon Robert

    2012-01-01

    This dissertation presents a novel theory of tense and tense-like constructions. It is named after a key theoretical component of the theory, the event restriction interval. In Event Restriction Interval (ERI) Theory, sentences are semantically evaluated relative to an index which contains two key intervals, the evaluation interval and the event…

  4. Closed-form fiducial confidence intervals for some functions of independent binomial parameters with comparisons.

    PubMed

    Krishnamoorthy, K; Lee, Meesook; Zhang, Dan

    2017-02-01

    Approximate closed-form confidence intervals (CIs) for estimating the difference, relative risk, odds ratio, and linear combination of proportions are proposed. These CIs are developed using the fiducial approach and the modified normal-based approximation to the percentiles of a linear combination of independent random variables. These confidence intervals are easy to calculate as the computation requires only the percentiles of beta distributions. The proposed confidence intervals are compared with the popular score confidence intervals with respect to coverage probabilities and expected widths. Comparison studies indicate that the proposed confidence intervals are comparable with the corresponding score confidence intervals, and better in some cases, for all the problems considered. The methods are illustrated using several examples.

  5. Approximation of Failure Probability Using Conditional Sampling

    NASA Technical Reports Server (NTRS)

    Giesy. Daniel P.; Crespo, Luis G.; Kenney, Sean P.

    2008-01-01

    In analyzing systems which depend on uncertain parameters, one technique is to partition the uncertain parameter domain into a failure set and its complement, and judge the quality of the system by estimating the probability of failure. If this is done by a sampling technique such as Monte Carlo and the probability of failure is small, accurate approximation can require so many sample points that the computational expense is prohibitive. Previous work of the authors has shown how to bound the failure event by sets of such simple geometry that their probabilities can be calculated analytically. In this paper, it is shown how to make use of these failure bounding sets and conditional sampling within them to substantially reduce the computational burden of approximating failure probability. It is also shown how the use of these sampling techniques improves the confidence intervals for the failure probability estimate for a given number of sample points and how they reduce the number of sample point analyses needed to achieve a given level of confidence.
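
    A toy illustration of the bounding-set-plus-conditional-sampling idea, with a hypothetical failure disk inside the unit square (the authors' bounding constructions are more general):

```python
import numpy as np

# The failure set (a small disk) is known to lie inside a box B whose
# probability is computed analytically; samples are drawn only inside B.
rng = np.random.default_rng(0)
center, r = np.array([0.8, 0.8]), 0.01          # hypothetical failure disk
lo, hi = center - 2*r, center + 2*r             # bounding box B
p_B = (4*r)**2                                  # P(B) under Unif([0,1]^2), analytic

n = 100_000
u = rng.uniform(lo, hi, size=(n, 2))            # conditional sampling within B
fail = np.sum((u - center)**2, axis=1) <= r**2
p_fail = p_B * fail.mean()                      # P(fail) = P(B) * P(fail | B)

print(p_fail, np.pi * r**2)                     # estimate vs exact pi*r^2
# Plain Monte Carlo over [0,1]^2 would see only ~31 failures in 100,000
# samples; here every sample lands in B, so the estimate is far tighter.
```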

  6. JIMWLK evolution in the Gaussian approximation

    NASA Astrophysics Data System (ADS)

    Iancu, E.; Triantafyllopoulos, D. N.

    2012-04-01

    We demonstrate that the Balitsky-JIMWLK equations describing the high-energy evolution of the n-point functions of the Wilson lines (the QCD scattering amplitudes in the eikonal approximation) admit a controlled mean field approximation of the Gaussian type, for any value of the number of colors N_c. This approximation is strictly correct in the weak scattering regime at relatively large transverse momenta, where it reproduces the BFKL dynamics, and in the strong scattering regime deeply at saturation, where it properly describes the evolution of the scattering amplitudes towards the respective black disk limits. The approximation scheme is fully specified by giving the 2-point function (the S-matrix for a color dipole), which in turn can be related to the solution to the Balitsky-Kovchegov equation, including at finite N_c. Any higher n-point function with n ≥ 4 can be computed in terms of the dipole S-matrix by solving a closed system of evolution equations (a simplified version of the respective Balitsky-JIMWLK equations) which are local in the transverse coordinates. For simple configurations of the projectile in the transverse plane, our new results for the 4-point and the 6-point functions coincide with the high-energy extrapolations of the respective results in the McLerran-Venugopalan model. One cornerstone of our construction is a symmetry property of the JIMWLK evolution, that we notice here for the first time: the fact that, with increasing energy, a hadron is expanding its longitudinal support symmetrically around the light-cone. This corresponds to invariance under time reversal for the scattering amplitudes.

  7. Practical Scheffe-type credibility intervals for variables of a groundwater model

    USGS Publications Warehouse

    Cooley, R.L.

    1999-01-01

    Simultaneous Scheffe-type credibility intervals (the Bayesian version of confidence intervals) for variables of a groundwater flow model calibrated using a Bayesian maximum a posteriori procedure were derived by Cooley [1993b]. It was assumed that variances reflecting the expected differences between observed and model-computed quantities used to calibrate the model are known, whereas they would often be unknown for an actual model. In this study the variances are regarded as unknown, and variance variability from observation to observation is approximated by grouping the data so that each group is characterized by a uniform variance. The credibility intervals are calculated from the posterior distribution, which was developed by considering each group variance to be a random variable about which nothing is known a priori, then eliminating it by integration. Numerical experiments using two test problems illustrate some characteristics of the credibility intervals. Nonlinearity of the statistical model greatly affected some of the credibility intervals, indicating that credibility intervals computed using the standard linear model approximation may often be inadequate to characterize uncertainty for actual field problems. The parameter characterizing the probability level for the credibility intervals was, however, accurately computed using a linear model approximation, as compared with values calculated using second-order and fully nonlinear formulations. This allows the credibility intervals to be computed very efficiently.

  8. Confidence Intervals for True Scores Using the Skew-Normal Distribution

    ERIC Educational Resources Information Center

    Garcia-Perez, Miguel A.

    2010-01-01

    A recent comparative analysis of alternative interval estimation approaches and procedures has shown that confidence intervals (CIs) for true raw scores determined with the Score method--which uses the normal approximation to the binomial distribution--have actual coverage probabilities that are closest to their nominal level. It has also recently…

  9. A COMPARISON OF CONFIDENCE INTERVAL PROCEDURES IN CENSORED LIFE TESTING PROBLEMS.

    DTIC Science & Technology

    Obtaining a confidence interval for a parameter lambda of an exponential distribution is a frequent occurrence in life testing problems. Oftentimes...the test plan used is one in which all the observations are censored at the same time point. Several approximate confidence interval procedures are

  10. Product-State Approximations to Quantum States

    NASA Astrophysics Data System (ADS)

    Brandão, Fernando G. S. L.; Harrow, Aram W.

    2016-02-01

    We show that for any many-body quantum state there exists an unentangled quantum state such that most of the two-body reduced density matrices are close to those of the original state. This is a statement about the monogamy of entanglement, which cannot be shared without limit in the same way as classical correlation. Our main application is to Hamiltonians that are sums of two-body terms. For such Hamiltonians we show that there exist product states with energy that is close to the ground-state energy whenever the interaction graph of the Hamiltonian has high degree. This proves the validity of mean-field theory and gives an explicitly bounded approximation error. If we allow states that are entangled within small clusters of systems but product across clusters then good approximations exist when the Hamiltonian satisfies one or more of the following properties: (1) high degree, (2) small expansion, or (3) a ground state where the blocks in the partition have sublinear entanglement. Previously this was known only in the case of small expansion or in the regime where the entanglement was close to zero. Our approximations allow an extensive error in energy, which is the scale considered by the quantum PCP (probabilistically checkable proof) and NLTS (no low-energy trivial-state) conjectures. Thus our results put restrictions on the possible Hamiltonians that could be used for a possible proof of the qPCP or NLTS conjectures. By contrast the classical PCP constructions are often based on constraint graphs with high degree. Likewise we show that the parallel repetition that is possible with classical constraint satisfaction problems cannot also be possible for quantum Hamiltonians, unless qPCP is false. The main technical tool behind our results is a collection of new classical and quantum de Finetti theorems which do not make any symmetry assumptions on the underlying states.

  11. Rational approximations for tomographic reconstructions

    NASA Astrophysics Data System (ADS)

    Reynolds, Matthew; Beylkin, Gregory; Monzón, Lucas

    2013-06-01

    We use optimal rational approximations of projection data collected in x-ray tomography to improve image resolution. Under the assumption that the object of interest is described by functions with jump discontinuities, for each projection we construct its rational approximation with a small (near optimal) number of terms for a given accuracy threshold. This allows us to augment the measured data, i.e., double the number of available samples in each projection or, equivalently, extend (double) the domain of their Fourier transform. We also develop a new, fast, polar coordinate Fourier domain algorithm which uses our nonlinear approximation of projection data in a natural way. Using augmented projections of the Shepp-Logan phantom, we provide a comparison between the new algorithm and the standard filtered back-projection algorithm. We demonstrate that the reconstructed image has improved resolution without additional artifacts near sharp transitions in the image.

  12. Gadgets, approximation, and linear programming

    SciTech Connect

    Trevisan, L.; Sudan, M.; Sorkin, G.B.; Williamson, D.P.

    1996-12-31

    We present a linear-programming based method for finding "gadgets", i.e., combinatorial structures reducing constraints of one optimization problem to constraints of another. A key step in this method is a simple observation which limits the search space to a finite one. Using this new method we present a number of new, computer-constructed gadgets for several different reductions. This method also answers a previously posed question of how to prove the optimality of gadgets: we show how LP duality gives such proofs. The new gadgets improve hardness results for MAX CUT and MAX DICUT, showing that approximating these problems to within factors of 60/61 and 44/45 respectively is NP-hard. We also use the gadgets to obtain an improved approximation algorithm for MAX 3SAT which guarantees an approximation ratio of .801. This improves upon the previous best bound of .7704.

  13. Adaptive approximation models in optimization

    SciTech Connect

    Voronin, A.N.

    1995-05-01

    The paper proposes a method for optimization of functions of several variables that substantially reduces the number of objective function evaluations compared to traditional methods. The method is based on the property of iterative refinement of approximation models of the optimand function in approximation domains that contract to the extremum point. It does not require subjective specification of the starting point, step length, or other parameters of the search procedure. The method is designed for efficient optimization of unimodal functions of several (not more than 10-15) variables and can be applied to find the global extremum of polymodal functions and also for optimization of scalarized forms of vector objective functions.

  14. Approximating spatially exclusive invasion processes

    NASA Astrophysics Data System (ADS)

    Ross, Joshua V.; Binder, Benjamin J.

    2014-05-01

    A number of biological processes, such as invasive plant species and cell migration, are composed of two key mechanisms: motility and reproduction. Due to the spatially exclusive interacting behavior of these processes a cellular automata (CA) model is specified to simulate a one-dimensional invasion process. Three (independence, Poisson, and 2D-Markov chain) approximations are considered that attempt to capture the average behavior of the CA. We show that our 2D-Markov chain approximation accurately predicts the state of the CA for a wide range of motility and reproduction rates.
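
    A minimal sketch of such a CA, with assumed motility and reproduction rates and strict spatial exclusion (the paper's exact update rule may differ):

```python
import numpy as np

def invasion_ca(n=200, steps=400, p_move=0.5, p_repro=0.2, seed=0):
    """1-D spatially exclusive CA: each occupied site may reproduce into an
    empty neighbor, or else move there; occupied targets block both events."""
    rng = np.random.default_rng(seed)
    occ = np.zeros(n, dtype=bool)
    occ[:5] = True                          # initially invaded region
    for _ in range(steps):
        for i in rng.permutation(np.flatnonzero(occ)):
            j = i + rng.choice((-1, 1))     # random neighbor
            if 0 <= j < n and not occ[j]:   # exclusion: only empty targets
                if rng.random() < p_repro:
                    occ[j] = True           # reproduce into the empty site
                elif rng.random() < p_move:
                    occ[i], occ[j] = False, True   # move
    return occ

occ = invasion_ca()
print(occ.sum(), np.flatnonzero(occ).max())   # population size, invasion front
```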

  15. Heat pipe transient response approximation.

    SciTech Connect

    Reid, R. S.

    2001-01-01

    A simple and concise routine that approximates the response of an alkali metal heat pipe to changes in evaporator heat transfer rate is described. This analytically based routine is compared with data from a cylindrical heat pipe with a crescent-annular wick that undergoes gradual (quasi-steady) transitions through the viscous and condenser boundary heat transfer limits. The sonic heat transfer limit can also be incorporated into this routine for heat pipes with more closely coupled condensers. The advantages and obvious limitations of this approach are discussed. For reference, a source code listing for the approximation appears at the end of this paper.

  16. Second Approximation to Conical Flows

    DTIC Science & Technology

    1950-12-01

    [Scanned report; the abstract is largely lost to OCR damage. Recoverable content: in the iterative scheme, each approximation depends on the preceding one, and here the second approximation, i.e., the second-order terms, is computed from the isentropic equations of motion. The equations themselves are not recoverable from the scan.]

  17. Digital redesign of uncertain interval systems based on time-response resemblance via particle swarm optimization.

    PubMed

    Hsu, Chen-Chien; Lin, Geng-Yu

    2009-07-01

    In this paper, a particle swarm optimization (PSO) based approach is proposed to derive an optimal digital controller for redesigned digital systems having an interval plant based on time-response resemblance of the closed-loop systems. Because of difficulties in obtaining time-response envelopes for interval systems, the design problem is formulated as an optimization problem of a cost function in terms of aggregated deviation between the step responses corresponding to extremal energies of the redesigned digital system and those of their continuous counterpart. A proposed evolutionary framework incorporating three PSOs is subsequently presented to minimize the cost function to derive an optimal set of parameters for the digital controller, so that step response sequences corresponding to the extremal sequence energy of the redesigned digital system suitably approximate those of their continuous counterpart under the perturbation of the uncertain plant parameters. Computer simulations have shown that redesigned digital systems incorporating the PSO-derived digital controllers have better system performance than those using conventional open-loop discretization methods.
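
    A generic global-best PSO sketch (not the paper's three-PSO framework, and with a stand-in cost function in place of the step-response deviation):

```python
import numpy as np

def pso(cost, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5,
        bounds=(-5.0, 5.0), seed=0):
    """Minimal global-best particle swarm optimizer."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))     # positions
    v = np.zeros_like(x)                            # velocities
    pbest, pbest_f = x.copy(), np.array([cost(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()              # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w*v + c1*r1*(pbest - x) + c2*r2*(g - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([cost(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()

# An aggregated squared step-response deviation would go here as `cost`.
print(pso(lambda p: np.sum(p**2), dim=4))   # converges toward the origin
```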

  18. Existence and uniqueness results for neural network approximations.

    PubMed

    Williamson, R C; Helmke, U

    1995-01-01

    Some approximation theoretic questions concerning a certain class of neural networks are considered. The networks considered are single input, single output, single hidden layer, feedforward neural networks with continuous sigmoidal activation functions, no input weights but with hidden layer thresholds and output layer weights. Specifically, questions of existence and uniqueness of best approximations on a closed interval of the real line under mean-square and uniform approximation error measures are studied. A by-product of this study is a reparametrization of the class of networks considered in terms of rational functions of a single variable. This rational reparametrization is used to apply the theory of Pade approximation to the class of networks considered. In addition, a question related to the number of local minima arising in gradient algorithms for learning is examined.

  19. Pythagorean Approximations and Continued Fractions

    ERIC Educational Resources Information Center

    Peralta, Javier

    2008-01-01

    In this article, we will show that the Pythagorean approximations of [the square root of] 2 coincide with those achieved in the 16th century by means of continued fractions. Assuming this fact and the known relation that connects the Fibonacci sequence with the golden section, we shall establish a procedure to obtain sequences of rational numbers…
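
    The convergents in question, computed directly from the continued fraction expansion sqrt(2) = [1; 2, 2, 2, ...]:

```python
from fractions import Fraction

# Convergents of sqrt(2): the same rationals as the classical Pythagorean
# side/diagonal approximations (3/2, 7/5, 17/12, ...).
def sqrt2_convergents(n):
    convs, frac = [], Fraction(0)
    for _ in range(n):
        frac = Fraction(1, 2 + frac)    # deepen the continued fraction tail
        convs.append(1 + frac)
    return convs

for c in sqrt2_convergents(5):
    print(c, float(c))   # 3/2, 7/5, 17/12, 41/29, 99/70 -> 1.41421...
```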

  20. Orders on Intervals Over Partially Ordered Sets: Extending Allen's Algebra and Interval Graph Results

    SciTech Connect

    Zapata, Francisco; Kreinovich, Vladik; Joslyn, Cliff A.; Hogan, Emilie A.

    2013-08-01

    To make a decision, we need to compare the values of quantities. In many practical situations, we know the values with interval uncertainty. In such situations, we need to compare intervals. Allen’s algebra describes all possible relations between intervals on the real line, and ordering relations between such intervals are well studied. In this paper, we extend this description to intervals in an arbitrary partially ordered set (poset). In particular, we explicitly describe ordering relations between intervals that generalize relation between points. As auxiliary results, we provide a logical interpretation of the relation between intervals, and extend the results about interval graphs to intervals over posets.

  1. Analytic approximate radiation effects due to Bremsstrahlung

    SciTech Connect

    Ben-Zvi I.

    2012-02-01

    The purpose of this note is to provide analytic approximate expressions that can give quick estimates of the various effects of the Bremsstrahlung radiation produced by relatively low energy electrons, such as the dumping of the beam into the beam stop at the ERL or field emission in superconducting cavities. The purpose of this work is not to replace a dependable calculation or, better yet, a measurement under real conditions, but to provide a quick but approximate estimate for guidance purposes only. These effects include dose to personnel, ozone generation in the air volume exposed to the radiation, hydrogen generation in the beam dump water cooling system, and radiation damage to nearby magnets. These expressions can be used for other purposes, but one should note that the electron beam energy range is limited. In these calculations the good range is from about 0.5 MeV to 10 MeV. To help in the application of this note, calculations are presented as a worked-out example for the beam dump of the R&D Energy Recovery Linac.

  2. Pigeons' Choices between Fixed-Interval and Random-Interval Schedules: Utility of Variability?

    ERIC Educational Resources Information Center

    Andrzejewski, Matthew E.; Cardinal, Claudia D.; Field, Douglas P.; Flannery, Barbara A.; Johnson, Michael; Bailey, Kathleen; Hineline, Philip N.

    2005-01-01

    Pigeons' choosing between fixed-interval and random-interval schedules of reinforcement was investigated in three experiments using a discrete-trial procedure. In all three experiments, the random-interval schedule was generated by sampling a probability distribution at an interval (and in multiples of the interval) equal to that of the…

  3. Counting independent sets using the Bethe approximation

    SciTech Connect

    Chertkov, Michael; Chandrasekaran, V.; Gamarnik, D.; Shah, D.; Shin, J.

    2009-01-01

    The authors consider the problem of counting the number of independent sets, or the partition function of a hard-core model, in a graph. The problem in general is computationally hard (#P-hard). They study the quality of the approximation provided by the Bethe free energy. Belief propagation (BP) is a message-passing algorithm that can be used to compute fixed points of the Bethe approximation; however, BP is not always guaranteed to converge. As the first result, they propose a simple message-passing algorithm that converges to a BP fixed point for any graph. They find that their algorithm converges to within a multiplicative error 1 + epsilon of a fixed point in O(n^2 epsilon^-4 log^3(n epsilon^-1)) iterations for any bounded degree graph of n nodes. In a nutshell, the algorithm can be thought of as a modification of BP with 'time-varying' message-passing. Next, they analyze the resulting error in the number of independent sets provided by such a fixed point of the Bethe approximation. Using the recently developed loop calculus approach of Chertkov and Chernyak, they establish that for any bounded degree graph with large enough girth, the error is O(n^-gamma) for some gamma > 0. As an application, they find that for a random 3-regular graph, the Bethe approximation of the log-partition function (the log of the number of independent sets) is within o(1) of the correct log-partition function; this is quite surprising, as previous physics-based predictions were expecting an error of O(n). In sum, their results provide a systematic way to find Bethe fixed points for any graph quickly and allow for estimating the error in the Bethe approximation using novel combinatorial techniques.

  4. Relativistic Random Phase Approximation At Finite Temperature

    SciTech Connect

    Niu, Y. F.; Paar, N.; Vretenar, D.; Meng, J.

    2009-08-26

    The fully self-consistent finite-temperature relativistic random phase approximation (FTRRPA) has been established in the single-nucleon basis of the finite-temperature Dirac-Hartree model (FTDH), based on an effective Lagrangian with density-dependent meson-nucleon couplings. Illustrative calculations in the FTRRPA framework show the evolution of the multipole responses of ^132Sn with temperature. With increased temperature, additional transitions appear in the low-energy region of both the monopole and dipole strength distributions, due to newly opened particle-particle and hole-hole transition channels.

  5. Scaling and memory in the return intervals of realized volatility

    NASA Astrophysics Data System (ADS)

    Ren, Fei; Gu, Gao-Feng; Zhou, Wei-Xing

    2009-11-01

    We perform return interval analysis of 1-min realized volatility, defined by the sum of absolute high-frequency intraday returns, for the Shanghai Stock Exchange Composite Index (SSEC) and 22 constituent stocks of the SSEC. The scaling behavior and memory effect of the return intervals between successive realized volatilities above a certain threshold q are carefully investigated. In comparison with the volatility defined by the tick prices closest to the minute marks, the return interval distribution for the realized volatility shows better scaling behavior, since 20 of the 22 stocks and the SSEC pass the Kolmogorov-Smirnov (KS) test and exhibit scaling behaviors, among which the scaling function for 8 stocks is well approximated by a stretched exponential distribution, as revealed by the KS goodness-of-fit test at the 5% significance level. The improved scaling behavior is further confirmed by the relation between the fitted exponent γ and the threshold q. In addition, the similarity of the return interval distributions for different stocks is also observed for the realized volatility. The investigation of the conditional probability distribution and detrended fluctuation analysis (DFA) shows that both short-term and long-term memory exist in the return intervals of realized volatility.
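
    As an illustration of the basic procedure described above, the hedged sketch below computes return intervals between threshold exceedances of a volatility series and rescales them by the mean interval. Synthetic data stands in for the SSEC realized volatility, and the threshold convention (multiples of the standard deviation) is an assumption.

```python
# Hedged sketch of the return-interval computation: waiting times between
# successive values exceeding a threshold q, rescaled by the mean interval.
import numpy as np

rng = np.random.default_rng(0)
vol = np.abs(rng.standard_normal(100_000))          # stand-in volatility series
q = 2.0                                             # threshold in units of the std dev.
exceed = np.flatnonzero(vol > q * vol.std())
intervals = np.diff(exceed)                         # return intervals tau
tau_mean = intervals.mean()
scaled = intervals / tau_mean                       # scaling ansatz: P_q(tau) = f(tau/<tau>)/<tau>
hist, edges = np.histogram(scaled, bins=50, density=True)
print(f"<tau> = {tau_mean:.1f}, P(tau/<tau> > 3) = {np.mean(scaled > 3):.4f}")
```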

  6. Testing the frozen flow approximation

    NASA Technical Reports Server (NTRS)

    Lucchin, Francesco; Matarrese, Sabino; Melott, Adrian L.; Moscardini, Lauro

    1993-01-01

    We investigate the accuracy of the frozen-flow approximation (FFA), recently proposed by Matarrese et al. (1992), for following the nonlinear evolution of cosmological density fluctuations under gravitational instability. We compare a number of statistics between results of the FFA and N-body simulations, including those used by Melott, Pellman & Shandarin (1993) to test the Zel'dovich approximation. The FFA performs reasonably well in a statistical sense, e.g. in reproducing the counts-in-cells distribution at small scales, but it does poorly in the cross-correlation with N-body results, which means it is generally not moving mass to the right place, especially in models with high small-scale power.

  7. Potential of the approximation method

    SciTech Connect

    Amano, K.; Maruoka, A.

    1996-12-31

    Developing some techniques for the approximation method, we establish precise versions of the following statements concerning lower bounds for circuits that detect cliques of size s in a graph with m vertices: for 5 ≤ s ≤ m/4, a monotone circuit computing CLIQUE(m, s) contains at least (1/2)·1.8^min(√s − 1/2, m/(4s)) gates; if a non-monotone circuit computes CLIQUE using a "small" amount of negation, then the circuit contains an exponential number of gates. The former is proved very simply using the so-called bottleneck counting argument within the framework of approximation, whereas the latter is verified by introducing a notion of restricting negation and generalizing the sunflower contraction.

  8. Dissimilar Physiological and Perceptual Responses Between Sprint Interval Training and High-Intensity Interval Training.

    PubMed

    Wood, Kimberly M; Olive, Brittany; LaValle, Kaylyn; Thompson, Heather; Greer, Kevin; Astorino, Todd A

    2016-01-01

    High-intensity interval training (HIIT) and sprint interval training (SIT) elicit similar cardiovascular and metabolic adaptations vs. endurance training. No study, however, has investigated acute physiological changes during HIIT vs. SIT. This study compared acute changes in heart rate (HR), blood lactate concentration (BLa), oxygen uptake (VO2), affect, and rating of perceived exertion (RPE) during HIIT and SIT. Active adults (4 women and 8 men, age = 24.2 ± 6.2 years) initially performed a VO2max test to determine the workload for both sessions on the cycle ergometer; session order was randomized. Sprint interval training consisted of 8 bouts of 30 seconds of all-out cycling at 130% of maximum Watts (Wmax). High-intensity interval training consisted of eight 60-second bouts at 85% Wmax. Heart rate, VO2, BLa, affect, and RPE were continuously assessed throughout exercise. Repeated-measures analysis of variance revealed a significant difference between HIIT and SIT for VO2 (p < 0.001), HR (p < 0.001), RPE (p = 0.03), and BLa (p = 0.049). Conversely, there was no significant difference between regimens for affect (p = 0.12). Energy expenditure was significantly higher (p = 0.02) in HIIT (209.3 ± 40.3 kcal) vs. SIT (193.5 ± 39.6 kcal). During HIIT, subjects burned significantly more calories and reported lower perceived exertion than during SIT. The higher VO2 and lower BLa in HIIT vs. SIT reflect dissimilar metabolic perturbation between regimens, which may elicit unique long-term adaptations. If an individual is seeking to burn slightly more calories, maintain a higher oxygen uptake, and perceive less exertion during exercise, HIIT is the recommended routine.

  9. Nonlinear Filtering and Approximation Techniques

    DTIC Science & Technology

    1991-09-01

    Shwartz), Academic Press (1991). [19] M.C. ROUBAUD, Filtrage linéaire par morceaux avec petit bruit d'observation [Piecewise linear filtering with small observation noise], Thèse, Université de Provence (1990)...Kernel System (GKS), Academic Press (1983). [8] H.J. KUSHNER, Probability methods for approximations in stochastic control and for elliptic equations, Academic Press (1977). [9] F. LE GLAND, Time discretization of nonlinear filtering equations, in: 28th IEEE CDC, Tampa, pp. 2601-2606, IEEE Press (1989

  10. Reliable Function Approximation and Estimation

    DTIC Science & Technology

    2016-08-16

    Journal on Mathematical Analysis 47 (6), 2015, 4606-4629. (P3) The Sample Complexity of Weighted Sparse Approximation. B. Bah and R. Ward. IEEE...solving systems of quadratic equations. S. Sanghavi, C. White, and R. Ward. Results in Mathematics, 2016. (O5) Relax, no need to round: Integrality of...Theoretical Computer Science. (O6) A unified framework for linear dimensionality reduction in L1. F. Krahmer and R. Ward. Results in Mathematics, 2014, 1-23

  11. Energy.

    ERIC Educational Resources Information Center

    Shanebrook, J. Richard

    This document describes a course designed to acquaint students with the many societal and technological problems facing the United States and the world due to the increasing demand for energy. The course begins with a writing assignment that involves readings on the environmental philosophy of Native Americans and the Chernobyl catastrophe.…

  12. Approximate Counting of Graphical Realizations.

    PubMed

    Erdős, Péter L; Kiss, Sándor Z; Miklós, István; Soukup, Lajos

    2015-01-01

    In 1999 Kannan, Tetali and Vempala proposed a MCMC method to uniformly sample all possible realizations of a given graphical degree sequence and conjectured its rapidly mixing nature. Recently their conjecture was proved affirmative for regular graphs (by Cooper, Dyer and Greenhill, 2007), for regular directed graphs (by Greenhill, 2011) and for half-regular bipartite graphs (by Miklós, Erdős and Soukup, 2013). Several heuristics on counting the number of possible realizations exist (via sampling processes), and while they work well in practice, so far no approximation guarantees exist for such an approach. This paper is the first to develop a method for counting realizations with provable approximation guarantee. In fact, we solve a slightly more general problem; besides the graphical degree sequence a small set of forbidden edges is also given. We show that for the general problem (which contains the Greenhill problem and the Miklós, Erdős and Soukup problem as special cases) the derived MCMC process is rapidly mixing. Further, we show that this new problem is self-reducible; therefore, it provides a fully polynomial randomized approximation scheme (a.k.a. FPRAS) for counting all realizations.
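
    Swap-based chains of this kind are compact to sketch. The code below is a hedged illustration of the degree-preserving double edge swap that underlies such MCMC samplers, with a crude rejection test for forbidden edges. It is a simplified chain in the spirit of the construction, not the analyzed algorithm, so the mixing-time guarantees of the paper do not transfer to it.

```python
# Hedged sketch of a degree-preserving double edge swap chain: pick two edges
# {a,b}, {c,d} and rewire to {a,c}, {b,d} when the result stays a simple graph
# avoiding the forbidden set. Vertex degrees are invariant under each swap.
import random

def edge_swap_chain(edges, steps, forbidden=frozenset(), seed=1):
    rng = random.Random(seed)
    edges = {frozenset(e) for e in edges}
    for _ in range(steps):
        e1, e2 = rng.sample(sorted(tuple(sorted(e)) for e in edges), 2)
        (a, b), (c, d) = e1, e2
        if rng.random() < 0.5:                 # choose one of the two rewirings
            c, d = d, c
        new1, new2 = frozenset((a, c)), frozenset((b, d))
        if (len(new1) == 2 and len(new2) == 2            # no self-loops
                and {new1, new2}.isdisjoint(edges)       # no multi-edges
                and {new1, new2}.isdisjoint(forbidden)):
            edges -= {frozenset(e1), frozenset(e2)}
            edges |= {new1, new2}
    return edges

g = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
print(sorted(tuple(sorted(e)) for e in edge_swap_chain(g, 1000)))
```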

  13. Approximate Counting of Graphical Realizations

    PubMed Central

    2015-01-01

    In 1999 Kannan, Tetali and Vempala proposed a MCMC method to uniformly sample all possible realizations of a given graphical degree sequence and conjectured its rapidly mixing nature. Recently their conjecture was proved affirmative for regular graphs (by Cooper, Dyer and Greenhill, 2007), for regular directed graphs (by Greenhill, 2011) and for half-regular bipartite graphs (by Miklós, Erdős and Soukup, 2013). Several heuristics on counting the number of possible realizations exist (via sampling processes), and while they work well in practice, so far no approximation guarantees exist for such an approach. This paper is the first to develop a method for counting realizations with provable approximation guarantee. In fact, we solve a slightly more general problem; besides the graphical degree sequence a small set of forbidden edges is also given. We show that for the general problem (which contains the Greenhill problem and the Miklós, Erdős and Soukup problem as special cases) the derived MCMC process is rapidly mixing. Further, we show that this new problem is self-reducible; therefore, it provides a fully polynomial randomized approximation scheme (a.k.a. FPRAS) for counting all realizations. PMID:26161994

  14. Computer Experiments for Function Approximations

    SciTech Connect

    Chang, A; Izmailov, I; Rizzo, S; Wynter, S; Alexandrov, O; Tong, C

    2007-10-15

    This research project falls in the domain of response surface methodology, which seeks cost-effective ways to accurately fit an approximate function to experimental data. Modeling and computer simulation are essential tools in modern science and engineering. A computer simulation can be viewed as a function that receives input from a given parameter space and produces an output. Running the simulation repeatedly amounts to an equivalent number of function evaluations, and for complex models, such function evaluations can be very time-consuming. It is then of paramount importance to intelligently choose a relatively small set of sample points in the parameter space at which to evaluate the given function, and then use this information to construct a surrogate function that is close to the original function and takes little time to evaluate. This study was divided into two parts. The first part consisted of comparing four sampling methods and two function approximation methods in terms of efficiency and accuracy for simple test functions. The sampling methods used were Monte Carlo, Quasi-Random LPτ, Maximin Latin Hypercubes, and Orthogonal-Array-Based Latin Hypercubes. The function approximation methods utilized were Multivariate Adaptive Regression Splines (MARS) and Support Vector Machines (SVM). The second part of the study concerned adaptive sampling methods with a focus on creating useful sets of sample points specifically for monotonic functions, functions with a single minimum and functions with a bounded first derivative.
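
    The first part of the comparison is easy to prototype. The hedged sketch below contrasts plain Monte Carlo with Latin hypercube sampling for fitting an SVM surrogate to a simple test function; scikit-learn's SVR stands in for the SVM approximator, MARS and the orthogonal-array designs are omitted, and the test function is invented.

```python
# Hedged sketch: Monte Carlo vs. Latin hypercube designs for training a
# surrogate (SVR) of a cheap stand-in for an expensive simulation.
import numpy as np
from scipy.stats import qmc
from sklearn.svm import SVR

def test_function(X):                       # simple smooth stand-in function
    return np.sin(3 * X[:, 0]) + X[:, 1] ** 2

rng = np.random.default_rng(0)
n, d = 64, 2
X_mc = rng.random((n, d))                           # Monte Carlo design
X_lhs = qmc.LatinHypercube(d=d, seed=0).random(n)   # Latin hypercube design

X_test = rng.random((2000, d))
for name, X in [("MC", X_mc), ("LHS", X_lhs)]:
    model = SVR(kernel="rbf", C=10.0).fit(X, test_function(X))
    err = np.sqrt(np.mean((model.predict(X_test) - test_function(X_test)) ** 2))
    print(f"{name}: RMSE = {err:.4f}")
```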

  15. Approximate reasoning using terminological models

    NASA Technical Reports Server (NTRS)

    Yen, John; Vaidya, Nitin

    1992-01-01

    Term Subsumption Systems (TSS) form a knowledge-representation scheme in AI that can express the defining characteristics of concepts through a formal language that has a well-defined semantics and incorporates a reasoning mechanism that can deduce whether one concept subsumes another. However, TSSs have very limited ability to deal with the issue of uncertainty in knowledge bases. The objective of this research is to address issues in combining approximate reasoning with term subsumption systems. To do this, we have extended an existing AI architecture (CLASP) that is built on top of a term subsumption system (LOOM). First, the assertional component of LOOM has been extended for asserting and representing uncertain propositions. Second, we have extended the pattern matcher of CLASP for plausible rule-based inferences. Third, an approximate reasoning model has been added to facilitate various kinds of approximate reasoning. Finally, the issue of inconsistency in truth values due to inheritance is addressed using justification of those values. This architecture enhances the reasoning capabilities of expert systems by providing support for reasoning under uncertainty using knowledge captured in TSS. Also, as definitional knowledge is explicit and separate from heuristic knowledge for plausible inferences, the maintainability of expert systems could be improved.

  16. The factors influence compatibility of pulse-pulse intervals with R-R intervals.

    PubMed

    Liu, An-Bang; Wu, Hsien-Tsai; Liu, Cyuan-Cin; Hsu, Chun-Hsiang; Chen, Ding-Yuan

    2013-01-01

    Cardiac autonomic dysfunction assessed by power spectral analysis of electrocardiographic (ECG) R-R intervals (RRI) is a useful method in clinical research. The compatibility of pulse-pulse intervals (PPI) acquired by photoplethysmography (PPG) with RRI is equivocal. In this study, we investigate factors that influence this compatibility. We recruited 25 young, healthy subjects divided into two groups: normal-weight subjects (Group 1, BMI < 24, n = 15) and overweight subjects (Group 2, BMI ≥ 24, n = 10). ECG and PPG were measured for 5 minutes, and cross-approximate entropy (CAE) and the fast Fourier transform (FFT) were used to assess the compatibility between RRI and PPI. The CAE values in Group 1 were significantly lower than in Group 2 (1.71 ± 0.12 vs. 1.83 ± 0.11, P = 0.011), and a positive linear relationship was found between the CAE value and risk factors of metabolic syndrome. There was no significant difference between the LFP/HFP ratio of RRI (LHRRRI) and the LFP/HFP ratio of PPI (LHRPPI) in Group 1 (1.42 ± 0.19 vs. 1.38 ± 0.17, P = 0.064), whereas LHRRRI was significantly higher than LHRPPI in Group 2 (2.18 ± 0.37 vs. 1.93 ± 0.30, P = 0.005). Care should therefore be taken when using PPI to assess autonomic function in obese subjects or patients with metabolic syndrome.

  17. Heat flow in the postquasistatic approximation

    SciTech Connect

    Rodriguez-Mueller, B.; Peralta, C.; Barreto, W.; Rosales, L.

    2010-08-15

    We apply the postquasistatic approximation to study the evolution of spherically symmetric fluid distributions undergoing dissipation in the form of radial heat flow. For a model that corresponds to an incompressible fluid departing from the static equilibrium, it is not possible to go far from the initial state after the emission of a small amount of energy. Initially collapsing distributions of matter are not permitted. Emission of energy can be considered as a mechanism to avoid the collapse. If the distribution collapses initially and emits one hundredth of the initial mass only the outermost layers evolve. For a model that corresponds to a highly compressed Fermi gas, only the outermost shell can evolve with a shorter hydrodynamic time scale.

  18. Min and Max Extreme Interval Values

    ERIC Educational Resources Information Center

    Jance, Marsha L.; Thomopoulos, Nick T.

    2011-01-01

    The paper shows how to find the min and max extreme interval values for the exponential and triangular distributions from the min and max uniform extreme interval values. Tables are provided to show the min and max extreme interval values for the uniform, exponential, and triangular distributions for different probabilities and observation sizes.
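
    The mechanism in this record lends itself to a short worked example. The sketch below is a hedged illustration of one way such values can be related: because an inverse CDF is monotone, interval values for uniform order-statistic extremes map directly to exponential extremes via x = F^-1(u). The tabulated values in the paper may be defined differently; the probability statements used here are assumptions.

```python
# Hedged sketch: map uniform min/max extreme interval values to the
# exponential distribution through the (monotone) inverse CDF.
import numpy as np

def uniform_extremes(n, p):
    """Endpoints u with P(U_(1) <= u) = p resp. P(U_(n) <= u) = p."""
    u_min = 1.0 - (1.0 - p) ** (1.0 / n)   # from P(U_(1) <= u) = 1 - (1-u)^n
    u_max = p ** (1.0 / n)                 # from P(U_(n) <= u) = u^n
    return u_min, u_max

def exponential_quantile(u, lam=1.0):
    return -np.log(1.0 - u) / lam          # inverse CDF of Exp(lam)

n, p = 10, 0.95
u_min, u_max = uniform_extremes(n, p)
print("uniform:", (u_min, u_max))
print("exponential:", (exponential_quantile(u_min), exponential_quantile(u_max)))
```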

  19. Familiarity-Frequency Ratings of Melodic Intervals

    ERIC Educational Resources Information Center

    Jeffries, Thomas B.

    1972-01-01

    Objective of this study was to determine subjects' reliability in rating randomly played ascending and descending melodic intervals within the octave on the basis of their familiarity with each type of interval and the frequency of their having experienced each type of interval in music. (Author/CB)

  20. Communication: Improved pair approximations in local coupled-cluster methods

    NASA Astrophysics Data System (ADS)

    Schwilk, Max; Usvyat, Denis; Werner, Hans-Joachim

    2015-03-01

    In local coupled cluster treatments the electron pairs can be classified according to the magnitude of their energy contributions or distances into strong, close, weak, and distant pairs. Different approximations are introduced for the latter three classes. In this communication, an improved simplified treatment of close and weak pairs is proposed, which is based on long-range cancellations of individually slowly decaying contributions in the amplitude equations. Benchmark calculations for correlation, reaction, and activation energies demonstrate that these approximations work extremely well, while pair approximations based on local second-order Møller-Plesset theory can lead to errors that are 1-2 orders of magnitude larger.

  1. Reinforcing value of interval and continuous physical activity in children.

    PubMed

    Barkley, Jacob E; Epstein, Leonard H; Roemmich, James N

    2009-08-04

    During play children engage in short bouts of intense activity, much like interval training. This natural preference for interval-type activity may have important implications for prescribing the most motivating type of physical activity, but the motivation of children to be physically active in interval or continuous fashion has not yet been examined. In the present study, ventilatory threshold (VT) and VO2 peak were determined in boys (n=16) and girls (n=16) age 10 ± 1.3 years. Children sampled interval and continuous constant-load physical activity protocols on a cycle ergometer at intensities 20% above and 20% below VT on separate days. The physical activity protocols were matched for energy expenditure. Children then completed an operant button-pressing task using a progressive fixed-ratio schedule to assess the relative reinforcing value (RRV) of interval versus continuous physical activity. The number of button presses performed to gain access to interval or continuous physical activity and the output maximum (O(max)) were the primary outcome variables. Children performed more button presses (P<0.005) and had a greater O(max) (P<0.005) when working to gain access to interval compared to continuous physical activity at intensities >VT. Interval-type physical activity was thus more reinforcing than continuous constant-load physical activity for children when exercising both >VT and <VT.

  2. Interpregnancy interval and obstetrical complications.

    PubMed

    Shachar, Bat Zion; Lyell, Deirdre J

    2012-09-01

    Obstetricians are often presented with questions regarding the optimal interpregnancy interval (IPI). Short IPI has been associated with adverse perinatal and maternal outcomes, ranging from preterm birth and low birth weight to neonatal and maternal morbidity and mortality. Long IPI has in turn been associated with increased risk for preeclampsia and labor dystocia. In this review, we discuss the data regarding these associations along with recent studies revealing associations of short IPI with birth defects, schizophrenia, and autism. The optimal IPI may vary for different subgroups. We discuss the consequences of short IPI in women with a prior cesarean section, in particular the increased risk for uterine rupture and the considerations regarding a trial of labor in this subgroup. We review studies examining the interaction between short IPI and advanced maternal age and discuss the risk-benefit assessment for these women. Finally, we turn our attention to women after a stillbirth or an abortion, who often desire to conceive again with minimal delay. We discuss studies speaking in favor of a shorter IPI in this group. The accumulated data allow for the reevaluation of current IPI recommendations and management guidelines for women in general and among subpopulations with special circumstances. In particular, we suggest lowering the current minimal IPI recommendation to only 18 months (vs 24 months according to the latest World Health Organization recommendations), with even shorter recommended minimal IPI for women of advanced age and those who conceive after a spontaneous or induced abortion.

  3. Variational extensions of the mean spherical approximation

    NASA Astrophysics Data System (ADS)

    Blum, L.; Ubriaco, M.

    2000-04-01

    In a previous work we proposed a method to study complex systems with objects of arbitrary size. For certain specific forms of the atomic and molecular interactions, surprisingly simple and accurate theories (the Variational Mean Spherical Scaling Approximation, VMSSA) [Velazquez, Blum, J. Chem. Phys. 110 (1999) 10931; Blum, Velazquez, J. Quantum Chem. (Theochem), in press] can be obtained. The basic idea is that if the interactions can be expressed as a rapidly converging sum of (complex) exponentials, then the Ornstein-Zernike (OZ) equation has an analytical solution. This analytical solution is used to construct a robust interpolation scheme, the variational mean spherical scaling approximation (VMSSA). The Helmholtz excess free energy ΔA = ΔE − TΔS is then written as a function of a scaling matrix Γ; both the excess energy ΔE(Γ) and the excess entropy ΔS(Γ) are functionals of Γ. In previous work of this series the form of this functional was found for the two-exponential [Blum, Herrera, Mol. Phys. 96 (1999) 821] and three-exponential [Blum, J. Stat. Phys., submitted for publication] closures of the OZ equation. In this paper we extend this to M Yukawas, a complete basis set: we obtain a solution for the one-component case and give a closed-form expression for the MSA excess entropy, which is also the VMSSA entropy.

  4. Improved non-approximability results

    SciTech Connect

    Bellare, M.; Sudan, M.

    1994-12-31

    We indicate strong non-approximability factors for central problems: N^(1/4) for Max Clique; N^(1/10) for Chromatic Number; and 66/65 for Max 3SAT. Underlying the Max Clique result is a proof system in which the verifier examines only three "free bits" to attain an error of 1/2. Underlying the Chromatic Number result is a reduction from Max Clique which is more efficient than previous ones.

  5. Approximate transferability in conjugated polyalkenes

    NASA Astrophysics Data System (ADS)

    Eskandari, Keiamars; Mandado, Marcos; Mosquera, Ricardo A.

    2007-03-01

    QTAIM-computed atomic and bond properties, as well as delocalization indices (obtained from electron densities computed at the HF, MP2 and B3LYP levels), of several linear and branched conjugated polyalkenes and O- and N-containing conjugated polyenes have been employed to assess approximately transferable CH groups. The values of these properties indicate that the effects of the functional group extend to four CH groups, whereas those of the terminal carbon affect up to three carbons. Ternary carbons also significantly modify the properties of atoms in the α, β and γ positions.

  6. Coulomb glass in the random phase approximation

    NASA Astrophysics Data System (ADS)

    Basylko, S. A.; Onischouk, V. A.; Rosengren, A.

    2002-01-01

    A three-dimensional model of electrons localized on randomly distributed donor sites of density n, with the acceptor charge uniformly smeared over these sites (-Ke on each), is considered in the random phase approximation (RPA). For the case K = 1/2 the free energy, the density of the one-site energies (DOSE) ε, and the pair OSE correlators are found. In the high-temperature region (e^2 n^(1/3)/T) < 1 (T is the temperature), the RPA energies and DOSE are in good agreement with the corresponding data from Monte Carlo simulations. The thermodynamics of the model in this region is similar to that of an electrolyte in the regime of Debye screening. In the vicinity of the Fermi level μ = 0, OSE correlations depending on sgn(ε1·ε2), with a very slow decay law, have been found. The main result is that even in the temperature range where the energy of a Coulomb glass is determined by Debye screening effects, correlations of a long-range nature between the OSE still exist.

  7. Wavelet Approximation in Data Assimilation

    NASA Technical Reports Server (NTRS)

    Tangborn, Andrew; Atlas, Robert (Technical Monitor)

    2002-01-01

    Estimation of the state of the atmosphere with the Kalman filter remains a distant goal because of the high computational cost of evolving the error covariance for both linear and nonlinear systems. Wavelet approximation is presented here as a possible solution that efficiently compresses both global and local covariance information. We demonstrate the compression characteristics on the error correlation field from a global two-dimensional chemical constituent assimilation, and implement an adaptive wavelet approximation scheme for the assimilation of the one-dimensional Burgers' equation. In the former problem, we show that 99% of the error correlation can be represented by just 3% of the wavelet coefficients, with good representation of localized features. In the Burgers' equation assimilation, the discrete linearized equations (tangent linear model) and analysis covariance are projected onto a wavelet basis and truncated to just 6% of the coefficients. A nearly optimal forecast is achieved, and we show that errors due to truncation of the dynamics are no greater than the errors due to covariance truncation.
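
    The compression step is simple to emulate. The hedged sketch below decomposes a synthetic localized correlation field with a 2D discrete wavelet transform, keeps only the largest 3% of coefficients, and measures the reconstruction error; it uses PyWavelets and a Daubechies basis as assumptions, not the authors' choice of wavelet.

```python
# Hedged sketch of covariance compression: 2D wavelet transform, hard
# thresholding to the largest few percent of coefficients, reconstruction.
import numpy as np
import pywt

x = np.linspace(-1, 1, 128)
field = np.exp(-(x[:, None] - x[None, :]) ** 2 / 0.02)   # localized correlation field

coeffs = pywt.wavedec2(field, "db4", level=4)
arr, slices = pywt.coeffs_to_array(coeffs)
keep = 0.03                                              # keep 3% of coefficients
thresh = np.quantile(np.abs(arr), 1 - keep)
arr_c = np.where(np.abs(arr) >= thresh, arr, 0.0)
recon = pywt.waverec2(pywt.array_to_coeffs(arr_c, slices, output_format="wavedec2"), "db4")

rel_err = np.linalg.norm(recon - field) / np.linalg.norm(field)
print(f"relative L2 error with {keep:.0%} of coefficients: {rel_err:.3e}")
```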

  8. Laguerre approximation of random foams

    NASA Astrophysics Data System (ADS)

    Liebscher, André

    2015-09-01

    Stochastic models for the microstructure of foams are valuable tools to study the relations between microstructure characteristics and macroscopic properties. Owing to the physical laws behind the formation of foams, Laguerre tessellations have turned out to be suitable models for foams. Laguerre tessellations are weighted generalizations of Voronoi tessellations, where polyhedral cells are formed through the interaction of weighted generator points. While both share the same topology, the cell curvature of foams allows only an approximation by Laguerre tessellations. This makes the model fitting a challenging task, especially when the preservation of the local topology is required. In this work, we propose an inversion-based approach to fit a Laguerre tessellation model to a foam. The idea is to find a set of generator points whose tessellation best fits the foam's cell system. For this purpose, we transform the model fitting into a minimization problem that can be solved by gradient descent-based optimization. The proposed algorithm restores the generators of a tessellation if it is known to be Laguerre. If, as in the case of foams, no exact solution is possible, an approximative solution is obtained that maintains the local topology.

  9. Error bounded conic spline approximation for NC code

    NASA Astrophysics Data System (ADS)

    Shen, Liyong

    2012-01-01

    Curve fitting is an important preliminary work for data compression and path interpolators in numerical control (NC). The paper gives a simple conic spline approximation algorithm for G01 code. The algorithm is mainly formed by three steps: divide the G01 code into subsets by discrete curvature detection, find the polygon line segment approximation for each subset within a given error and finally, fit each polygon line segment approximation with a conic Bezier spline. Naturally, a B-spline curve can be obtained by proper knot selection. The algorithm is designed straightforward and efficient without solving any global equation system or optimal problem. It is complete with the selection of the curve's weight. To design the curve more suitable for NC, we present an interval for the weight selection and the error is then computed.

  10. Error bounded conic spline approximation for NC code

    NASA Astrophysics Data System (ADS)

    Shen, Liyong

    2011-12-01

    Curve fitting is an important preliminary work for data compression and path interpolators in numerical control (NC). The paper gives a simple conic spline approximation algorithm for G01 code. The algorithm is mainly formed by three steps: divide the G01 code into subsets by discrete curvature detection, find the polygon line segment approximation for each subset within a given error and finally, fit each polygon line segment approximation with a conic Bezier spline. Naturally, a B-spline curve can be obtained by proper knot selection. The algorithm is designed straightforward and efficient without solving any global equation system or optimal problem. It is complete with the selection of the curve's weight. To design the curve more suitable for NC, we present an interval for the weight selection and the error is then computed.

  11. Analytical approximations for spiral waves

    SciTech Connect

    Löber, Jakob; Engel, Harald

    2013-12-15

    We propose a non-perturbative attempt to solve the kinematic equations for spiral waves in excitable media. From the eikonal equation for the wave front we derive an implicit analytical relation between rotation frequency Ω and core radius R0. For free, rigidly rotating spiral waves our analytical prediction is in good agreement with numerical solutions of the linear eikonal equation not only for very large but also for intermediate and small values of the core radius. An equivalent Ω(R+) dependence improves the result by Keener and Tyson for spiral waves pinned to a circular defect of radius R+ with Neumann boundaries at the periphery. Simultaneously, analytical approximations for the shape of free and pinned spirals are given. We discuss the reasons why the ansatz fails to correctly describe the dependence of the rotation frequency on the excitability of the medium.

  12. Approximating metal-insulator transitions

    NASA Astrophysics Data System (ADS)

    Danieli, Carlo; Rayanov, Kristian; Pavlov, Boris; Martin, Gaven; Flach, Sergej

    2015-12-01

    We consider quantum wave propagation in one-dimensional quasiperiodic lattices. We propose an iterative construction of quasiperiodic potentials from sequences of potentials with increasing spatial period. At each finite iteration step, the eigenstates reflect the properties of the limiting quasiperiodic potential up to a controlled maximum system size. We then observe approximate Metal-Insulator Transitions (MIT) at the finite iteration steps. We also report evidence of mobility edges, which are at variance with the celebrated Aubry-André model. The dynamics near the MIT shows a critical slowing down of the ballistic group velocity in the metallic phase, similar to the divergence of the localization length in the insulating phase.
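
    The construction is straightforward to reproduce numerically. The sketch below is a hedged illustration that approximates a quasiperiodic Aubry-André potential cos(2πβn) by periodic approximants with β a Fibonacci ratio, and uses the mean inverse participation ratio (IPR) of the eigenstates to signal the approximate metal-insulator transition near coupling λ = 2. The model parameters are standard textbook assumptions, not taken from the paper.

```python
# Hedged sketch: periodic approximants of a quasiperiodic lattice. The IPR is
# ~1/n for extended states and O(1) for localized states.
import numpy as np

def ipr_at(lam, beta, n):
    h = np.diag(lam * np.cos(2 * np.pi * beta * np.arange(n)))
    h += np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)  # hopping t = 1
    _, vecs = np.linalg.eigh(h)
    return np.mean(np.sum(vecs ** 4, axis=0))

fib = [1, 1]
while fib[-1] < 400:
    fib.append(fib[-1] + fib[-2])
beta, n = fib[-2] / fib[-1], fib[-1]        # periodic approximant of the golden mean
for lam in (1.0, 2.0, 3.0):
    print(f"lambda = {lam}: mean IPR = {ipr_at(lam, beta, n):.4f}")
```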

  13. Analytical approximations for spiral waves.

    PubMed

    Löber, Jakob; Engel, Harald

    2013-12-01

    We propose a non-perturbative attempt to solve the kinematic equations for spiral waves in excitable media. From the eikonal equation for the wave front we derive an implicit analytical relation between rotation frequency Ω and core radius R(0). For free, rigidly rotating spiral waves our analytical prediction is in good agreement with numerical solutions of the linear eikonal equation not only for very large but also for intermediate and small values of the core radius. An equivalent Ω(R(+)) dependence improves the result by Keener and Tyson for spiral waves pinned to a circular defect of radius R(+) with Neumann boundaries at the periphery. Simultaneously, analytical approximations for the shape of free and pinned spirals are given. We discuss the reasons why the ansatz fails to correctly describe the dependence of the rotation frequency on the excitability of the medium.

  14. Indexing the approximate number system.

    PubMed

    Inglis, Matthew; Gilmore, Camilla

    2014-01-01

    Much recent research attention has focused on understanding individual differences in the approximate number system (ANS), a cognitive system believed to underlie human mathematical competence. To date researchers have used four main indices of ANS acuity, and have typically assumed that they measure similar properties. Here we report a study which questions this assumption. We demonstrate that the numerical ratio effect has poor test-retest reliability and that it does not relate to either Weber fractions or accuracy on nonsymbolic comparison tasks. Furthermore, we show that Weber fractions follow a strongly skewed distribution and that they have lower test-retest reliability than a simple accuracy measure. We conclude by arguing that in the future researchers interested in indexing individual differences in ANS acuity should use accuracy figures, not Weber fractions or numerical ratio effects.

  15. Intervals in evolutionary algorithms for global optimization

    SciTech Connect

    Patil, R.B.

    1995-05-01

    Optimization is of central concern to a number of disciplines. Interval arithmetic methods for global optimization provide (guaranteed) verified results. These methods are mainly restricted to classes of objective functions that are twice differentiable, and they use a simple strategy of eliminating or splitting larger regions of the search space during the global optimization process. An efficient approach is proposed that combines this strategy from interval global optimization methods with the robustness of evolutionary algorithms. In the proposed approach, the search begins with randomly created interval vectors whose widths equal the whole domain. Before the evolutionary process begins, the fitness of these interval parameter vectors is defined by evaluating the objective function at the center of the initial interval vectors. In the subsequent evolutionary process, a local optimization step returns an estimate of the bounds of the objective function over the interval vectors. Though these bounds may not be correct at the beginning, owing to large interval widths and complicated function properties, the process of reducing interval widths over time, together with a selection approach similar to simulated annealing, helps in estimating reasonably correct bounds as the population evolves. The interval parameter vectors at these estimated bounds (local optima) are then subjected to crossover and mutation operators. This evolutionary process continues for a predetermined number of generations in search of the global optimum.
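
    A hedged, minimal sketch of the hybrid idea follows: an evolutionary search over interval boxes whose widths shrink over the generations, with fitness taken at the box centre. True interval-arithmetic bounding of the objective is replaced by centre-point evaluation for brevity, so this illustrates the control flow only, not the verified method; all parameters are invented.

```python
# Hedged sketch: interval-flavoured evolutionary search on a multimodal
# test function. Boxes start as the whole domain and shrink over time.
import numpy as np

def objective(x):                              # Rastrigin test function
    return np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x) + 10)

rng = np.random.default_rng(0)
dim, pop_size, gens, lo, hi = 2, 30, 60, -5.0, 5.0
centers = rng.uniform(lo, hi, (pop_size, dim))
width = hi - lo                                # all boxes start as the whole domain

for g in range(gens):
    width *= 0.92                              # shrink interval widths over time
    fit = np.array([objective(c) for c in centers])
    parents = centers[np.argsort(fit)[: pop_size // 2]]   # truncation selection
    kids = []
    for _ in range(pop_size - len(parents)):
        a, b = parents[rng.integers(len(parents), size=2)]
        child = np.where(rng.random(dim) < 0.5, a, b)      # uniform crossover
        child += rng.uniform(-width / 4, width / 4, dim)   # mutation within the box
        kids.append(np.clip(child, lo, hi))
    centers = np.vstack([parents, kids])

best = centers[np.argmin([objective(c) for c in centers])]
print("best point:", best, "value:", objective(best))
```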

  16. Approximate analytic solutions to the NPDD: Short exposure approximations

    NASA Astrophysics Data System (ADS)

    Close, Ciara E.; Sheridan, John T.

    2014-04-01

    There have been many attempts to accurately describe the photochemical processes that take place in photopolymer materials. As the models have become more accurate, solving them has become more numerically intensive and more 'opaque'. Recent models incorporate the major photochemical reactions taking place as well as the diffusion effects resulting from the photo-polymerisation process, and have accurately described these processes in a number of different materials. It is our aim to develop accessible mathematical expressions which provide physical insights and simple quantitative predictions of practical value to material designers and users. In this paper, starting with the Non-Local Photo-Polymerisation Driven Diffusion (NPDD) model coupled integro-differential equations, we first simplify these equations and validate the accuracy of the resulting approximate model. This new set of governing equations is then used to produce accurate analytic solutions (polynomials) describing the evolution of the monomer and polymer concentrations, and the grating refractive index modulation, in the case of short low-intensity sinusoidal exposures. The physical significance of the results and their consequences for holographic data storage (HDS) are then discussed.

  17. High-Intensity Interval Exercise and Postprandial Triacylglycerol.

    PubMed

    Burns, Stephen F; Miyashita, Masashi; Stensel, David J

    2015-07-01

    This review examined if high-intensity interval exercise (HIIE) reduces postprandial triacylglycerol (TAG) levels. Fifteen studies were identified, in which the effect of interval exercise conducted at an intensity of >65% of maximal oxygen uptake was evaluated on postprandial TAG levels. Analysis was divided between studies that included supramaximal exercise and those that included submaximal interval exercise. Ten studies examined the effect of a single session of low-volume HIIE including supramaximal sprints on postprandial TAG. Seven of these studies noted reductions in the postprandial total TAG area under the curve the morning after exercise of between ~10 and 21% compared with rest, but three investigations found no significant difference in TAG levels. Variations in the HIIE protocol used, inter-individual variation or insufficient time post-exercise for an increase in lipoprotein lipase activity are proposed reasons for the divergent results among studies. Five studies examined the effect of high-volume submaximal interval exercise on postprandial TAG. Four of these studies were characterised by high exercise energy expenditure and effectively attenuated total postprandial TAG levels by ~15-30%, but one study with a lower energy expenditure found no effect on TAG. The evidence suggests that supramaximal HIIE can induce large reductions in postprandial TAG levels but findings are inconsistent. Submaximal interval exercise offers no TAG metabolic or time advantage over continuous aerobic exercise but could be appealing in nature to some individuals. Future research should examine if submaximal interval exercise can reduce TAG levels in line with more realistic and achievable exercise durations of 30 min per day.

  18. Application of Interval Predictor Models to Space Radiation Shielding

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.; Norman, Ryan B.; Blattnig, Steve R.

    2016-01-01

    This paper develops techniques for predicting the uncertainty range of an output variable given input-output data. These models are called Interval Predictor Models (IPM) because they yield an interval-valued function of the input. This paper develops IPMs having a radial basis structure. This structure enables the formal description of (i) the uncertainty in the model's parameters, (ii) the predicted output interval, and (iii) the probability that a future observation would fall in such an interval. In contrast to other metamodeling techniques, this probabilistic certificate of correctness does not require making any assumptions on the structure of the mechanism from which data are drawn. Optimization-based strategies for calculating IPMs having minimal spread while containing all the data are developed. Constraints for bounding the minimum interval spread over the continuum of inputs, regulating the IPM's variation/oscillation, and centering its spread about a target point, are used to prevent data overfitting. Furthermore, we develop an approach for using expert opinion during extrapolation. This metamodeling technique is illustrated using a radiation shielding application for space exploration. In this application, we use IPMs to describe the error incurred in predicting the flux of particles resulting from the interaction between a high-energy incident beam and a target.
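
    The optimization-based strategy can be illustrated in miniature. The hedged sketch below poses a much-simplified interval predictor as a linear program: affine lower and upper bounds of minimal average spread that contain all the data. The paper's radial-basis IPMs, oscillation constraints, and probabilistic certificates are not reproduced; the affine basis and all names are illustrative assumptions.

```python
# Hedged sketch: affine interval predictor of minimal mean spread containing
# all data points, solved as a linear program with SciPy.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 1, 40))
y = 2 * x + 0.3 * rng.standard_normal(40)        # noisy linear data
phi = np.column_stack([np.ones_like(x), x])      # basis [1, x]

# decision variables z = [a_lower (2), a_upper (2)];
# minimize mean spread = mean(phi) @ a_upper - mean(phi) @ a_lower
c = np.concatenate([-phi.mean(axis=0), phi.mean(axis=0)])
A_ub = np.block([[phi, np.zeros_like(phi)],      # lower(x_i) <= y_i
                 [np.zeros_like(phi), -phi]])    # upper(x_i) >= y_i
b_ub = np.concatenate([y, -y])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * 4)
a_lo, a_up = res.x[:2], res.x[2:]
print("lower:", a_lo, "upper:", a_up, "mean spread:", res.fun)
```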

  19. When Density Functional Approximations Meet Iron Oxides.

    PubMed

    Meng, Yu; Liu, Xing-Wu; Huo, Chun-Fang; Guo, Wen-Ping; Cao, Dong-Bo; Peng, Qing; Dearden, Albert; Gonze, Xavier; Yang, Yong; Wang, Jianguo; Jiao, Haijun; Li, Yongwang; Wen, Xiao-Dong

    2016-10-11

    Three density functional approximations (DFAs), PBE, PBE+U, and Heyd-Scuseria-Ernzerhof screened hybrid functional (HSE), were employed to investigate the geometric, electronic, magnetic, and thermodynamic properties of four iron oxides, namely, α-FeOOH, α-Fe2O3, Fe3O4, and FeO. Comparing our calculated results with available experimental data, we found that HSE (a = 0.15) (containing 15% "screened" Hartree-Fock exchange) can provide reliable values of lattice constants, Fe magnetic moments, band gaps, and formation energies of all four iron oxides, while standard HSE (a = 0.25) seriously overestimates the band gaps and formation energies. For PBE+U, a suitable U value can give quite good results for the electronic properties of each iron oxide, but it is challenging to accurately get other properties of the four iron oxides using the same U value. Subsequently, we calculated the Gibbs free energies of transformation reactions among iron oxides using the HSE (a = 0.15) functional and plotted the equilibrium phase diagrams of the iron oxide system under various conditions, which provide reliable theoretical insight into the phase transformations of iron oxides.

  20. Confidence interval based parameter estimation--a new SOCR applet and activity.

    PubMed

    Christou, Nicolas; Dinov, Ivo D

    2011-01-01

    Many scientific investigations depend on obtaining data-driven, accurate, robust and computationally tractable parameter estimates. In the face of unavoidable intrinsic variability, there are different algorithmic approaches, prior assumptions and fundamental principles for computing point and interval estimates. Efficient and reliable parameter estimation is critical in making inference about observable experiments, summarizing process characteristics and prediction of experimental behaviors. In this manuscript, we demonstrate simulation, construction, validation and interpretation of confidence intervals, under various assumptions, using the interactive web-based tools provided by the Statistics Online Computational Resource (http://www.SOCR.ucla.edu). Specifically, we present confidence interval examples for population means, with known or unknown population standard deviation; population variance; population proportion (exact and approximate); as well as confidence intervals based on bootstrapping or the asymptotic properties of the maximum likelihood estimates. Like all SOCR resources, these confidence interval resources may be openly accessed via an Internet-connected Java-enabled browser. The SOCR confidence interval applet enables the user to empirically explore and investigate the effects of the confidence level, the sample size and the parameter of interest on the corresponding confidence interval. Two applications of the new interval estimation computational library are presented. The first one is a simulation of confidence interval estimation of the US unemployment rate, and the second application demonstrates the computations of point and interval estimates of hippocampal surface complexity for Alzheimer's disease patients, mild cognitive impairment subjects and asymptomatic controls.
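
    Two of the textbook constructions listed above are easy to reproduce outside the applet. The following hedged sketch computes a t-based interval for a mean with unknown population standard deviation and a normal-approximation interval for a proportion; it uses SciPy rather than the SOCR library, and the sample data are invented.

```python
# Hedged sketch of two standard confidence interval constructions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
sample = rng.normal(loc=5.0, scale=2.0, size=30)

# mean, population sd unknown: xbar +/- t_{alpha/2, n-1} * s / sqrt(n)
n, conf = len(sample), 0.95
xbar, s = sample.mean(), sample.std(ddof=1)
t = stats.t.ppf(0.5 + conf / 2, df=n - 1)
print("mean CI:", (xbar - t * s / np.sqrt(n), xbar + t * s / np.sqrt(n)))

# proportion, normal approximation: phat +/- z_{alpha/2} * sqrt(phat(1-phat)/n)
successes, trials = 37, 100
phat = successes / trials
z = stats.norm.ppf(0.5 + conf / 2)
half = z * np.sqrt(phat * (1 - phat) / trials)
print("proportion CI:", (phat - half, phat + half))
```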

  1. Capacitated max-Batching with Interval Graph Compatibilities

    NASA Astrophysics Data System (ADS)

    Nonner, Tim

    We consider the problem of partitioning interval graphs into cliques of bounded size. Each interval has a weight, and the weight of a clique is the maximum weight of any interval in the clique. This natural graph problem can be interpreted as a batch scheduling problem. Solving a long-standing open problem, we show NP-hardness, even if the bound on the clique sizes is constant. Moreover, we give a PTAS based on a novel dynamic programming technique for this case.

  2. Facies and reservoir characterization of an upper Smackover interval, East Barnett Field, Conecuh County, Alabama

    SciTech Connect

    Bergan, G.R.; Hearne, J.H.

    1990-09-01

    Excellent production from an upper Smackover (Jurassic) ooid grainstone was established in April 1988 by Coastal Oil and Gas Corporation with the discovery of the East Barnett field in Conecuh County, Alabama. A structure map on the top of the Smackover Formation and a net porosity isopach map of the producing intervals show that the trapping mechanism at the field has both structural and stratigraphic components. Two diamond cores were cut from 13,580 to 13,701 ft, beginning approximately 20 ft below the top of the Smackover. Two shallowing-upward sequences are identified in the cores. The first sequence starts at the base of the cored interval and is characterized by thick, subtidal algal boundstones capped by a collapse breccia facies. This entire sequence was deposited in the shallow subtidal to lower intertidal zone. Subsequent lowering of sea level exposed the top portion of the boundstones to meteoric or mixing zone waters, creating the diagenetic collapse breccia facies. The anhydrite associated with the breccia also indicates surface exposure. The second sequence begins with algal boundstones that sharply overlie the collapse breccia facies of the previous sequence. These boundstones grade upward into high-energy, cross-bedded ooid beach and oncoidal, peloidal beach shoreface deposits. Proximity of the overlying Buckner anhydrite, representing a probable sabkha system, favors a beach or very nearshore shoal interpretation for the ooid grainstones. The ooid grainstone facies, which is the primary producing interval, has measured porosity values ranging from 5.3% to 17.8% and averaging 11.0%. Measured permeability values range from 0.04 md to 701 md and average 161.63 md. These high porosity and permeability values result from abundant primary intergranular pore space, as well as secondary pore space created by dolomitization and dissolution of framework grains.

  3. Interval Management Display Design Study

    NASA Technical Reports Server (NTRS)

    Baxley, Brian T.; Beyer, Timothy M.; Cooke, Stuart D.; Grant, Karlus A.

    2014-01-01

    In 2012, the Federal Aviation Administration (FAA) estimated that U.S. commercial air carriers moved 736.7 million passengers over 822.3 billion revenue-passenger miles. The FAA also forecasts, in that same report, an average annual increase in passenger traffic of 2.2 percent per year for the next 20 years, which approximates to one-and-a-half times the number of today's aircraft operations and passengers by the year 2033. If airspace capacity and throughput remain unchanged, then flight delays will increase, particularly at those airports already operating near or at capacity. Therefore it is critical to create new and improved technologies, communications, and procedures to be used by air traffic controllers and pilots. National Aeronautics and Space Administration (NASA), the FAA, and the aviation industry are working together to improve the efficiency of the National Airspace System and the cost to operate in it in several ways, one of which is through the creation of the Next Generation Air Transportation System (NextGen). NextGen is intended to provide airspace users with more precise information about traffic, routing, and weather, as well as improve the control mechanisms within the air traffic system. NASA's Air Traffic Management Technology Demonstration-1 (ATD-1) Project is designed to contribute to the goals of NextGen, and accomplishes this by integrating three NASA technologies to enable fuel-efficient arrival operations into high-density airports. The three NASA technologies and procedures combined in the ATD-1 concept are advanced arrival scheduling, controller decision support tools, and aircraft avionics to enable multiple time deconflicted and fuel efficient arrival streams in high-density terminal airspace.

  4. Confidence Interval Procedures for Reliability Growth Analysis

    DTIC Science & Technology

    1977-06-01

    AMSAA TECHNICAL REPORT NO. 197: CONFIDENCE INTERVAL PROCEDURES FOR RELIABILITY GROWTH ANALYSIS. LARRY H. CROW, JUNE 1977. APPROVED FOR PUBLIC RELEASE...Confidence Intervals for M(T). ...Confidence interval procedures for the parameters β and λ are presented in [1], [2], [4]. In the application of the Weibull process model to

  5. Multidimensional stochastic approximation Monte Carlo

    NASA Astrophysics Data System (ADS)

    Zablotskiy, Sergey V.; Ivanov, Victor A.; Paul, Wolfgang

    2016-06-01

    Stochastic Approximation Monte Carlo (SAMC) has been established as a mathematically founded, powerful flat-histogram Monte Carlo method, used to determine the density of states, g(E), of a model system. We show here how it can be generalized for the determination of multidimensional probability distributions (or, equivalently, densities of states) of macroscopic or mesoscopic variables defined on the space of microstates of a statistical mechanical system. This establishes the method as a systematic way of coarse-graining a model system or, in other words, of performing a renormalization group step on a model. We discuss the formulation of the Kadanoff block spin transformation and the coarse-graining procedure for polymer models in this language. We also apply it to a standard case in the literature of two-dimensional densities of states, where two competing energetic effects are present, g(E1, E2). We show when and why care has to be exercised when obtaining the microcanonical density of states g(E1+E2) from g(E1, E2).
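
    The update rule is compact enough to show in full. Below is a hedged one-dimensional SAMC sketch that learns log g(E) for a small periodic Ising chain; the paper's point is the multidimensional generalization g(E1, E2), for which the same update applies with a vector-valued energy and a multidimensional histogram. The gain schedule, chain length, and run length are arbitrary choices.

```python
# Hedged sketch of 1D SAMC: a flat-histogram walk that learns log g(E)
# for a 12-spin periodic Ising chain, compared against the exact counts.
import math
import numpy as np

rng = np.random.default_rng(0)
N = 12
spins = rng.choice([-1, 1], N)
def energy(s):                                   # periodic 1D Ising chain, J = 1
    return int(-np.sum(s * np.roll(s, 1)))

e_levels = list(range(-N, N + 1, 4))             # allowed energies -N, -N+4, ..., N
idx = {e: i for i, e in enumerate(e_levels)}
log_g = np.zeros(len(e_levels))
e_cur, t0 = energy(spins), 1000.0

for t in range(1, 200_001):
    gamma = t0 / max(t0, t)                      # decreasing gain sequence
    i = rng.integers(N)
    spins[i] *= -1                               # propose a single spin flip
    e_new = energy(spins)
    if np.log(rng.random()) < log_g[idx[e_cur]] - log_g[idx[e_new]]:
        e_cur = e_new                            # accept: favors rarely visited energies
    else:
        spins[i] *= -1                           # reject: undo the flip
    log_g[idx[e_cur]] += gamma                   # SAMC update of log g(E)

exact = [2 * math.comb(N, k) for k in range(0, N + 1, 2)]   # k = # unsatisfied bonds
print("SAMC :", np.round(log_g - log_g[0], 2))
print("exact:", np.round(np.log(exact) - np.log(exact[0]), 2))
```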

  6. Femtolensing: Beyond the semiclassical approximation

    NASA Technical Reports Server (NTRS)

    Ulmer, Andrew; Goodman, Jeremy

    1995-01-01

    Femtolensing is a gravitational lensing effect in which the magnification is a function not only of the position and sizes of the source and lens, but also of the wavelength of light. Femtolensing is the only known effect of (10^-13 - 10^-16 solar mass) dark-matter objects and may possibly be detectable in cosmological gamma-ray burst spectra. We present a new and efficient algorithm for femtolensing calculations in general potentials. The physical optics results presented here differ at low frequencies from the semiclassical approximation, in which the flux is attributed to a finite number of mutually coherent images. At higher frequencies, our results agree well with the semiclassical predictions. Applying our method to a point-mass lens with external shear, we find complex events that have structure at both large and small spectral resolution. In this way, we show that femtolensing may be observable for lenses up to 10^-11 solar mass, much larger than previously believed. Additionally, we discuss the possibility of a search for femtolensing of white dwarfs in the Large Magellanic Cloud at optical wavelengths.

  7. Magnus approximation in neutrino oscillations

    NASA Astrophysics Data System (ADS)

    Acero, Mario A.; Aguilar-Arevalo, Alexis A.; D'Olivo, J. C.

    2011-04-01

    Oscillations between active and sterile neutrinos remain as an open possibility to explain some anomalous experimental observations. In a four-neutrino (three active plus one sterile) mixing scheme, we use the Magnus expansion of the evolution operator to study the evolution of neutrino flavor amplitudes within the Earth. We apply this formalism to calculate the transition probabilities from active to sterile neutrinos with energies of the order of a few GeV, taking into account the matter effect for a varying terrestrial density.
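
    A single step of such a scheme is easy to illustrate. The hedged sketch below propagates a two-flavor toy state through a varying matter profile using the first Magnus term, exp(-i ∫H dt) evaluated slab by slab; the 3+1 mixing scheme and the physical parameters of the paper are replaced by invented two-flavor values in natural units.

```python
# Hedged sketch of a first-order Magnus step: over each slab the propagator is
# exp(-i * Omega1), with Omega1 the time integral of the Hamiltonian.
import numpy as np
from scipy.linalg import expm

theta, dm2_over_2E = 0.6, 1.0            # toy mixing angle and oscillation scale
h_vac = dm2_over_2E * np.array([[-np.cos(2 * theta), np.sin(2 * theta)],
                                [np.sin(2 * theta),  np.cos(2 * theta)]]) / 2

def propagate(psi0, density_profile, dt):
    psi = psi0.astype(complex)
    for v in density_profile:            # matter potential varying along the path
        h = h_vac + np.diag([v, 0.0])
        omega1 = h * dt                  # first Magnus term: integral of H over the slab
        psi = expm(-1j * omega1) @ psi
    return psi

profile = 0.5 * (1 + np.sin(np.linspace(0, np.pi, 200)))   # toy density profile
psi = propagate(np.array([1.0, 0.0]), profile, dt=0.05)
print("survival probability:", abs(psi[0]) ** 2)
```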

  8. Approximating Densities of States with Gaps

    NASA Astrophysics Data System (ADS)

    Haydock, Roger; Nex, C. M. M.

    2011-03-01

    Reconstructing a density of states or similar distribution from moments or continued fractions is an important problem in calculating the electronic and vibrational structure of defective or non-crystalline solids. For single bands a quadratic boundary condition introduced previously [Phys. Rev. B 74, 205121 (2006)] produces results which compare favorably with maximum entropy and even give analytic continuations of Green functions to the unphysical sheet. In this paper, the previous boundary condition is generalized to an energy-independent condition for densities with multiple bands separated by gaps. As an example it is applied to a chain of atoms with s, p, and d bands of different widths with different gaps between them. The results are compared with maximum entropy for different levels of approximation. Generalized hypergeometric functions associated with multiple bands satisfy the new boundary condition exactly. Supported by the Richmond F. Snyder Fund.
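
    For readers who want to experiment, the sketch below uses a related moment-based reconstruction (the kernel polynomial method with Jackson damping) to rebuild a two-band density of states with a gap from Chebyshev moments. This is not the quadratic boundary condition scheme of the record; it only illustrates the general moments-to-DOS problem, and the Hamiltonian (whose spectrum is assumed to fit in [-1, 1]) is invented.

```python
# Hedged sketch: Chebyshev-moment (KPM) reconstruction of a gapped DOS.
import numpy as np

def chebyshev_moments(h, n_moments, n_vec=20, seed=0):
    """Stochastic estimate of mu_m = Tr T_m(H) / dim for a rescaled Hamiltonian."""
    rng = np.random.default_rng(seed)
    dim, mu = h.shape[0], np.zeros(n_moments)
    for _ in range(n_vec):
        v0 = rng.choice([-1.0, 1.0], dim)
        t_prev, t_cur = v0, h @ v0
        mu[0] += v0 @ v0
        mu[1] += v0 @ t_cur
        for m in range(2, n_moments):
            t_prev, t_cur = t_cur, 2 * (h @ t_cur) - t_prev   # Chebyshev recurrence
            mu[m] += v0 @ t_cur
    return mu / (n_vec * dim)

def dos(mu, energies):
    n, m = len(mu), np.arange(len(mu))
    jackson = ((n - m + 1) * np.cos(np.pi * m / (n + 1))
               + np.sin(np.pi * m / (n + 1)) / np.tan(np.pi / (n + 1))) / (n + 1)
    g = mu * jackson * np.where(m == 0, 1.0, 2.0)
    theta = np.arccos(energies)
    return ((g[None, :] * np.cos(m[None, :] * theta[:, None])).sum(1)
            / (np.pi * np.sqrt(1 - energies ** 2)))

# two bands separated by a gap: chain with alternating site energies +/- 0.5
dim = 400
h = np.diag(0.3 * np.ones(dim - 1), 1) + np.diag(0.3 * np.ones(dim - 1), -1)
h += np.diag(0.5 * (-1.0) ** np.arange(dim))   # spectrum stays inside [-1, 1]
mu = chebyshev_moments(h, 128)
e = np.linspace(-0.99, 0.99, 9)
print(np.round(dos(mu, e), 3))   # the DOS dips toward 0 inside the gap near E = 0
```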

  9. Approximate Sensory Data Collection: A Survey.

    PubMed

    Cheng, Siyao; Cai, Zhipeng; Li, Jianzhong

    2017-03-10

    With the rapid development of the Internet of Things (IoTs), wireless sensor networks (WSNs) and related techniques, the amount of sensory data manifests an explosive growth. In some applications of IoTs and WSNs, the size of sensory data has already exceeded several petabytes annually, which brings too many troubles and challenges for the data collection, which is a primary operation in IoTs and WSNs. Since the exact data collection is not affordable for many WSN and IoT systems due to the limitations on bandwidth and energy, many approximate data collection algorithms have been proposed in the last decade. This survey reviews the state of the art of approximate data collection algorithms. We classify them into three categories: the model-based ones, the compressive sensing based ones, and the query-driven ones. For each category of algorithms, the advantages and disadvantages are elaborated, some challenges and unsolved problems are pointed out, and the research prospects are forecasted.

  10. Approximate Sensory Data Collection: A Survey

    PubMed Central

    Cheng, Siyao; Cai, Zhipeng; Li, Jianzhong

    2017-01-01

    With the rapid development of the Internet of Things (IoTs), wireless sensor networks (WSNs) and related techniques, the amount of sensory data manifests an explosive growth. In some applications of IoTs and WSNs, the size of sensory data has already exceeded several petabytes annually, which brings too many troubles and challenges for the data collection, which is a primary operation in IoTs and WSNs. Since the exact data collection is not affordable for many WSN and IoT systems due to the limitations on bandwidth and energy, many approximate data collection algorithms have been proposed in the last decade. This survey reviews the state of the art of approximate data collection algorithms. We classify them into three categories: the model-based ones, the compressive sensing based ones, and the query-driven ones. For each category of algorithms, the advantages and disadvantages are elaborated, some challenges and unsolved problems are pointed out, and the research prospects are forecasted. PMID:28287440

  11. QT-Interval Duration and Mortality Rate

    PubMed Central

    Zhang, Yiyi; Post, Wendy S.; Dalal, Darshan; Blasco-Colmenares, Elena; Tomaselli, Gordon F.; Guallar, Eliseo

    2012-01-01

    Background Extreme prolongation or reduction of the QT interval predisposes patients to malignant ventricular arrhythmias and sudden cardiac death, but the association of variations in the QT interval within a reference range with mortality end points in the general population is unclear. Methods We included 7828 men and women from the Third National Health and Nutrition Examination Survey. Baseline QT interval was measured via standard 12-lead electrocardiographic readings. Mortality end points were assessed through December 31, 2006 (2291 deaths). Results After an average follow-up of 13.7 years, the association between QT interval and mortality end points was U-shaped. The multivariate-adjusted hazard ratios comparing participants at or above the 95th percentile of age-, sex-, race-, and R-R interval–corrected QT interval (≥439 milliseconds) with participants in the middle quintile (401 to <410 milliseconds) were 2.03 (95% confidence interval, 1.46-2.81) for total mortality, 2.55 (1.59-4.09) for mortality due to cardiovascular disease (CVD), 1.63 (0.96-2.75) for mortality due to coronary heart disease, and 1.65 (1.16-2.35) for non-CVD mortality. The corresponding hazard ratios comparing participants with a corrected QT interval below the fifth percentile (<377 milliseconds) with those in the middle quintile were 1.39 (95% confidence interval, 1.02-1.88) for total mortality, 1.35 (0.77-2.36) for CVD mortality, 1.02 (0.44-2.38) for coronary heart disease mortality, and 1.42 (0.97-2.08) for non-CVD mortality. Increased mortality also was observed with less extreme deviations of QT-interval duration. Similar, albeit weaker, associations also were observed with Bazett-corrected QT intervals. Conclusion Shortened and prolonged QT-interval durations, even within a reference range, are associated with increased mortality risk in the general population. PMID:22025428

  12. Analytic Interatomic Forces in the Random Phase Approximation

    NASA Astrophysics Data System (ADS)

    Ramberger, Benjamin; Schäfer, Tobias; Kresse, Georg

    2017-03-01

    We show that in the random phase approximation (RPA) the first derivative of the energy with respect to the Green's function is the self-energy in the GW approximation. This relationship allows us to derive compact equations for the RPA interatomic forces. We also show that position-dependent overlap operators are elegantly incorporated in the present framework. The RPA force equations have been implemented in the projector augmented wave formalism, and we present illustrative applications, including ab initio molecular dynamics simulations, the calculation of phonon dispersion relations for diamond and graphite, as well as structural relaxations for water on boron nitride. The present derivation establishes a concise framework for forces within perturbative approaches and is also applicable to more involved approximations for the correlation energy.

  13. Fault Detection and Isolation using Viability Theory and Interval Observers

    NASA Astrophysics Data System (ADS)

    Ghaniee Zarch, Majid; Puig, Vicenç; Poshtan, Javad

    2017-01-01

    This paper proposes the use of interval observers and viability theory in fault detection and isolation (FDI). Viability theory develops mathematical and algorithmic methods for investigating the adaptation to viability constraints of evolutions governed by complex systems under uncertainty. These methods can be used for checking the consistency between observed and predicted behavior by using simple sets that approximate the exact set of possible behavior (in the parameter or state space). In this paper, fault detection is based on checking for an inconsistency between the measured and predicted behaviors using viability theory concepts and sets. Finally, an example is provided in order to show the usefulness of the proposed approach.

  14. Interval and Contour Processing in Autism

    ERIC Educational Resources Information Center

    Heaton, Pamela

    2005-01-01

    High functioning children with autism and age and intelligence matched controls participated in experiments testing perception of pitch intervals and musical contours. The finding from the interval study showed superior detection of pitch direction over small pitch distances in the autism group. On the test of contour discrimination no group…

  15. Interpretation of Confidence Interval Facing the Conflict

    ERIC Educational Resources Information Center

    Andrade, Luisa; Fernández, Felipe

    2016-01-01

    As literature has reported, it is usual that university students in statistics courses, and even statistics teachers, interpret the confidence level associated with a confidence interval as the probability that the parameter value will be between the lower and upper interval limits. To confront this misconception, class activities have been…

  16. SINGLE-INTERVAL GAS PERMEABILITY ESTIMATION

    EPA Science Inventory

    Single-interval, steady-state gas permeability testing requires estimation of pressure at a screened interval, which in turn requires measurement of friction factors as a function of mass flow rate. Friction factors can be obtained by injecting air through a length of pipe...

  17. A consistent collinear triad approximation for operational wave models

    NASA Astrophysics Data System (ADS)

    Salmon, J. E.; Smit, P. B.; Janssen, T. T.; Holthuijsen, L. H.

    2016-08-01

    In shallow water, the spectral evolution associated with energy transfers due to three-wave (or triad) interactions is important for the prediction of nearshore wave propagation and wave-driven dynamics. The numerical evaluation of these nonlinear interactions involves the evaluation of a weighted convolution integral in both frequency and directional space for each frequency-direction component in the wave field. For reasons of efficiency, operational wave models often rely on a so-called collinear approximation that assumes that energy is only exchanged between wave components travelling in the same direction (collinear propagation) to eliminate the directional convolution. In this work, we show that the collinear approximation as presently implemented in operational models is inconsistent. This causes energy transfers to become unbounded in the limit of unidirectional waves (narrow aperture), and results in the underestimation of energy transfers in short-crested wave conditions. We propose a modification to the collinear approximation to remove this inconsistency and to make it physically more realistic. Through comparison with laboratory observations and results from Monte Carlo simulations, we demonstrate that the proposed modified collinear model is consistent, remains bounded, smoothly converges to the unidirectional limit, and is numerically more robust. Our results show that the modifications proposed here result in a consistent collinear approximation, which remains bounded and can provide an efficient approximation to model nonlinear triad effects in operational wave models.

  18. Improved interval estimation of comparative treatment effects

    NASA Astrophysics Data System (ADS)

    Van Krevelen, Ryne Christian

    Comparative experiments, in which subjects are randomized to one of two treatments, are performed often. There is no shortage of papers testing whether a treatment effect exists and providing confidence intervals for the magnitude of this effect. While it is well understood that the object and scope of inference for an experiment will depend on what assumptions are made, these entities are not always clearly presented. We have proposed one possible method, which is based on the ideas of Jerzy Neyman, that can be used for constructing confidence intervals in a comparative experiment. The resulting intervals, referred to as Neyman-type confidence intervals, can be applied in a wide range of cases. Special care is taken to note which assumptions are made and what object and scope of inference are being investigated. We have presented a notation that highlights which parts of a problem are being treated as random. This helps ensure the focus on the appropriate scope of inference. The Neyman-type confidence intervals are compared to possible alternatives in two different inference settings: one in which inference is made about the units in the sample and one in which inference is made about units in a fixed population. A third inference setting, one in which inference is made about a process distribution, is also discussed. It is stressed that certain assumptions underlying this third type of inference are unverifiable. When these assumptions are not met, the resulting confidence intervals may cover their intended target well below the desired rate. Through simulation, we demonstrate that the Neyman-type intervals have good coverage properties when inference is being made about a sample or a population. In some cases the alternative intervals are much wider than necessary on average. Therefore, we recommend that researchers consider using our Neyman-type confidence intervals when carrying out inference about a sample or a population as it may provide them with more

  19. Dynamical nonlocal coherent-potential approximation for itinerant electron magnetism.

    PubMed

    Rowlands, D A; Zhang, Yu-Zhong

    2014-11-26

    A dynamical generalisation of the nonlocal coherent-potential approximation is derived based upon the functional integral approach to the interacting electron problem. The free energy is proven to be variational with respect to the self-energy provided a self-consistency condition on a cluster of sites is satisfied. In the present work, calculations are performed within the static approximation and the effect of the nonlocal physics on the formation of the local moment state in a simple model is investigated. The results reveal the importance of the dynamical correlations.

  20. A novel nonparametric confidence interval for differences of proportions for correlated binary data.

    PubMed

    Duan, Chongyang; Cao, Yingshu; Zhou, Lizhi; Tan, Ming T; Chen, Pingyan

    2016-11-16

    Various confidence interval estimators have been developed for differences in proportions resulting from correlated binary data. However, the width of the widely recommended Tango score confidence interval tends to be large, and the computing burden of the exact methods recommended for small-sample data is intensive. A recently proposed rank-based nonparametric method, which treats proportions as special areas under receiver operating characteristic (ROC) curves, provided a new way to construct the confidence interval for a proportion difference on paired data, but its computational complexity limits its application in practice. In this article, we develop a new nonparametric method utilizing the U-statistics approach for comparing two or more correlated areas under ROC curves. The new confidence interval has a simple analytic form with a new estimate of the degrees of freedom of n - 1. It demonstrates good coverage properties and has shorter widths than Tango's interval. This new confidence interval, with the new estimate of the degrees of freedom, also achieves coverage probabilities that improve on those of the rank-based nonparametric confidence interval. Compared with the approximate exact unconditional method, the nonparametric confidence interval demonstrates good coverage properties even in small samples, and it is very easy to implement computationally. This nonparametric procedure is evaluated using simulation studies and illustrated with three real examples. The simplified nonparametric confidence interval is an appealing choice in practice for its ease of use and good performance.

  1. [Confidence interval calculation for small numbers of observations or no observations at all].

    PubMed

    Harari, Gil; Herbst, Shimrit

    2014-05-01

    Confidence interval calculation is a common statistics measure, which is frequently used in the statistical analysis of studies in medicine and life sciences. A confidence interval specifies a range of values within which the unknown population parameter may lie. In most situations, especially those involving normally-distributed data or large samples of data from other distributions, the normal approximation may be used to calculate the confidence interval. But, if the number of observed cases is small or zero, we recommend that the confidence interval be calculated in more appropriate ways. In such cases, for example, in clinical trials where the number of observed adverse events is small, the criterion for approximate normality is calculated. Confidence intervals are calculated with the use of the approximated normal distribution if this criterion is met, and with the use of the exact binomial distribution if not. This article, accompanied by examples, describes the criteria in which the common and known method cannot be used as well as the stages and methods required to calculate confidence intervals in studies with a small number of observations.
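
    A minimal sketch of the two calculations contrasted in this record, written in Python with scipy assumed available: the interval from the normal approximation and the exact (Clopper-Pearson) interval from the binomial distribution. The function names and counts are illustrative, not from the article.

      from scipy.stats import norm, beta

      def normal_ci(k, n, conf=0.95):
          # Wald interval based on the normal approximation to the binomial
          p = k / n
          z = norm.ppf(1 - (1 - conf) / 2)
          half = z * (p * (1 - p) / n) ** 0.5
          return max(0.0, p - half), min(1.0, p + half)

      def exact_ci(k, n, conf=0.95):
          # Clopper-Pearson interval from the exact binomial distribution
          a = 1 - conf
          lo = 0.0 if k == 0 else beta.ppf(a / 2, k, n - k + 1)
          hi = 1.0 if k == n else beta.ppf(1 - a / 2, k + 1, n - k)
          return lo, hi

      # 2 adverse events in 40 subjects: the two intervals differ noticeably
      print(normal_ci(2, 40))   # roughly (0.000, 0.118)
      print(exact_ci(2, 40))    # roughly (0.006, 0.169)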

  2. Bond selective chemistry beyond the adiabatic approximation

    SciTech Connect

    Butler, L.J.

    1993-12-01

    One of the most important challenges in chemistry is to develop predictive ability for the branching between energetically allowed chemical reaction pathways. Such predictive capability, coupled with a fundamental understanding of the important molecular interactions, is essential to the development and utilization of new fuels and the design of efficient combustion processes. Existing transition state and exact quantum theories successfully predict the branching between available product channels for systems in which each reaction coordinate can be adequately described by different paths along a single adiabatic potential energy surface. In particular, unimolecular dissociation following thermal, infrared multiphoton, or overtone excitation in the ground state yields a branching between energetically allowed product channels which can be successfully predicted by the application of statistical theories, i.e., the weakest bond breaks. (The predictions are particularly good for competing reactions when there is no saddle point along the reaction coordinate, as in simple bond fission reactions.) The predicted lack of bond selectivity results from the assumption of rapid internal vibrational energy redistribution and the implicit use of a single adiabatic Born-Oppenheimer potential energy surface for the reaction. However, the adiabatic approximation is not valid for the reaction of a wide variety of energetic materials and organic fuels; coupling between the electronic states of the reacting species plays a key role in determining the selectivity of the chemical reactions induced. The work described below investigated the central role played by coupling between electronic states in polyatomic molecules in determining the selective branching between energetically allowed fragmentation pathways in two key systems.

  3. Estimation of postmortem interval based on colony development time for Anoplolepsis longipes (Hymenoptera: Formicidae).

    PubMed

    Goff, M L; Win, B H

    1997-11-01

    The postmortem interval for a set of human remains discovered inside a metal tool box was estimated using the development time required for a stratiomyid fly (Diptera: Stratiomyidae), Hermetia illucens, in combination with the time required to establish a colony of the ant Anoplolepsis longipes (Hymenoptera: Formicidae) capable of producing alate (winged) reproductives. This analysis resulted in a postmortem interval estimate of 14+ months, with a period of 14-18 months being the most probable time interval. The victim had been missing for approximately 18 months.

  4. Bootstrap confidence intervals in a complex situation: A sequential paired clinical trial

    SciTech Connect

    Morton, S.C.

    1988-06-01

    This paper considers the problem of determining a confidence interval for the difference between two treatments in a simplified sequential paired clinical trial, which is analogous to setting an interval for the drift of a random walk subject to a parabolic stopping boundary. Three bootstrap methods of construction are applied: Efron's accelerated bias-corrected, the DiCiccio-Romano, and the bootstrap-t. The results are compared with a theoretical approximate interval due to Siegmund. Difficulties inherent in the use of these bootstrap methods in complex situations are illustrated. The DiCiccio-Romano method is shown to be the easiest to apply and to work well. 13 refs.
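
    For orientation, here is a minimal Python sketch (numpy assumed) of the bootstrap-t construction in a plain setting, a confidence interval for a mean from i.i.d. data; the sequential stopping boundary of the trial above is what makes the real problem hard and is not modeled here. All values are illustrative.

      import numpy as np

      rng = np.random.default_rng(1)
      data = rng.normal(loc=1.0, scale=2.0, size=30)   # illustrative sample

      n, B = len(data), 2000
      theta = data.mean()
      se = data.std(ddof=1) / np.sqrt(n)

      # bootstrap-t: studentize each resampled estimate
      t_stats = np.empty(B)
      for i in range(B):
          s = rng.choice(data, size=n, replace=True)
          t_stats[i] = (s.mean() - theta) / (s.std(ddof=1) / np.sqrt(n))

      lo_t, hi_t = np.percentile(t_stats, [2.5, 97.5])
      print(theta - hi_t * se, theta - lo_t * se)   # bootstrap-t 95% CI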

  5. Sampling Theory and Confidence Intervals for Effect Sizes: Using ESCI To Illustrate "Bouncing" Confidence Intervals.

    ERIC Educational Resources Information Center

    Du, Yunfei

    This paper discusses the impact of sampling error on the construction of confidence intervals around effect sizes. Sampling error affects the location and precision of confidence intervals. Meta-analytic resampling demonstrates that confidence intervals can haphazardly bounce around the true population parameter. Special software with graphical…

  6. Interval Estimates of Multivariate Effect Sizes: Coverage and Interval Width Estimates under Variance Heterogeneity and Nonnormality

    ERIC Educational Resources Information Center

    Hess, Melinda R.; Hogarty, Kristine Y.; Ferron, John M.; Kromrey, Jeffrey D.

    2007-01-01

    Monte Carlo methods were used to examine techniques for constructing confidence intervals around multivariate effect sizes. Using interval inversion and bootstrapping methods, confidence intervals were constructed around the standard estimate of Mahalanobis distance (D[superscript 2]), two bias-adjusted estimates of D[superscript 2], and Huberty's…

  7. The microanalysis of fixed-interval responding

    PubMed Central

    Gentry, G. David; Weiss, Bernard; Laties, Victor G.

    1983-01-01

    The fixed-interval schedule of reinforcement is one of the more widely studied schedules in the experimental analysis of behavior and is also a common baseline for behavior pharmacology. Despite many intensive studies, the controlling variables and the pattern of behavior engendered are not well understood. The present study examined the microstructure and superstructure of the behavior engendered by a fixed-interval 5- and a fixed-interval 15-minute schedule of food reinforcement in the pigeon. Analysis of performance typical of fixed-interval responding indicated that the scalloped pattern does not result from smooth acceleration in responding, but, rather, from renewed pausing early in the interval. Individual interresponse-time (IRT) analyses provided no evidence of acceleration. There was a strong indication of alternation in shorter-longer IRTs, but these shorter-longer IRTs did not occur at random, reflecting instead a sequential dependency in successive IRTs. Furthermore, early in the interval there was a high relative frequency of short IRTs. Such a pattern of early pauses and short IRTs does not suggest behavior typical of reinforced responding as exemplified by the pattern found near the end of the interval. Thus, behavior from clearly scalloped performance can be classified into three states: postreinforcement pause, interim behavior, and terminal behavior. PMID:16812324

  8. Microanalysis of fixed-interval responding

    SciTech Connect

    Gentry, G.D.; Weiss, B.; Laties, V.G.

    1983-03-01

    The fixed-interval schedule of reinforcement is one of the more widely studied schedules in the experimental analysis of behavior and is also a common baseline for behavior pharmacology. Despite many intensive studies, the controlling variables and the pattern of behavior engendered are not well understood. The present study examined the microstructure and superstructure of the behavior engendered by a fixed-interval 5- and a fixed-interval 15-minute schedule of food reinforcement in the pigeon. Analysis of performance typical of fixed-interval responding indicated that the scalloped pattern does not result from smooth acceleration in responding, but, rather, from renewed pausing early in the interval. Individual interresponse-time (IRT) analyses provided no evidence of acceleration. There was a strong indication of alternation in shorter-longer IRTs, but these shorter-longer IRTs did not occur at random, reflecting instead a sequential dependency in successive IRTs. Furthermore, early in the interval there was a high relative frequency of short IRTs. Such a pattern of early pauses and short IRTs does not suggest behavior typical of reinforced responding as exemplified by the pattern found near the end of the interval. Thus, behavior from clearly scalloped performance can be classified into three states: postreinforcement pause, interim behavior, and terminal behavior. 31 references, 11 figures, 4 tables.

  9. Fast transfer of crossmodal time interval training.

    PubMed

    Chen, Lihan; Zhou, Xiaolin

    2014-06-01

    Sub-second time perception is essential for many important sensory and perceptual tasks including speech perception, motion perception, motor coordination, and crossmodal interaction. This study investigates to what extent the ability to discriminate sub-second time intervals acquired in one sensory modality can be transferred to another modality. To this end, we used perceptual classification of visual Ternus display (Ternus in Psychol Forsch 7:81-136, 1926) to implicitly measure participants' interval perception in pre- and posttests and implemented an intra- or crossmodal sub-second interval discrimination training protocol in between the tests. The Ternus display elicited either an "element motion" or a "group motion" percept, depending on the inter-stimulus interval between the two visual frames. The training protocol required participants to explicitly compare the interval length between a pair of visual, auditory, or tactile stimuli with a standard interval or to implicitly perceive the length of visual, auditory, or tactile intervals by completing a non-temporal task (discrimination of auditory pitch or tactile intensity). Results showed that after fast explicit training of interval discrimination (about 15 min), participants improved their ability to categorize the visual apparent motion in Ternus displays, although the training benefits were mild for visual timing training. However, the benefits were absent for implicit interval training protocols. This finding suggests that the timing ability in one modality can be rapidly acquired and used to improve timing-related performance in another modality and that there may exist a central clock for sub-second temporal processing, although modality-specific perceptual properties may constrain the functioning of this clock.

  10. The Total Interval of a Graph.

    DTIC Science & Technology

    1988-01-01

    definitions for all of these classes. A Husimi tree is a graph for which every block is a clique. A cactus is a graph for which every edge is in at most one... proportion of graphs with n vertices that we can represent with q(n) intervals is at most n^(-2), and this approaches zero as n gets large. Hence the... representations will have relatively few intervals of small depth and relatively many intervals of large depth. It is nevertheless often useful to restrict

  11. Advanced Interval Management: A Benefit Analysis

    NASA Technical Reports Server (NTRS)

    Timer, Sebastian; Peters, Mark

    2016-01-01

    This document is the final report for the NASA Langley Research Center (LaRC)- sponsored task order 'Possible Benefits for Advanced Interval Management Operations.' Under this research project, Architecture Technology Corporation performed an analysis to determine the maximum potential benefit to be gained if specific Advanced Interval Management (AIM) operations were implemented in the National Airspace System (NAS). The motivation for this research is to guide NASA decision-making on which Interval Management (IM) applications offer the most potential benefit and warrant further research.

  12. Learned interval time facilitates associate memory retrieval

    PubMed Central

    van de Ven, Vincent; Kochs, Sarah; Smulders, Fren; De Weerd, Peter

    2017-01-01

    The extent to which time is represented in memory remains underinvestigated. We designed a time paired associate task (TPAT) in which participants implicitly learned cue–time–target associations between cue–target pairs and specific cue–target intervals. During subsequent memory testing, participants showed increased accuracy of identifying matching cue–target pairs if the time interval during testing matched the implicitly learned interval. A control experiment showed that participants had no explicit knowledge about the cue–time associations. We suggest that “elapsed time” can act as a temporal mnemonic associate that can facilitate retrieval of events associated in memory. PMID:28298554

  13. Visual feedback for retuning to just intonation intervals

    NASA Astrophysics Data System (ADS)

    Ayers, R. Dean; Nordquist, Peter R.; Corn, Justin S.

    2005-04-01

    Musicians become used to equal temperament pitch intervals due to their widespread use in tuning pianos and other fixed-pitch instruments. For unaccompanied singing and some other performance situations, a more harmonious blending of sounds can be achieved by shifting to just intonation intervals. Lissajous figures provide immediate and striking visual feedback that emphasizes the frequency ratios and pitch intervals found among the first few members of a single harmonic series. Spirograph patterns (hypotrochoids) are also especially simple for ratios of small whole numbers, and their use for providing feedback to singers has been suggested previously [G. W. Barton, Jr., Am. J. Phys. 44(6), 593-594 (1976)]. A hybrid mixture of these methods for comparing two frequencies generates what appears to be a three-dimensional Lissajous figure: a cylindrical wire mesh that rotates about its tilted vertical axis, with zero tilt yielding the familiar Lissajous figure. Sine wave inputs work best, but the sounds of flute, recorder, whistling, and a sung "oo" are good enough approximations to work well. This initial study compares the three modes of presentation in terms of the ease with which a singer can obtain a desired pattern and recognize its shape.
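
    As a rough illustration of the feedback described above, the following Python sketch (numpy assumed) generates the two signals whose x-y trace forms a Lissajous figure for a just-intonation perfect fifth; the frequencies and phase offset are illustrative choices.

      import numpy as np

      f1, f2 = 330.0, 220.0          # Hz; ratio 3:2, a just perfect fifth
      t = np.linspace(0.0, 0.05, 2000)
      x = np.sin(2 * np.pi * f1 * t)
      y = np.sin(2 * np.pi * f2 * t + np.pi / 4)  # phase offset tilts the figure

      # Plotting x against y (e.g. with matplotlib) shows a closed curve;
      # mistuning f1 slightly makes the curve precess instead of closing.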

  14. Robust inter-beat interval estimation in cardiac vibration signals.

    PubMed

    Brüser, C; Winter, S; Leonhardt, S

    2013-02-01

    Reliable and accurate estimation of instantaneous frequencies of physiological rhythms, such as heart rate, is critical for many healthcare applications. Robust estimation is especially challenging when novel unobtrusive sensors are used for continuous health monitoring in uncontrolled environments, because these sensors can create significant amounts of potentially unreliable data. We propose a new flexible algorithm for the robust estimation of local (beat-to-beat) intervals from cardiac vibration signals, specifically ballistocardiograms (BCGs), recorded by an unobtrusive bed-mounted sensor. This sensor allows the measurement of motions of the body which are caused by cardiac activity. Our method requires neither a training phase nor any prior knowledge about the morphology of the heart beats in the analyzed waveforms. Instead, three short-time estimators are combined using a Bayesian approach to continuously estimate the inter-beat intervals. We have validated our method on overnight BCG recordings from 33 subjects (8 normal, 25 insomniacs). On this dataset, containing approximately one million heart beats, our method achieved a mean beat-to-beat interval error of 0.78% with a coverage of 72.69%.

  15. Approximate Green's function methods for HZE transport in multilayered materials

    NASA Technical Reports Server (NTRS)

    Wilson, John W.; Badavi, Francis F.; Shinn, Judy L.; Costen, Robert C.

    1993-01-01

    A nonperturbative analytic solution of the high charge and energy (HZE) Green's function is used to implement a computer code for laboratory ion beam transport in multilayered materials. The code is established to operate on the Langley nuclear fragmentation model used in engineering applications. Computational procedures are established to generate linear energy transfer (LET) distributions for a specified ion beam and target for comparison with experimental measurements. The code was found to be highly efficient and compared well with the perturbation approximation.

  16. Signal Approximation with a Wavelet Neural Network

    DTIC Science & Technology

    1992-12-01

    specialized electronic devices like the Intel Electronically Trainable Analog Neural Network (ETANN) chip. The WNN representation allows the... accurately approximated with a WNN trained with irregularly sampled data. Keywords: signal approximation, wavelet neural network.

  17. Efficient Computation Of Confidence Intervals Of Parameters

    NASA Technical Reports Server (NTRS)

    Murphy, Patrick C.

    1992-01-01

    Study focuses on obtaining efficient algorithm for estimation of confidence intervals of ML estimates. Four algorithms selected to solve associated constrained optimization problem. Hybrid algorithms, following search and gradient approaches, prove best.

  18. Application of Interval Analysis to Error Control.

    DTIC Science & Technology

    1976-09-01

    We give simple examples of ways in which interval arithmetic can be used to detect instabilities in computer algorithms, roundoff error accumulation, and even the effects of hardware inadequacies. This paper is primarily tutorial. (Author)

  19. Intact Interval Timing in Circadian CLOCK Mutants

    PubMed Central

    Cordes, Sara; Gallistel, C. R.

    2008-01-01

    While progress has been made in determining the molecular basis for the circadian clock, the mechanism by which mammalian brains time intervals measured in seconds to minutes remains a mystery. An obvious question is whether the interval timing mechanism shares molecular machinery with the circadian timing mechanism. In the current study, we trained circadian CLOCK +/− and −/− mutant male mice in a peak-interval procedure with 10- and 20-s criteria. The mutant mice were more active than their wild-type littermates, but there were no reliable deficits in the accuracy or precision of their timing as compared with wild-type littermates. This suggests that expression of the CLOCK protein is not necessary for normal interval timing. PMID:18602902

  20. A robust measure of food web intervality

    PubMed Central

    Stouffer, Daniel B.; Camacho, Juan; Amaral, Luís A. Nunes

    2006-01-01

    Intervality of a food web is related to the number of trophic dimensions characterizing the niches in a community. We introduce here a mathematically robust measure for food web intervality. It has previously been noted that empirical food webs are not strictly interval; however, upon comparison to suitable null hypotheses, we conclude that empirical food webs actually do exhibit a strong bias toward contiguity of prey, that is, toward intervality. Further, our results strongly suggest that empirically observed species and their diets can be mapped onto a single dimension. This finding validates a critical assumption in the recently proposed static niche model and provides guidance for ongoing efforts to develop dynamic models of ecosystems. PMID:17146055

  1. An approximation technique for jet impingement flow

    SciTech Connect

    Najafi, Mahmoud; Fincher, Donald; Rahni, Taeibi; Javadi, KH.; Massah, H.

    2015-03-10

    The analytical approximate solution of a non-linear jet impingement flow model will be demonstrated. We will show that this is an improvement over the series approximation obtained via the Adomian decomposition method, which is itself a powerful method for analysing non-linear differential equations. The results of these approximations will be compared to the Runge-Kutta approximation in order to demonstrate their validity.

  2. Interval and contour processing in autism.

    PubMed

    Heaton, Pamela

    2005-12-01

    High functioning children with autism and age and intelligence matched controls participated in experiments testing perception of pitch intervals and musical contours. The finding from the interval study showed superior detection of pitch direction over small pitch distances in the autism group. On the test of contour discrimination no group differences emerged. These findings confirm earlier studies showing facilitated pitch processing and a preserved ability to represent small-scale musical structures in autism.

  3. Periodicity In The Intervals Between Primes

    DTIC Science & Technology

    2015-07-02

    We study the positive intervals among the first n ≤ 10^6 prime numbers as a probe of the global nature of the sequence of primes. A statistically strong periodicity is identified in the counting function giving the total number of intervals of a certain size. Let x = x1, x2, ... be an increasing sequence of real numbers, which may be either finite or infinitely long.

  4. Non-ideal boson system in the Gaussian approximation

    SciTech Connect

    Tommasini, P.R.; de Toledo Piza, A.F.

    1997-01-01

    We investigate ground-state and thermal properties of a system of non-relativistic bosons interacting through repulsive, two-body interactions in a self-consistent Gaussian mean-field approximation which consists in writing the variationally determined density operator as the most general Gaussian functional of the quantized field operators. Finite temperature results are obtained in a grand canonical framework. Contact is made with the results of Lee, Yang, and Huang in terms of particular truncations of the Gaussian approximation. The full Gaussian approximation supports a free phase or a thermodynamically unstable phase when contact forces and a standard renormalization scheme are used. When applied to a Hamiltonian with zero range forces interpreted as an effective theory with a high momentum cutoff, the full Gaussian approximation generates a quasi-particle spectrum having an energy gap, in conflict with perturbation theory results. © 1997 Academic Press, Inc.

  5. Approximating the Helium Wavefunction in Positronium-Helium Scattering

    NASA Technical Reports Server (NTRS)

    DiRienzi, Joseph; Drachman, Richard J.

    2003-01-01

    In the Kohn variational treatment of the positronium-hydrogen scattering problem the scattering wave function is approximated by an expansion in some appropriate basis set, but the target and projectile wave functions are known exactly. In the positronium-helium case, however, a difficulty immediately arises in that the wave function of the helium target atom is not known exactly, and there are several ways to deal with the associated eigenvalue in formulating the variational scattering equations to be solved. In this work we will use the Kohn variational principle in the static exchange approximation to determine the zero-energy scattering length for the Ps-He system, using a suite of approximate target functions. The results we obtain will be compared with each other and with corresponding values found by other approximation techniques.

  6. Approximate Approaches to the One-Dimensional Finite Potential Well

    ERIC Educational Resources Information Center

    Singh, Shilpi; Pathak, Praveen; Singh, Vijay A.

    2011-01-01

    The one-dimensional finite well is a textbook problem. We propose approximate approaches to obtain the energy levels of the well. The finite well is also encountered in semiconductor heterostructures where the carrier mass inside the well (m[subscript i]) is taken to be distinct from mass outside (m[subscript o]). A relevant parameter is the mass…

  7. Analytic Approximations for the Extrapolation of Lattice Data

    SciTech Connect

    Masjuan, Pere

    2010-12-22

    We present analytic approximations of chiral SU(3) amplitudes for the extrapolation of lattice data to the physical masses and the determination of Next-to-Next-to-Leading-Order low-energy constants. Lattice data for the ratio F_K/F_π is used to test the method.

  8. Compressive Imaging via Approximate Message Passing

    DTIC Science & Technology

    2015-09-04

    We propose novel compressive imaging algorithms that employ approximate message passing (AMP), an iterative signal estimation algorithm that... Keywords: approximate message passing, compressive imaging, compressive sensing, hyperspectral imaging, signal reconstruction.

  9. Fractal Trigonometric Polynomials for Restricted Range Approximation

    NASA Astrophysics Data System (ADS)

    Chand, A. K. B.; Navascués, M. A.; Viswanathan, P.; Katiyar, S. K.

    2016-05-01

    One-sided approximation tackles the problem of approximation of a prescribed function by simple traditional functions such as polynomials or trigonometric functions that lie completely above or below it. In this paper, we use the concept of fractal interpolation function (FIF), precisely of fractal trigonometric polynomials, to construct one-sided uniform approximants for some classes of continuous functions.

  10. Assessment of Interval Data and Their Potential Application to Residential Electricity End-Use Modeling, An

    EIA Publications

    2015-01-01

    The Energy Information Administration (EIA) is investigating the potential benefits of incorporating interval electricity data into its residential energy end use models. This includes interval smart meter and submeter data from utility assets and systems. It is expected that these data will play a significant role in informing residential energy efficiency policies in the future. Therefore, a long-term strategy for improving the RECS end-use models will not be complete without an investigation of the current state of affairs of submeter data, including their potential for use in the context of residential building energy modeling.

  11. Probability Distribution for Flowing Interval Spacing

    SciTech Connect

    S. Kuzio

    2004-09-22

    Fracture spacing is a key hydrologic parameter in analyses of matrix diffusion. Although the individual fractures that transmit flow in the saturated zone (SZ) cannot be identified directly, it is possible to determine the fractured zones that transmit flow from flow meter survey observations. The fractured zones that transmit flow, as identified through borehole flow meter surveys, have been defined in this report as flowing intervals. The flowing interval spacing is measured between the midpoints of each flowing interval. The determination of flowing interval spacing is important because the flowing interval spacing parameter is a key hydrologic parameter in SZ transport modeling, which impacts the extent of matrix diffusion in the SZ volcanic matrix. The output of this report is input to the "Saturated Zone Flow and Transport Model Abstraction" (BSC 2004 [DIRS 170042]). Specifically, the analysis of data and development of a data distribution reported herein is used to develop the uncertainty distribution for the flowing interval spacing parameter for the SZ transport abstraction model. Figure 1-1 shows the relationship of this report to other model reports that also pertain to flow and transport in the SZ. Figure 1-1 also shows the flow of key information among the SZ reports. It should be noted that Figure 1-1 does not contain a complete representation of the data and parameter inputs and outputs of all SZ reports, nor does it show inputs external to this suite of SZ reports. Use of the developed flowing interval spacing probability distribution is subject to the limitations of the assumptions discussed in Sections 5 and 6 of this analysis report. The number of fractures in a flowing interval is not known. Therefore, the flowing intervals are assumed to be composed of one flowing zone in the transport simulations. This analysis may overestimate the flowing interval spacing because the number of fractures that contribute to a flowing interval cannot be determined.
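
    As a small illustration of the spacing definition used in this report, the following Python sketch (numpy assumed) computes spacings between midpoints of flowing intervals; the depth values are invented for illustration.

      import numpy as np

      # (top, bottom) depths, in meters, of intervals that transmit flow;
      # values are invented for illustration only
      flowing_intervals = [(120.0, 128.0), (161.0, 163.5), (240.0, 251.0)]

      midpoints = np.array([(t + b) / 2 for t, b in flowing_intervals])
      spacings = np.diff(midpoints)        # spacing between midpoints
      print(spacings)                      # [38.25 83.25]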

  12. Multifactor analysis of multiscaling in volatility return intervals

    NASA Astrophysics Data System (ADS)

    Wang, Fengzhong; Yamasaki, Kazuko; Havlin, Shlomo; Stanley, H. Eugene

    2009-01-01

    We study the volatility time series of the 1137 most traded stocks in the U.S. stock markets for the two-year period 2001-2002 and analyze their return intervals τ, which are time intervals between volatilities above a given threshold q. We explore the probability density function of τ, P_q(τ), assuming a stretched exponential function, P_q(τ) ~ exp(-τ^γ). We find that the exponent γ depends on the threshold in the range between q = 1 and 6 standard deviations of the volatility. This finding supports the multiscaling nature of the return interval distribution. To better understand the multiscaling origin, we study how γ depends on four essential factors: capitalization, risk, number of trades, and return. We show that γ depends on the capitalization, risk, and return but almost does not depend on the number of trades. This suggests that γ relates to portfolio selection but not to market activity. To further characterize the multiscaling of individual stocks, we fit the moments of τ, μ_m ≡ ⟨(τ/⟨τ⟩)^m⟩^{1/m}, in the range 10 < ⟨τ⟩ ≤ 100 by a power law, μ_m ~ ⟨τ⟩^δ. The exponent δ is found also to depend on the capitalization, risk, and return but not on the number of trades, and its tendency is opposite to that of γ. Moreover, we show that δ decreases with increasing γ approximately by a linear relation. The return intervals demonstrate the temporal structure of volatilities, and our findings suggest that their multiscaling features may be helpful for portfolio optimization.
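
    A brief Python sketch (numpy assumed) of the quantities defined in this record: return intervals τ above a threshold q and the scaled moments μ_m. The volatility series here is synthetic, for illustration only.

      import numpy as np

      rng = np.random.default_rng(0)
      vol = np.abs(rng.standard_normal(100_000))  # stand-in volatility series
      q = 2.0                                     # threshold in std. deviations

      exceed = np.flatnonzero(vol > q * vol.std())
      tau = np.diff(exceed)                       # return intervals

      # scaled moments mu_m = <(tau/<tau>)^m>^(1/m) probe multiscaling
      for m in (1, 2, 4):
          mu_m = np.mean((tau / tau.mean()) ** m) ** (1.0 / m)
          print(m, mu_m)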

  13. Approximate von Neumann entropy for directed graphs.

    PubMed

    Ye, Cheng; Wilson, Richard C; Comin, César H; Costa, Luciano da F; Hancock, Edwin R

    2014-05-01

    In this paper, we develop an entropy measure for assessing the structural complexity of directed graphs. Although there are many existing alternative measures for quantifying the structural properties of undirected graphs, there are relatively few corresponding measures for directed graphs. To fill this gap in the literature, we explore an alternative technique that is applicable to directed graphs. We commence by using Chung's generalization of the Laplacian of a directed graph to extend the computation of von Neumann entropy from undirected to directed graphs. We provide a simplified form of the entropy which can be expressed in terms of simple node in-degree and out-degree statistics. Moreover, we find approximate forms of the von Neumann entropy that apply to both weakly and strongly directed graphs, and that can be used to characterize network structure. We illustrate the usefulness of these simplified entropy forms defined in this paper on both artificial and real-world data sets, including structures from protein databases and high energy physics theory citation networks.
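
    For orientation, the following Python sketch (numpy assumed) computes the spectral von Neumann entropy in the undirected case, the baseline that the paper generalizes to directed graphs via Chung's Laplacian; the example graph is arbitrary.

      import numpy as np

      A = np.array([[0, 1, 1, 0],
                    [1, 0, 1, 0],
                    [1, 1, 0, 1],
                    [0, 0, 1, 0]], dtype=float)   # undirected adjacency

      d = A.sum(axis=1)
      D = np.diag(1.0 / np.sqrt(d))
      L = np.eye(len(A)) - D @ A @ D               # normalized Laplacian

      lam = np.linalg.eigvalsh(L) / len(A)         # spectrum scaled to sum to 1
      S = -sum(x * np.log(x) for x in lam if x > 1e-12)
      print(S)                                     # von Neumann entropy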

  14. Magnetic reconnection under anisotropic magnetohydrodynamic approximation

    SciTech Connect

    Hirabayashi, K.; Hoshino, M.

    2013-11-15

    We study the formation of slow-mode shocks in collisionless magnetic reconnection by using one- and two-dimensional collisionless MHD codes based on the double adiabatic approximation and the Landau closure model. We bridge the gap between the Petschek-type MHD reconnection model accompanied by a pair of slow shocks and the observational evidence of the rare occasion of in-situ slow shock observations. Our results showed that once magnetic reconnection takes place, a firehose-sense (p_∥ > p_⊥) pressure anisotropy arises in the downstream region, and the generated slow shocks are quite weak compared with those in an isotropic MHD. In spite of the weakness of the shocks, however, the resultant reconnection rate is 10%–30% higher than that in an isotropic case. This result implies that the slow shock does not necessarily play an important role in the energy conversion in the reconnection system and is consistent with the satellite observation in the Earth's magnetosphere.

  15. Combination of the pair density approximation and the Takahashi–Imada approximation for path integral Monte Carlo simulations

    SciTech Connect

    Zillich, Robert E.

    2015-11-15

    We construct an accurate imaginary time propagator for path integral Monte Carlo simulations for heterogeneous systems consisting of a mixture of atoms and molecules. We combine the pair density approximation, which is highly accurate but feasible only for the isotropic interactions between atoms, with the Takahashi–Imada approximation for general interactions. We present finite temperature simulation results for the energy and structure of molecule–helium clusters X(^4He)_20 (X = HCCH and LiH), which show a marked improvement over the Trotter approximation, which has a 2nd-order time step bias. We show that the 4th-order corrections of the Takahashi–Imada approximation can also be applied perturbatively to a 2nd-order simulation.

  16. An Approximate Approach to Automatic Kernel Selection.

    PubMed

    Ding, Lizhong; Liao, Shizhong

    2016-02-02

    Kernel selection is a fundamental problem of kernel-based learning algorithms. In this paper, we propose an approximate approach to automatic kernel selection for regression from the perspective of kernel matrix approximation. We first introduce multilevel circulant matrices into automatic kernel selection, and develop two approximate kernel selection algorithms by exploiting the computational virtues of multilevel circulant matrices. The complexity of the proposed algorithms is quasi-linear in the number of data points. Then, we prove an approximation error bound to measure the effect of the approximation in kernel matrices by multilevel circulant matrices on the hypothesis and further show that the approximate hypothesis produced with multilevel circulant matrices converges to the accurate hypothesis produced with kernel matrices. Experimental evaluations on benchmark datasets demonstrate the effectiveness of approximate kernel selection.
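
    The computational virtue of circulant structure mentioned in this record can be illustrated with a Python sketch (numpy assumed): on a uniform one-dimensional grid, a circulant surrogate of a translation-invariant kernel matrix is diagonalized by the FFT. This illustrates the computational idea only, not the authors' multilevel kernel-selection algorithm.

      import numpy as np

      n = 256
      x = np.arange(n) / n
      gamma = 50.0

      # circulant surrogate: Gaussian kernel of periodic distances to x[0]
      d = np.minimum(np.abs(x - x[0]), 1.0 - np.abs(x - x[0]))
      first_col = np.exp(-gamma * d ** 2)
      eig_fft = np.fft.fft(first_col).real     # circulant spectrum, O(n log n)

      K = np.exp(-gamma * (x[:, None] - x[None, :]) ** 2)
      eig_full = np.linalg.eigvalsh(K)         # exact spectrum, O(n^3)

      print(np.sort(eig_fft)[-3:])             # largest eigenvalues...
      print(eig_full[-3:])                     # ...approximately agree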

  17. Approximate Confidence Intervals for Moment-Based Estimators of the Between-Study Variance in Random Effects Meta-Analysis

    ERIC Educational Resources Information Center

    Jackson, Dan; Bowden, Jack; Baker, Rose

    2015-01-01

    Moment-based estimators of the between-study variance are very popular when performing random effects meta-analyses. This type of estimation has many advantages including computational and conceptual simplicity. Furthermore, by using these estimators in large samples, valid meta-analyses can be performed without the assumption that the treatment…
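
    The best-known estimator in this family is the DerSimonian-Laird moment estimator of the between-study variance; a minimal Python sketch follows (numpy assumed; the effect estimates and variances are invented for illustration).

      import numpy as np

      # illustrative study effect estimates y and within-study variances v
      y = np.array([0.10, 0.30, 0.35, 0.65, 0.45, 0.15])
      v = np.array([0.030, 0.025, 0.050, 0.010, 0.040, 0.020])

      w = 1.0 / v                               # fixed-effect weights
      y_bar = np.sum(w * y) / np.sum(w)
      Q = np.sum(w * (y - y_bar) ** 2)          # Cochran's Q statistic
      k = len(y)

      # DerSimonian-Laird moment estimator of the between-study variance
      tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
      print(tau2)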

  18. Experimental Design for Stochastic Models of Nonlinear Signaling Pathways Using an Interval-Wise Linear Noise Approximation and State Estimation

    PubMed Central

    Zimmer, Christoph

    2016-01-01

    Background Computational modeling is a key technique for analyzing models in systems biology. There are well-established methods for the estimation of the kinetic parameters in models of ordinary differential equations (ODE). Experimental design techniques aim at devising experiments that maximize the information encoded in the data. For ODE models there are well-established approaches for experimental design and even software tools. However, data from single cell experiments on signaling pathways in systems biology often show intrinsic stochastic effects, prompting the development of specialized methods. While simulation methods have been developed for decades and parameter estimation has been targeted in recent years, only very few articles focus on experimental design for stochastic models. Methods The Fisher information matrix is the central measure for experimental design, as it evaluates the information an experiment provides for parameter estimation. This article suggests an approach to calculate a Fisher information matrix for models containing intrinsic stochasticity and high nonlinearity. The approach makes use of a recently suggested multiple shooting for stochastic systems (MSS) objective function. The Fisher information matrix is calculated by evaluating pseudo data with the MSS technique. Results The performance of the approach is evaluated with simulation studies on an Immigration-Death, a Lotka-Volterra, and a Calcium oscillation model. The Calcium oscillation model is a particularly appropriate case study as it contains the challenges inherent to signaling pathways: high nonlinearity, intrinsic stochasticity, a qualitatively different behavior from an ODE solution, and partial observability. The computational speed of the MSS approach for the Fisher information matrix allows for an application in realistic size models. PMID:27583802

  19. VERIFICATION OF THE INL/COMBINE7 NEUTRON ENERGY SPECTRUM CODE

    SciTech Connect

    Barry D. Ganapol; Woo Y. Yoon; David W. Nigg

    2008-09-01

    We construct semi-analytic benchmarks for the neutron slowing-down equations in the thermal, resonance, and fast energy regimes through mathematical embedding. The method features fictitious time-dependent slowing-down equations solved via Taylor series expansion over discrete “time” intervals. Two classes of benchmarks are considered: the first treats methods of solution and the second the multigroup approximation itself. We present several meaningful benchmark comparisons with the COMBINE7 energy spectrum code and a simple demonstration of convergence of the multigroup approximation.

  20. Confidence interval construction for proportion difference in small-sample paired studies.

    PubMed

    Tang, Man-Lai; Tang, Nian-Sheng; Chan, Ivan S F

    2005-12-15

    Paired dichotomous data may arise in clinical trials such as pre-/post-test comparison studies and equivalence trials. Reporting parameter estimates (e.g. odds ratio, rate difference and rate ratio) along with their associated confidence interval estimates becomes a necessity in many medical journals. Various asymptotic confidence interval estimators have long been developed for differences in correlated binary proportions. Nevertheless, the performance of these asymptotic methods may have poor coverage properties in small samples. In this article, we investigate several alternative confidence interval estimators for the difference between binomial proportions based on small-sample paired data. Specifically, we consider exact and approximate unconditional confidence intervals for rate difference via inverting a score test. The exact unconditional confidence interval guarantees the coverage probability, and it is recommended if strict control of coverage probability is required. However, the exact method tends to be overly conservative and computationally demanding. Our empirical results show that the approximate unconditional score confidence interval estimators based on inverting the score test demonstrate reasonably good coverage properties even in small-sample designs, and yet they are relatively easy to implement computationally. We illustrate the methods using real examples from a pain management study and a cancer study.
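
    For orientation, a Python sketch (scipy assumed) of the simple Wald interval for a paired proportion difference; the score-based and exact unconditional intervals studied in this article are more involved and behave better in small samples. The counts are illustrative.

      from scipy.stats import norm

      def paired_wald_ci(b, c, n, conf=0.95):
          # b, c: discordant pair counts (yes/no and no/yes); n: total pairs
          d = (b - c) / n
          se = ((b + c) - (b - c) ** 2 / n) ** 0.5 / n
          z = norm.ppf(1 - (1 - conf) / 2)
          return d - z * se, d + z * se

      print(paired_wald_ci(b=8, c=2, n=50))   # illustrative counts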

  1. The effect of inter-set rest intervals on resistance exercise-induced muscle hypertrophy.

    PubMed

    Henselmans, Menno; Schoenfeld, Brad J

    2014-12-01

    Due to a scarcity of longitudinal trials directly measuring changes in muscle girth, previous recommendations for inter-set rest intervals in resistance training programs designed to stimulate muscular hypertrophy were primarily based on the post-exercise endocrinological response and other mechanisms theoretically related to muscle growth. New research regarding the effects of inter-set rest interval manipulation on resistance training-induced muscular hypertrophy is reviewed here to evaluate current practices and provide directions for future research. Of the studies measuring long-term muscle hypertrophy in groups employing different rest intervals, none have found superior muscle growth in the shorter compared with the longer rest interval group and one study has found the opposite. Rest intervals less than 1 minute can result in acute increases in serum growth hormone levels and these rest intervals also decrease the serum testosterone to cortisol ratio. Long-term adaptations may abate the post-exercise endocrinological response and the relationship between the transient change in hormonal production and chronic muscular hypertrophy is highly contentious and appears to be weak. The relationship between the rest interval-mediated effect on immune system response, muscle damage, metabolic stress, or energy production capacity and muscle hypertrophy is still ambiguous and largely theoretical. In conclusion, the literature does not support the hypothesis that training for muscle hypertrophy requires shorter rest intervals than training for strength development or that predetermined rest intervals are preferable to auto-regulated rest periods in this regard.

  2. Sunspot Time Series: Passive and Active Intervals

    NASA Astrophysics Data System (ADS)

    Zięba, S.; Nieckarz, Z.

    2014-07-01

    Solar activity slowly and irregularly decreases from the first spotless day (FSD) in the declining phase of the old sunspot cycle and systematically, but also in an irregular way, increases to the new cycle maximum after the last spotless day (LSD). The time interval between the first and the last spotless day can be called the passive interval (PI), while the time interval from the last spotless day to the first one after the new cycle maximum is the related active interval (AI). Minima of solar cycles are inside PIs, while maxima are inside AIs. In this article, we study the properties of passive and active intervals to determine the relation between them. We have found that some properties of PIs, and related AIs, differ significantly between two group of solar cycles; this has allowed us to classify Cycles 8 - 15 as passive cycles, and Cycles 17 - 23 as active ones. We conclude that the solar activity in the PI declining phase (a descending phase of the previous cycle) determines the strength of the approaching maximum in the case of active cycles, while the activity of the PI rising phase (a phase of the ongoing cycle early growth) determines the strength of passive cycles. This can have implications for solar dynamo models. Our approach indicates the important role of solar activity during the declining and the rising phases of the solar-cycle minimum.

  3. Approximate approaches to the one-dimensional finite potential well

    NASA Astrophysics Data System (ADS)

    Singh, Shilpi; Pathak, Praveen; Singh, Vijay A.

    2011-11-01

    The one-dimensional finite well is a textbook problem. We propose approximate approaches to obtain the energy levels of the well. The finite well is also encountered in semiconductor heterostructures where the carrier mass inside the well (m_i) is taken to be distinct from the mass outside (m_o). A relevant parameter is the mass discontinuity ratio β = m_i/m_o. To correctly account for the mass discontinuity, we apply the BenDaniel-Duke boundary condition. We obtain approximate solutions for two cases: when the well is shallow and when the well is deep. We compare the approximate results with the exact results and find that higher-order approximations are quite robust. For the shallow case, the approximate solution can be expressed in terms of a dimensionless parameter σ_l = 2 m_o V_0 L^2 / ħ^2 (or σ = β^2 σ_l for the deep case). We show that the lowest-order results are related by a duality transform. We also discuss how the energy scales with L (E ~ 1/L^γ) and obtain the exponent γ. Exponent γ → 2 when the well is sufficiently deep and β → 1. The ratio of the masses dictates the physics. Our presentation is pedagogical and should be useful to students on a first course on elementary quantum mechanics or low-dimensional semiconductors.
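
    A Python sketch (numpy and scipy assumed) of the exact even-parity bound-state condition under the BenDaniel-Duke boundary condition, against which the approximate solutions above can be checked; the well parameters are illustrative.

      import numpy as np
      from scipy.optimize import brentq

      V0, L = 10.0, 2.0        # well depth and width (hbar = 1); illustrative
      m_i, m_o = 0.7, 1.0      # masses inside and outside, beta = m_i / m_o

      def f(E):
          # even-parity matching: (k/m_i) tan(k L / 2) = kappa / m_o,
          # from continuity of psi and (1/m) dpsi/dx (BenDaniel-Duke)
          k = np.sqrt(2.0 * m_i * E)
          kappa = np.sqrt(2.0 * m_o * (V0 - E))
          return (k / m_i) * np.tan(k * L / 2.0) - kappa / m_o

      E = np.linspace(1e-6, V0 - 1e-6, 20000)
      v = f(E)
      levels = [brentq(f, a, b) for a, b, fa, fb in zip(E[:-1], E[1:], v[:-1], v[1:])
                if fa * fb < 0 and abs(fa) + abs(fb) < 1e3]   # skip tan() poles
      print(levels)   # even-parity energies; odd levels use -cot in place of tan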

  4. On the Effective Construction of Compactly Supported Wavelets Satisfying Homogenous Boundary Conditions on the Interval

    NASA Technical Reports Server (NTRS)

    Chiavassa, G.; Liandrat, J.

    1996-01-01

    We construct compactly supported wavelet bases satisfying homogeneous boundary conditions on the interval (0,1). The main features of multiresolution analysis on the line are retained, including polynomial approximation and tree algorithms. The case of H_0^1(0,1) is detailed, and numerical values, required for the implementation, are provided for the Neumann and Dirichlet boundary conditions.

  5. The Distribution of Phonated Intervals in the Speech of Individuals Who Stutter

    ERIC Educational Resources Information Center

    Godinho, Tara; Ingham, Roger J.; Davidow, Jason; Cotton, John

    2006-01-01

    Purpose: Previous research has demonstrated the fluency-improving effect of reducing the occurrence of short-duration, phonated intervals (PIs; approximately 30-150 ms) in individuals who stutter, prompting the hypothesis that PIs in these individuals' speech are not distributed normally, particularly in the short PI ranges. It has also been…

  6. Optimal and Most Exact Confidence Intervals for Person Parameters in Item Response Theory Models

    ERIC Educational Resources Information Center

    Doebler, Anna; Doebler, Philipp; Holling, Heinz

    2013-01-01

    The common way to calculate confidence intervals for item response theory models is to assume that the standardized maximum likelihood estimator for the person parameter [theta] is normally distributed. However, this approximation is often inadequate for short and medium test lengths. As a result, the coverage probabilities fall below the given…
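
    A minimal Python sketch (numpy and scipy assumed) of the Wald construction being critiqued: a normal-theory confidence interval for θ from the test information under a Rasch model with known item difficulties; all values are illustrative.

      import numpy as np
      from scipy.optimize import brentq
      from scipy.stats import norm

      b = np.array([-1.5, -0.5, 0.0, 0.8, 1.6])    # Rasch item difficulties
      x = np.array([1, 1, 0, 1, 0])                # illustrative responses

      def score_eq(theta):
          p = 1.0 / (1.0 + np.exp(-(theta - b)))
          return np.sum(x - p)                     # likelihood score function

      theta_hat = brentq(score_eq, -6.0, 6.0)      # maximum likelihood estimate
      p = 1.0 / (1.0 + np.exp(-(theta_hat - b)))
      info = np.sum(p * (1.0 - p))                 # test information I(theta)
      half = norm.ppf(0.975) / np.sqrt(info)
      print(theta_hat - half, theta_hat + half)    # Wald CI critiqued above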

  7. Revised Thomas-Fermi approximation for singular potentials

    NASA Astrophysics Data System (ADS)

    Dufty, James W.; Trickey, S. B.

    2016-08-01

    Approximations for the many-fermion free-energy density functional that include the Thomas-Fermi (TF) form for the noninteracting part lead to singular densities for singular external potentials (e.g., attractive Coulomb). This limitation of the TF approximation is addressed here by a formal map of the exact Euler equation for the density onto an equivalent TF form characterized by a modified Kohn-Sham potential. It is shown to be a "regularized" version of the Kohn-Sham potential, tempered by convolution with a finite-temperature response function. The resulting density is nonsingular, with the equilibrium properties obtained from the total free-energy functional evaluated at this density. This new representation is formally exact. Approximate expressions for the regularized potential are given to leading order in a nonlocality parameter, and the limiting behavior at high and low temperatures is described. The noninteracting part of the free energy in this approximation is the usual Thomas-Fermi functional. These results generalize and extend to finite temperatures the ground-state regularization by R. G. Parr and S. Ghosh [Proc. Natl. Acad. Sci. U.S.A. 83, 3577 (1986), 10.1073/pnas.83.11.3577] and by L. R. Pratt, G. G. Hoffman, and R. A. Harris [J. Chem. Phys. 88, 1818 (1988), 10.1063/1.454105] and formally systematize the finite-temperature regularization given by the latter authors.

  8. Evaluating the Accuracy of Hessian Approximations for Direct Dynamics Simulations.

    PubMed

    Zhuang, Yu; Siebert, Matthew R; Hase, William L; Kay, Kenneth G; Ceotto, Michele

    2013-01-08

    Direct dynamics simulations are a very useful and general approach for studying the atomistic properties of complex chemical systems, since an electronic structure theory representation of a system's potential energy surface is possible without the need for fitting an analytic potential energy function. In this paper, recently introduced compact finite difference (CFD) schemes for approximating the Hessian [J. Chem. Phys. 2010, 133, 074101] are tested by employing the monodromy matrix equations of motion. Several systems, including carbon dioxide and benzene, are simulated, using both analytic potential energy surfaces and on-the-fly direct dynamics. The results show, depending on the molecular system, that electronic structure theory Hessian direct dynamics can be accelerated by up to 2 orders of magnitude. The CFD approximation is found to be robust enough to deal with chaotic motion, concomitant with floppy and stiff mode dynamics, Fermi resonances, and other kinds of molecular couplings. Finally, the CFD approximations allow parametrical tuning of different CFD parameters to attain the best possible accuracy for different molecular systems. Thus, a direct dynamics simulation requiring the Hessian at every integration step may be replaced with approximate Hessian updating tuned to the appropriate accuracy.
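
    For orientation, the simplest flavor of the idea is a Hessian built from finite differences of the potential. The sketch below is a generic central-difference stencil, not the paper's compact finite difference (CFD) schemes; the quadratic test potential is invented for the demo.

        import numpy as np

        def fd_hessian(f, x, eps=1e-4):
            # Central-difference Hessian of a scalar potential f at point x.
            n = len(x)
            H = np.zeros((n, n))
            for i in range(n):
                for j in range(i, n):
                    ei, ej = np.zeros(n), np.zeros(n)
                    ei[i], ej[j] = eps, eps
                    H[i, j] = (f(x + ei + ej) - f(x + ei - ej)
                               - f(x - ei + ej) + f(x - ei - ej)) / (4.0 * eps**2)
                    H[j, i] = H[i, j]  # the Hessian is symmetric
            return H

        # Quadratic bowl: the exact Hessian is [[2, 0], [0, 6]]
        f = lambda x: x[0] ** 2 + 3.0 * x[1] ** 2
        print(fd_hessian(f, np.array([0.3, -0.2])))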

  9. An Approximation to the True Ability Distribution in the Binomial Error Model and Applications. Research Memorandum 79-5.

    ERIC Educational Resources Information Center

    Huynh, Huynh; Mandeville, Garrett K.

    Assuming that the density p of the true ability theta in the binomial test score model is continuous in the closed interval [0, 1], a Bernstein polynomial can be used to approximate p uniformly. Then via quadratic programming techniques, least-squares estimates may be obtained for the coefficients defining the polynomial. The approximation, in turn…
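
    The Bernstein construction itself is short; a sketch (the quadratic-programming estimation of the coefficients from test-score data is omitted, and the density and degree below are illustrative):

        import numpy as np
        from scipy.special import comb

        def bernstein_approx(f, n):
            # Degree-n Bernstein polynomial of f on [0, 1]:
            # B_n(f)(x) = sum_k f(k/n) * C(n, k) * x**k * (1 - x)**(n - k)
            ks = np.arange(n + 1)
            coeffs = f(ks / n) * comb(n, ks)
            def B(x):
                x = np.asarray(x, dtype=float)[..., None]
                return np.sum(coeffs * x**ks * (1.0 - x)**(n - ks), axis=-1)
            return B

        # Example: a Beta(2, 3)-shaped density on [0, 1]
        density = lambda t: 12.0 * t * (1.0 - t) ** 2
        B20 = bernstein_approx(density, 20)
        xs = np.linspace(0.0, 1.0, 101)
        print(np.max(np.abs(B20(xs) - density(xs))))  # uniform error shrinks as n grows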

  10. Interval Estimation of Seismic Hazard Parameters

    NASA Astrophysics Data System (ADS)

    Orlecka-Sikora, Beata; Lasocki, Stanislaw

    2017-03-01

    The paper considers Poisson temporal occurrence of earthquakes and presents a way to integrate the uncertainties of the estimates of the mean activity rate and the magnitude cumulative distribution function into the interval estimation of the most widely used seismic hazard functions, such as the exceedance probability and the mean return period. The proposed algorithm can be used either when the Gutenberg-Richter model of magnitude distribution is accepted or when nonparametric estimation is in use. When the Gutenberg-Richter model is used, the interval estimation of its parameters is based on the asymptotic normality of the maximum likelihood estimator. When the nonparametric kernel estimation of the magnitude distribution is used, we propose the iterated bias-corrected and accelerated method for interval estimation based on the smoothed bootstrap and second-order bootstrap samples. The changes resulting from the integrated approach to the interval estimation of the seismic hazard functions, relative to the approach that neglects the uncertainty of the mean activity rate estimates, have been studied using Monte Carlo simulations and two real-dataset examples. The results indicate that the uncertainty of the mean activity rate significantly affects the interval estimates of the hazard functions only when the product of the activity rate and the time period for which the hazard is estimated is no more than 5.0. When this product becomes greater than 5.0, the impact of the uncertainty of the cumulative distribution function of magnitude dominates the impact of the uncertainty of the mean activity rate in the aggregated uncertainty of the hazard functions, and the interval estimates with and without inclusion of the uncertainty of the mean activity rate converge. The presented algorithm is generic and can also be applied to capture the propagation of the uncertainty of estimates that are parameters of a multiparameter function onto this function.
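
    For context, the two hazard functions named in the abstract have simple closed forms in the Poisson/Gutenberg-Richter setting. The sketch below computes the point estimates only; the paper's contribution, interval estimates around these quantities, is not reproduced here, and the parameter values are illustrative.

        import numpy as np

        def hazard_estimates(lam, b, m_min, m, T):
            # Poisson occurrences with unbounded Gutenberg-Richter magnitudes,
            # F(m) = 1 - 10**(-b*(m - m_min)).
            tail = 10.0 ** (-b * (m - m_min))      # P(magnitude > m)
            R = 1.0 - np.exp(-lam * T * tail)      # exceedance probability in time T
            return R, 1.0 / (lam * tail)           # and mean return period

        # E.g., rate 2 events/yr above m_min = 3, b = 1, hazard for m = 5 over 50 yr
        print(hazard_estimates(lam=2.0, b=1.0, m_min=3.0, m=5.0, T=50.0))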

  11. Superfluidity of heated Fermi systems in the static fluctuation approximation

    SciTech Connect

    Khamzin, A. A.; Nikitin, A. S.; Sitdikov, A. S.

    2015-10-15

    Superfluidity properties of heated finite Fermi systems are studied in the static fluctuation approximation, an original method that relies on a single, controlled approximation permitting quasiparticle correlations to be taken into account correctly, thereby going beyond the independent-quasiparticle model. A closed self-consistent set of equations for calculating correlation functions at finite temperature is obtained for a finite Fermi system described by the Bardeen–Cooper–Schrieffer Hamiltonian. An equation for the energy gap is found with allowance for fluctuation effects. It is shown that the phase transition to the superfluid state is smeared upon the inclusion of fluctuations.

  12. Observation and Structure Determination of an Oxide Quasicrystal Approximant

    NASA Astrophysics Data System (ADS)

    Förster, S.; Trautmann, M.; Roy, S.; Adeagbo, W. A.; Zollner, E. M.; Hammer, R.; Schumann, F. O.; Meinel, K.; Nayak, S. K.; Mohseni, K.; Hergert, W.; Meyerheim, H. L.; Widdra, W.

    2016-08-01

    We report on the first observation of an approximant structure to the recently discovered two-dimensional oxide quasicrystal. Using scanning tunneling microscopy, low-energy electron diffraction, and surface x-ray diffraction in combination with ab initio calculations, the atomic structure and the bonding scheme are determined. The oxide approximant follows a 3².4.3.4 Archimedean tiling. Ti atoms reside at the corners of each tiling element and are threefold coordinated to oxygen atoms. Ba atoms separate the TiO3 clusters, leading to a fundamental tiling edge length of 6.7 Å.

  13. Plasmon Pole Approximations within a GW Sternheimer implementation

    NASA Astrophysics Data System (ADS)

    Gosselin, Vincent; Cote, Michel

    We use an implementation of the GW approximation that exploits a Sternheimer equation and a Lanczos procedure to circumvent the resource-intensive sum over all bands and inversion of the dielectric matrix. I will present further improvement of the method that uses plasmon pole approximations to evaluate the integral over all frequencies analytically. A comparison study between the von der Linden-Horsch and Engel-Farid approaches for energy levels of various molecules, along with benchmarking of the computational resources needed by the method, will be discussed.

  14. Compressibility Corrections to Closure Approximations for Turbulent Flow Simulations

    SciTech Connect

    Cloutman, L D

    2003-02-01

    We summarize some modifications to the usual closure approximations for statistical models of turbulence that are necessary for use with compressible fluids at all Mach numbers. We concentrate here on the gradient-flux approximation for the turbulent heat flux, on the buoyancy production of turbulence kinetic energy, and on a modification of the Smagorinsky model to include buoyancy. In all cases, there are pressure gradient terms that do not appear in the incompressible models and are usually omitted in compressible-flow models. Omission of these terms allows unphysical rates of entropy change.

  15. Uniform approximation of partial sums of a Dirichlet series by shorter sums and {Phi}-widths

    SciTech Connect

    Bourgain, Jean; Kashin, Boris S

    2012-12-31

    It is shown that each Dirichlet polynomial P of degree N which is bounded in a certain natural Euclidean norm, admits a nontrivial uniform approximation on the corresponding interval on the real axis by a Dirichlet polynomial with spectrum containing significantly fewer than N elements. Moreover, this spectrum is independent of P. Bibliography: 19 titles.

  16. The JWKB approximation in loop quantum cosmology

    NASA Astrophysics Data System (ADS)

    Craig, David; Singh, Parampreet

    2017-01-01

    We explore the JWKB approximation in loop quantum cosmology in a flat universe with a scalar matter source. Exact solutions of the quantum constraint are studied at small volume in the JWKB approximation in order to assess the probability of tunneling to small or zero volume. Novel features of the approximation are discussed which appear due to the fact that the model is effectively a two-dimensional dynamical system. Based on collaborative work with Parampreet Singh.

  17. Approximate dynamic model of a turbojet engine

    NASA Technical Reports Server (NTRS)

    Artemov, O. A.

    1978-01-01

    An approximate dynamic nonlinear model of a turbojet engine is elaborated on as a tool in studying the aircraft control loop, with the turbojet engine treated as an actuating component. Approximate relationships linking the basic engine parameters and shaft speed are derived to simplify the problem, and to aid in constructing an approximate nonlinear dynamic model of turbojet engine performance useful for predicting aircraft motion.

  18. Bent approximations to synchrotron radiation optics

    SciTech Connect

    Heald, S.

    1981-01-01

    Ideal optical elements can be approximated by bending flats or cylinders. This paper considers the applications of these approximate optics to synchrotron radiation. Analytic and raytracing studies are used to compare their optical performance with the corresponding ideal elements. It is found that for many applications the performance is adequate, with the additional advantages of lower cost and greater flexibility. Particular emphasis is placed on obtaining the practical limitations on the use of the approximate elements in typical beamline configurations. Also considered are the possibilities for approximating very long length mirrors using segmented mirrors.

  19. Advanced Interval Type-2 Fuzzy Sliding Mode Control for Robot Manipulator

    PubMed Central

    Hwang, Ji-Hwan; Kang, Young-Chang

    2017-01-01

    In this paper, advanced interval type-2 fuzzy sliding mode control (AIT2FSMC) for a robot manipulator is proposed. The proposed AIT2FSMC is a combination of an interval type-2 fuzzy system and sliding mode control. An interval type-2 fuzzy system is designed to resemble a feedback linearization (FL) control law, and a sliding mode controller is designed to compensate for the approximation error between the FL control law and the interval type-2 fuzzy system. The tuning algorithms are derived in the sense of the Lyapunov stability theorem. A two-link rigid robot manipulator with nonlinearity is used as the test system, and simulation results are presented to show the effectiveness of the proposed method, which can control an unknown system well. PMID:28280505
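
    The sliding-mode half of the scheme can be illustrated in isolation. Below is a generic first-order sliding mode regulator for a second-order plant, not the authors' AIT2FSMC (which adds the interval type-2 fuzzy approximator and Lyapunov-derived tuning laws); the plant, gains and disturbance are invented for the demo.

        import numpy as np

        def smc(e, e_dot, lam=2.0, k=5.0, phi=0.05):
            # Sliding surface s = e_dot + lam*e; a saturation inside a boundary
            # layer of width phi replaces sign(s) to soften chattering.
            s = e_dot + lam * e
            return -k * np.clip(s / phi, -1.0, 1.0)

        # Toy plant x'' = u + d with an unknown bounded disturbance d
        dt, x, x_dot = 1e-3, 1.0, 0.0
        for step in range(5000):
            d = 0.5 * np.sin(2.0 * step * dt)
            u = smc(x, x_dot)          # regulate x -> 0
            x_dot += (u + d) * dt
            x += x_dot * dt
        print(x)  # driven close to zero despite the disturbance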

  20. An efficient hybrid reliability analysis method with random and interval variables

    NASA Astrophysics Data System (ADS)

    Xie, Shaojun; Pan, Baisong; Du, Xiaoping

    2016-09-01

    Random and interval variables often coexist. Interval variables make reliability analysis much more computationally intensive. This work develops a new hybrid reliability analysis method so that the probability analysis (PA) loop and interval analysis (IA) loop are decomposed into two separate loops. An efficient PA algorithm is employed, and a new efficient IA method is developed. The new IA method consists of two stages. The first stage is for monotonic limit-state functions. If the limit-state function is not monotonic, the second stage is triggered. In the second stage, the limit-state function is sequentially approximated with a second order form, and the gradient projection method is applied to solve the extreme responses of the limit-state function with respect to the interval variables. The efficiency and accuracy of the proposed method are demonstrated by three examples.

  1. Interval Throwing and Hitting Programs in Baseball: Biomechanics and Rehabilitation.

    PubMed

    Chang, Edward S; Bishop, Meghan E; Baker, Dylan; West, Robin V

    2016-01-01

    Baseball injuries from throwing and hitting generally occur as a consequence of the repetitive and high-energy motions inherent to the sport. Biomechanical studies have contributed to understanding the pathomechanics leading to injury and to the development of rehabilitation programs. Interval-based throwing and hitting programs are designed to return an athlete to competition through a gradual progression of sport-specific exercises. Proper warm-up and strict adherence to the program allow the athlete to return as quickly and safely as possible.

  2. Approximation to the Probability Density at the Output of a Photomultiplier Tube

    NASA Technical Reports Server (NTRS)

    Stokey, R. J.; Lee, P. J.

    1983-01-01

    The probability density of the integrated output of a photomultiplier tube (PMT) is approximated by the Gaussian, Rayleigh, and Gamma probability densities. The accuracy of the approximations depends on the signal energy alpha: the Gamma distribution is accurate for all alpha, the Rayleigh distribution is accurate for small alpha (approximately 1 photon or less), and the Gaussian distribution is accurate for large alpha (approximately 10 photons or more).
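
    A quick way to see these regimes is to mean-match the three candidate densities at a given alpha and compare them numerically; this is a schematic comparison, not the paper's photodetection model.

        import numpy as np
        from scipy import stats

        alpha = 10.0                                   # mean signal energy (photons)
        x = np.linspace(0.1, 3.0 * alpha, 7)
        gaussian = stats.norm(loc=alpha, scale=np.sqrt(alpha)).pdf(x)
        gamma = stats.gamma(a=alpha, scale=1.0).pdf(x)                        # mean alpha
        rayleigh = stats.rayleigh(scale=alpha / np.sqrt(np.pi / 2.0)).pdf(x)  # mean alpha
        print(np.c_[x, gaussian, gamma, rayleigh])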

  3. Output Stream of Leaky Integrate-and-Fire Neuron Without Diffusion Approximation

    NASA Astrophysics Data System (ADS)

    Vidybida, Alexander K.

    2017-01-01

    Probability density function (pdf) of output interspike intervals (ISI) as well as the mean ISI is found in exact form for a leaky integrate-and-fire (LIF) neuron stimulated with a Poisson stream. The diffusion approximation is not used. The whole range of possible ISI values is represented as an infinite union of disjoint intervals: (0, ∞) = (0, T_2] ∪ ⋃_{m=0}^{∞} (T_2 + m T_3, T_2 + (m+1) T_3], where T_2 and T_3 are defined by the LIF's physical parameters. The exact expression for the obtained pdf is different on different intervals and is given as a finite sum of multiple integrals. For the first three intervals the integrals are evaluated, which yields exact expressions in terms of polylogarithm functions. The found distribution can be bimodal for some values of parameters. Conditions which ensure bimodality are briefly analyzed.
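
    Because the input is handled event by event (no diffusion approximation), a direct simulation of the model is a few lines; the sketch below estimates the ISI statistics empirically, with illustrative parameter values, as a numerical companion to the exact pdf derived in the paper.

        import numpy as np

        rng = np.random.default_rng(0)

        def lif_isis(rate, tau, h, v_th, n_out=2000):
            # LIF driven by a Poisson stream of impulses of height h; the voltage
            # decays as exp(-dt/tau) between impulses and resets after a spike.
            isis, v, t, t_last = [], 0.0, 0.0, 0.0
            while len(isis) < n_out:
                dt = rng.exponential(1.0 / rate)  # waiting time to next input
                t += dt
                v = v * np.exp(-dt / tau) + h     # decay, then instantaneous kick
                if v >= v_th:                     # threshold crossing: output spike
                    isis.append(t - t_last)
                    t_last, v = t, 0.0
            return np.array(isis)

        isis = lif_isis(rate=50.0, tau=0.02, h=0.5, v_th=1.0)
        print(isis.mean(), isis.std())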

  4. Confidence intervals for a crop yield-loss function in nonlinear regression

    SciTech Connect

    Lee, E.H.; Tingey, D.T.; Hogsett, W.E.

    1990-01-01

    Quantifying the relationship between chronic pollutant exposure and the ensuing biological response requires consideration of nonlinear functions that are flexible enough to generate a wide range of response curves. The linear approximation interval estimates for ozone-induced relative crop yield loss are sensitive to parameter curvature effects in nonlinear regression. The adequacy of Wald's confidence interval for proportional response is studied using the nonlinearity measures proposed by Bates and Watts (1980), Cook and Goldberg (1986), and Clarke (1987a, b) and the profile t plots of Bates and Watts (1988). Numerical examples comparing Wald's, likelihood ratio, bootstrap, and Clarke's adjusted 95% confidence intervals for relative crop yield loss are presented for a number of ozone exposure studies conducted by the National Crop Loss Assessment Network (NCLAN) program. At ambient levels of ozone concentration, the effects of nonlinearity were significant and invalidated the adequacy of Wald's confidence interval. Depending upon the severity of the curvature effects, an alternative interval (i.e., Clarke's adjustment to Wald's interval or the likelihood ratio interval) for proportional yield loss should be considered.

  5. An Empirical Method for Establishing Positional Confidence Intervals Tailored for Composite Interval Mapping of QTL

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Improved genetic resolution and availability of sequenced genomes have made positional cloning of moderate-effect QTL (quantitative trait loci) realistic in several systems, emphasizing the need for precise and accurate derivation of positional confidence intervals (CIs). Support interval (SI) meth...

  6. Happiness Scale Interval Study. Methodological Considerations

    ERIC Educational Resources Information Center

    Kalmijn, W. M.; Arends, L. R.; Veenhoven, R.

    2011-01-01

    The Happiness Scale Interval Study deals with survey questions on happiness, using verbal response options, such as "very happy" and "pretty happy". The aim is to estimate what degrees of happiness are denoted by such terms in different questions and languages. These degrees are expressed in numerical values on a continuous…

  7. Precise Interval Timer for Software Defined Radio

    NASA Technical Reports Server (NTRS)

    Pozhidaev, Aleksey (Inventor)

    2014-01-01

    A precise digital fractional interval timer for software defined radios that vary their waveform on a packet-by-packet basis. The timer allows for variable preamble length in the RF packet and allows the boundaries of the TDMA (Time Division Multiple Access) slots of an SDR receiver to be adjusted based on the reception of the RF packet of interest.

  8. 47 CFR 52.35 - Porting Intervals.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... Portability § 52.35 Porting Intervals. (a) All telecommunications carriers required by the Commission to port telephone numbers must complete a simple wireline-to-wireline or simple intermodal port request within one... p.m. local time for a simple port request to be eligible for activation at midnight on the same...

  9. 47 CFR 52.35 - Porting Intervals.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Portability § 52.35 Porting Intervals. (a) All telecommunications carriers required by the Commission to port telephone numbers must complete a simple wireline-to-wireline or simple intermodal port request within one... p.m. local time for a simple port request to be eligible for activation at midnight on the same...

  10. 47 CFR 52.35 - Porting Intervals.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... Portability § 52.35 Porting Intervals. (a) All telecommunications carriers required by the Commission to port telephone numbers must complete a simple wireline-to-wireline or simple intermodal port request within one... p.m. local time for a simple port request to be eligible for activation at midnight on the same...

  11. 47 CFR 52.35 - Porting Intervals.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Portability § 52.35 Porting Intervals. (a) All telecommunications carriers required by the Commission to port telephone numbers must complete a simple wireline-to-wireline or simple intermodal port request within one... p.m. local time for a simple port request to be eligible for activation at midnight on the same...

  12. 47 CFR 52.35 - Porting Intervals.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... Portability § 52.35 Porting Intervals. (a) All telecommunications carriers required by the Commission to port telephone numbers must complete a simple wireline-to-wireline or simple intermodal port request within one... p.m. local time for a simple port request to be eligible for activation at midnight on the same...

  13. Interval coding. II. Dendrite-dependent mechanisms.

    PubMed

    Doiron, Brent; Oswald, Anne-Marie M; Maler, Leonard

    2007-04-01

    The rich temporal structure of neural spike trains provides multiple dimensions to code dynamic stimuli. Popular examples are spike trains from sensory cells where bursts and isolated spikes can serve distinct coding roles. In contrast to analyses of neural coding, the cellular mechanics of burst mechanisms are typically elucidated from the neural response to static input. Bridging the mechanics of bursting with coding of dynamic stimuli is an important step in establishing theories of neural coding. Electrosensory lateral line lobe (ELL) pyramidal neurons respond to static inputs with a complex dendrite-dependent burst mechanism. Here we show that in response to dynamic broadband stimuli, these bursts lack some of the electrophysiological characteristics observed in response to static inputs. A simple leaky integrate-and-fire (LIF)-style model with a dendrite-dependent depolarizing afterpotential (DAP) is sufficient to match both the output statistics and coding performance of experimental spike trains. We use this model to investigate a simplification of interval coding where the burst interspike interval (ISI) codes for the scale of a canonical upstroke rather than a multidimensional stimulus feature. Using this stimulus reduction, we compute a quantization of the burst ISIs and the upstroke scale to show that the mutual information rate of the interval code is maximized at a moderate DAP amplitude. The combination of a reduced description of ELL pyramidal cell bursting and a simplification of the interval code increases the generality of ELL burst codes to other sensory modalities.

  14. MEETING DATA QUALITY OBJECTIVES WITH INTERVAL INFORMATION

    EPA Science Inventory

    Immunoassay test kits are promising technologies for measuring analytes under field conditions. Frequently, these field-test kits report the analyte concentrations as falling in an interval between minimum and maximum values. Many project managers use field-test kits only for scr...

  15. Confidence Trick: The Interpretation of Confidence Intervals

    ERIC Educational Resources Information Center

    Foster, Colin

    2014-01-01

    The frequent misinterpretation of the nature of confidence intervals by students has been well documented. This article examines the problem as an aspect of the learning of mathematical definitions and considers the tension between parroting mathematically rigorous, but essentially uninternalized, statements on the one hand and expressing…

  16. Interval scanning photomicrography of microbial cell populations.

    NASA Technical Reports Server (NTRS)

    Casida, L. E., Jr.

    1972-01-01

    A single reproducible area of the preparation in a fixed focal plane is photographically scanned at intervals during incubation. The procedure can be used for evaluating the aerobic or anaerobic growth of many microbial cells simultaneously within a population. In addition, the microscope is not restricted to the viewing of any one microculture preparation, since the slide cultures are incubated separately from the microscope.

  17. Coefficient Alpha Bootstrap Confidence Interval under Nonnormality

    ERIC Educational Resources Information Center

    Padilla, Miguel A.; Divers, Jasmin; Newton, Matthew

    2012-01-01

    Three different bootstrap methods for estimating confidence intervals (CIs) for coefficient alpha were investigated. In addition, the bootstrap methods were compared with the most promising coefficient alpha CI estimation methods reported in the literature. The CI methods were assessed through a Monte Carlo simulation utilizing conditions…

  18. Equidistant Intervals in Perspective Photographs and Paintings

    PubMed Central

    2016-01-01

    Human vision is extremely sensitive to equidistance of spatial intervals in the frontal plane. Thresholds for spatial equidistance have been extensively measured in bisecting tasks. Despite the vast number of studies, the informational basis for equidistance perception is unknown. There are three possible sources of information for spatial equidistance in pictures, namely, distances in the picture plane, in physical space, and visual space. For each source, equidistant intervals were computed for perspective photographs of walls and canals. Intervals appear equidistant if equidistance is defined in visual space. Equidistance was further investigated in paintings of perspective scenes. In appraisals of the perspective skill of painters, emphasis has been on accurate use of vanishing points. The current study investigated the skill of painters to depict equidistant intervals. Depicted rows of equidistant columns, tiles, tapestries, or trees were analyzed in 30 paintings and engravings. Computational analysis shows that from the middle ages until now, artists either represented equidistance in physical space or in a visual space of very limited depth. Among the painters and engravers who depict equidistance in a highly nonveridical visual space are renowned experts of linear perspective. PMID:27698983

  19. Coefficient Omega Bootstrap Confidence Intervals: Nonnormal Distributions

    ERIC Educational Resources Information Center

    Padilla, Miguel A.; Divers, Jasmin

    2013-01-01

    The performance of the normal theory bootstrap (NTB), the percentile bootstrap (PB), and the bias-corrected and accelerated (BCa) bootstrap confidence intervals (CIs) for coefficient omega was assessed through a Monte Carlo simulation under conditions not previously investigated. Of particular interest were nonnormal Likert-type and binary items.…

  20. Physiological adjustments to intensive interval treadmill training

    PubMed Central

    Pyke, F. S.; Elliott, B. C.; Morton, A. R.; Roberts, A. D.

    1974-01-01

    During a one month training period, eight active men, aged 23-35 years, completed sixteen 30 minute sessions of high intensity interval (5 second work bouts at 16.9 km/hr up 20-25% grade alternated with 10 second rest intervals) treadmill work. In this training period, V̇O2, V̇E and blood lactate in a 10 minute run at 12.9 km/hr on a level treadmill were unchanged but heart rate during this work decreased by an average of 9 beats/min. During a 4 minute interval work effort at the training intensity, blood lactate accumulation decreased by 40.4%. In exhausting work, mean values of V̇O2, V̇E and blood lactate increased by 6.2%, 8.2% and 31.6% respectively. Maximal heart rate decreased by an average of 4 beats/min. The average work production of the men in the training sessions improved by 64.5% from 28,160 kgm to 43,685 kgm. No significant improvements were observed in either a short sprint or a stair climbing test which assessed the ability to generate mechanical power from alactacid anaerobic sources. It was concluded that the training regime is an effective method of producing a high total work output in competitive athletes and results in improvements in aerobic power, glycolytic capacity and ability to tolerate the short duration interval work encountered in many games.

  1. Toward Using Confidence Intervals to Compare Correlations

    ERIC Educational Resources Information Center

    Zou, Guang Yong

    2007-01-01

    Confidence intervals are widely accepted as a preferred way to present study results. They encompass significance tests and provide an estimate of the magnitude of the effect. However, comparisons of correlations still rely heavily on significance testing. The persistence of this practice is caused primarily by the lack of simple yet accurate…

  2. Exact and Asymptotic Weighted Logrank Tests for Interval Censored Data: The interval R package

    PubMed Central

    Fay, Michael P.; Shaw, Pamela A.

    2014-01-01

    For right-censored data perhaps the most commonly used tests are weighted logrank tests, such as the logrank and Wilcoxon-type tests. In this paper we review several generalizations of those weighted logrank tests to interval-censored data and present an R package, interval, to implement many of them. The interval package depends on the perm package, also presented here, which performs exact and asymptotic linear permutation tests. The perm package performs many of the tests included in the already available coin package, and provides an independent validation of coin. We review analysis methods for interval-censored data, and we describe and show how to use the interval and perm packages. PMID:25285054

  3. Spline approximations for nonlinear hereditary control systems

    NASA Technical Reports Server (NTRS)

    Daniel, P. L.

    1982-01-01

    A spline-based approximation scheme is discussed for optimal control problems governed by nonlinear nonautonomous delay differential equations. The approximating framework reduces the original control problem to a sequence of optimization problems governed by ordinary differential equations. Convergence proofs, which appeal directly to dissipative-type estimates for the underlying nonlinear operator, are given and numerical findings are summarized.

  4. Computing Functions by Approximating the Input

    ERIC Educational Resources Information Center

    Goldberg, Mayer

    2012-01-01

    In computing real-valued functions, it is ordinarily assumed that the input to the function is known, and it is the output that we need to approximate. In this work, we take the opposite approach: we show how to compute the values of some transcendental functions by approximating the input to these functions, and obtaining exact answers for their…

  5. Approximate methods for equations of incompressible fluid

    NASA Astrophysics Data System (ADS)

    Galkin, V. A.; Dubovik, A. O.; Epifanov, A. A.

    2017-02-01

    Approximate methods based on sequential approximations in the theory of functional solutions to systems of conservation laws are considered, including the model of the dynamics of an incompressible fluid. Test calculations are performed, and a comparison with exact solutions is carried out.

  6. Quirks of Stirling's Approximation

    ERIC Educational Resources Information Center

    Macrae, Roderick M.; Allgeier, Benjamin M.

    2013-01-01

    Stirling's approximation to ln "n"! is typically introduced to physical chemistry students as a step in the derivation of the statistical expression for the entropy. However, naive application of this approximation leads to incorrect conclusions. In this article, the problem is first illustrated using a familiar "toy…
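
    The gap between the full formula and the truncated form commonly used in entropy derivations is easy to exhibit numerically (math.lgamma supplies the exact ln n! without overflow):

        import math

        def stirling(n):
            # Full Stirling approximation: ln n! ~ n ln n - n + (1/2) ln(2 pi n)
            return n * math.log(n) - n + 0.5 * math.log(2.0 * math.pi * n)

        def stirling_naive(n):
            # Truncated form often quoted in textbooks: ln n! ~ n ln n - n
            return n * math.log(n) - n

        for n in (10, 100, 1000):
            exact = math.lgamma(n + 1)
            print(n, stirling(n) - exact, stirling_naive(n) - exact)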

  7. Inversion and approximation of Laplace transforms

    NASA Technical Reports Server (NTRS)

    Lear, W. M.

    1980-01-01

    A method of inverting Laplace transforms by using a set of orthonormal functions is reported. As a byproduct of the inversion, approximation of complicated Laplace transforms by a transform with a series of simple poles along the left half plane real axis is shown. The inversion and approximation process is simple enough to be put on a programmable hand calculator.

  8. An approximation for inverse Laplace transforms

    NASA Technical Reports Server (NTRS)

    Lear, W. M.

    1981-01-01

    A programmable calculator runs a simple finite-series approximation for Laplace transform inversion. Utilizing a family of orthonormal functions, the approximation is used for a wide range of transforms, including those encountered in feedback control problems. The method works well as long as F(t) decays to zero as t approaches infinity and so is applicable to most physical systems.

  9. A Robust Confidence Interval for Samples of Five Observations.

    DTIC Science & Technology

    1979-11-01

    A robust confidence interval using biweights for the case of five observations is proposed when the underlying distribution has somewhat heavier...probabilities, the intervals proposed are highly efficient, in terms of the expected length of the confidence interval. (Author)

  10. Algebraic approximations for transcendental equations with applications in nanophysics

    NASA Astrophysics Data System (ADS)

    Barsan, Victor

    2015-09-01

    Using algebraic approximations of trigonometric or hyperbolic functions, a class of transcendental equations can be transformed into tractable algebraic equations. Studying transcendental equations in this way gives the eigenvalues of Sturm-Liouville problems associated with the wave equation, mainly the Schroedinger equation; these algebraic approximations provide approximate analytical expressions for the energy of electrons and phonons in quantum wells, quantum dots (QDs) and quantum wires, in the frame of one-particle models of such systems. The advantage of this approach, compared to numerical calculations, is that the final result preserves the functional dependence on the physical parameters of the problem. The errors of this method, situated between a few percent and ?, are carefully analysed. Several applications, for quantum wells, QDs and quantum wires, are presented.
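
    To convey the flavor of the technique (this is a classical illustration, not one of the paper's own approximations): replacing sin by Bhaskara's algebraic approximation, sin x ~ 16x(π - x) / (5π² - 4x(π - x)) on [0, π], turns the transcendental equation sin x = x/2 into a quadratic whose positive root matches the exact one to about three decimal places.

        import numpy as np
        from scipy.optimize import brentq

        # Exact root of sin(x) = x/2 on (0, pi)
        exact = brentq(lambda x: np.sin(x) - 0.5 * x, 1.0, 3.0)

        # With Bhaskara's approximation, after cancelling the trivial root x = 0:
        # 4x^2 + (32 - 4 pi) x + (5 pi^2 - 32 pi) = 0
        roots = np.roots([4.0, 32.0 - 4.0 * np.pi, 5.0 * np.pi**2 - 32.0 * np.pi])
        print(exact, roots[roots > 0])  # ~1.8955 vs ~1.8949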

  11. Approximate error conjugation gradient minimization methods

    DOEpatents

    Kallman, Jeffrey S

    2013-05-21

    In one embodiment, a method includes selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, calculating an approximate error using the subset of rays, and calculating a minimum in a conjugate gradient direction based on the approximate error. In another embodiment, a system includes a processor for executing logic, logic for selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, logic for calculating an approximate error using the subset of rays, and logic for calculating a minimum in a conjugate gradient direction based on the approximate error. In other embodiments, computer program products, methods, and systems are described capable of using approximate error in constrained conjugate gradient minimization problems.

  12. Jacobian transformed and detailed balance approximations for photon induced scattering

    NASA Astrophysics Data System (ADS)

    Wienke, B. R.; Budge, K. G.; Chang, J. H.; Dahl, J. A.; Hungerford, A. L.

    2012-01-01

    Photon emission and scattering are enhanced by the number of photons in the final state, and the photon transport equation reflects this in scattering-emission kernels and source terms. This is often a complication in both theoretical and numerical analyses, requiring approximations and assumptions about background and material temperatures, incident and exiting photon energies, local thermodynamic equilibrium, plus other related aspects of photon scattering and emission. We review earlier schemes parameterizing photon scattering-emission processes, and suggest two alternative schemes. One links the product of photon and electron distributions in the final state to the product in the initial state by Jacobian transformation of kinematical variables (energy and angle), and the other links integrands of scattering kernels in a detailed balance requirement for overall (integrated) induced effects. Compton and inverse Compton differential scattering cross sections are detailed in appropriate limits, numerical integrations are performed over the induced scattering kernel, and for tabulation induced scattering terms are incorporated into effective cross sections for comparisons and numerical estimates. Relativistic electron distributions are assumed for calculations. Both Wien and Planckian distributions are contrasted for impact on induced scattering as LTE limit points. We find that both transformed and balanced approximations suggest larger induced scattering effects at high photon energies and low electron temperatures, and smaller effects in the opposite limits, compared to previous analyses, with 10-20% increases in effective cross sections. We also note that both approximations can be simply implemented within existing transport modules or opacity processors as an additional term in the effective scattering cross section. Applications and comparisons include effective cross sections, kernel approximations, and impacts on radiative transport solutions in 1D

  13. Exploring the Random Phase Approximation for materials chemistry and physics

    SciTech Connect

    Ruzsinsky, Adrienn

    2015-03-23

    This proposal focuses on improved accuracy for the delicate energy differences of interest in materials chemistry with the fully nonlocal random phase approximation (RPA) in a density functional context. Could RPA or RPA-like approaches become standard methods of first-principles electronic-structure calculation for atoms, molecules, solids, surfaces, and nano-structures? Direct RPA includes the full exact exchange energy and a nonlocal correlation energy from the occupied and unoccupied Kohn-Sham orbitals and orbital energies, with an approximate but universal description of long-range van der Waals attraction. RPA also improves upon simple pair-wise interaction potentials or vdW density functional theory. This improvement is essential to capture accurate energy differences in metals and different phases of semiconductors. The applications in this proposal are challenges for the simpler approximations of Kohn-Sham density functional theory, which are part of the current “standard model” for quantum chemistry and condensed matter physics. Within this project we have already applied RPA to different structural phase transitions in semiconductors, metals and molecules. Although RPA predicts accurate structural parameters, it has not proven equally accurate for all kinds of structural phase transitions. Therefore a correction to RPA may be necessary in many cases. We are currently implementing and testing a nonempirical, spatially nonlocal, frequency-dependent model for the exchange-correlation kernel in the adiabatic-connection fluctuation-dissipation context. This kernel predicts a nearly-exact correlation energy for the electron gas of uniform density. If RPA or RPA-like approaches prove to be reliably accurate, then expected increases in computer power may make them standard in the electronic-structure calculations of the future.

  14. Probabilistic flood forecast: Exact and approximate predictive distributions

    NASA Astrophysics Data System (ADS)

    Krzysztofowicz, Roman

    2014-09-01

    For quantification of predictive uncertainty at the forecast time t_0, the future hydrograph is viewed as a discrete-time continuous-state stochastic process {H_n: n = 1, …, N}, where H_n is the river stage at time instance t_n > t_0. The probabilistic flood forecast (PFF) should specify a sequence of exceedance functions {F̄_n: n = 1, …, N} such that F̄_n(h) = P(Z_n > h), where P stands for probability, and Z_n is the maximum river stage within the time interval (t_0, t_n], practically Z_n = max{H_1, …, H_n}. This article presents a method for deriving the exact PFF from a probabilistic stage transition forecast (PSTF) produced by the Bayesian forecasting system (BFS). It then recalls (i) the bounds on F̄_n, which can be derived cheaply from a probabilistic river stage forecast (PRSF) produced by a simpler version of the BFS, and (ii) an approximation to F̄_n, which can be constructed from the bounds via a recursive linear interpolator (RLI) without information about the stochastic dependence in the process {H_1, …, H_n}, as this information is not provided by the PRSF. The RLI is substantiated by comparing the approximate PFF against the exact PFF. Being reasonably accurate and very simple, the RLI may be attractive for real-time flood forecasting in systems of lesser complexity. All methods are illustrated with a case study for a 1430 km headwater basin wherein the PFF is produced for a 72-h interval discretized into 6-h steps.
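
    The defining relations translate directly into an empirical estimator whenever an ensemble of simulated future hydrographs is available. A sketch, with a toy random-walk ensemble standing in for actual BFS output:

        import numpy as np

        rng = np.random.default_rng(1)

        def exceedance_functions(trajectories, h_grid):
            # Empirical exceedance functions F_bar_n(h) = P(Z_n > h), where
            # Z_n = max(H_1, ..., H_n) is the running maximum of the stage.
            Z = np.maximum.accumulate(trajectories, axis=1)
            return np.stack([(Z > h).mean(axis=0) for h in h_grid], axis=1)

        # 10000 toy "hydrographs" over 12 six-hour steps, starting near stage 10
        traj = 10.0 + np.cumsum(rng.normal(0.1, 0.5, size=(10000, 12)), axis=1)
        F_bar = exceedance_functions(traj, h_grid=np.array([10.5, 11.0, 11.5]))
        print(F_bar[-1])  # exceedance probabilities at the final time step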

  15. Properties of strongly dipolar Bose gases beyond the Born approximation

    NASA Astrophysics Data System (ADS)

    Ołdziejewski, Rafał; Jachymski, Krzysztof

    2016-12-01

    Strongly dipolar Bose gases can form liquid droplets stabilized by quantum fluctuations. In a theoretical description of this phenomenon, the low-energy scattering amplitude is utilized as an effective potential. We show that for magnetic atoms, corrections with respect to the Born approximation arise, and we derive a modified pseudopotential using a realistic interaction model. We discuss the resulting changes in collective mode frequencies and droplet stability diagrams. Our results are relevant to recent experiments with erbium and dysprosium atoms.

  16. Approximation of the Garrett-Munk internal wave spectrum

    NASA Astrophysics Data System (ADS)

    Ibragimov, Ranis N.; Vatchev, Vesselin

    2011-12-01

    The spectral models of Garrett and Munk (1972, 1975) continue to be a useful description of the oceanic energy spectrum. However, there are several ambiguities (many of them are summarized, for example, in Levine, 2002) that make it difficult to use, e.g., in dissipation modeling (e.g., Hibiya et al., 1996, and Winters and D'Asaro, 1997). An approximate spectral formulation is presented in this work by means of modified Running Median Methods.

  17. Multigroup Free-atom Doppler-broadening Approximation. Experiment

    SciTech Connect

    Gray, Mark Girard

    2015-11-06

    The multigroup energy Doppler-broadening approximation agrees with continuous-energy Doppler-broadening generally to within ten percent for the total cross sections of 1H, 56Fe, and 235U at 250 lanl. Although this is probably not good enough for broadening from room temperature through the entire temperature range in production use, it is better than any interpolation scheme between temperatures proposed to date, and may be good enough for extrapolation from high temperatures. The method deserves further study since additional improvements are possible.

  18. Slowly rotating scalar field wormholes: The second order approximation

    SciTech Connect

    Kashargin, P. E.; Sushkov, S. V.

    2008-09-15

    We discuss rotating wormholes in general relativity with a scalar field with negative kinetic energy. To solve the problem, we use the assumption of slow rotation: the role of the small dimensionless parameter is played by the ratio of the linear velocity of rotation of the wormhole's throat to the velocity of light. We construct the rotating wormhole solution in the second-order approximation with respect to this small parameter. The analysis shows that the asymptotic mass of the rotating wormhole is greater than that of the nonrotating one, and the null energy condition violation in the rotating wormhole spacetime is weaker than in the nonrotating one.

  19. Compton scattering from positronium and validity of the impulse approximation

    SciTech Connect

    Kaliman, Z.; Pisk, K.; Pratt, R. H.

    2011-05-15

    The cross sections for Compton scattering from positronium are calculated in the range from 1 to 100 keV incident photon energy. The calculations are based on the A² term of the photon-electron or photon-positron interaction. Unlike in hydrogen, the scattering occurs from two centers, and the interference effect plays an important role for energies below 8 keV. Because of the interference, the criterion for validity of the impulse approximation for positronium is more restrictive than that for hydrogen.

  20. Dynamical exchange-correlation potentials beyond the local density approximation

    NASA Astrophysics Data System (ADS)

    Tao, Jianmin; Vignale, Giovanni

    2006-03-01

    Approximations for the static exchange-correlation (xc) potential of density functional theory (DFT) have reached a high level of sophistication. By contrast, time-dependent xc potentials are still being treated in a local (although velocity-dependent) approximation [G. Vignale, C. A. Ullrich and S. Conti, PRL 79, 4879 (1997)]. Unfortunately, one of the assumptions upon which the dynamical local approximation is based appears to break down in the important case of d.c. transport. Here we propose a new approximation scheme, which should allow a more accurate treatment of molecular transport problems. As a first step, we separate the exact adiabatic xc potential, which has the same form as in the static theory and can be treated by a generalized gradient approximation (GGA) or a meta-GGA. In the second step, we express the high-frequency limit of the xc stress tensor (whose divergence gives the xc force density) in terms of the exact static xc energy functional. Finally, we develop a perturbative scheme for the calculation of the frequency dependence of the xc stress tensor in terms of the ground-state Kohn-Sham orbitals and eigenvalues.

  1. A quantum relaxation-time approximation for finite fermion systems

    SciTech Connect

    Reinhard, P.-G.; Suraud, E.

    2015-03-15

    We propose a relaxation-time approximation for the description of the dynamics of strongly excited fermion systems. Our approach is based on time-dependent density functional theory at the level of the local density approximation. This mean-field picture is augmented by collisional correlations handled in a relaxation-time approximation inspired by the corresponding semi-classical picture. The method involves the estimate of microscopic relaxation rates/times, which is presently taken from well-established semi-classical experience. The relaxation-time approximation implies evaluation of the instantaneous equilibrium state towards which the dynamical state is progressively driven at the pace of the microscopic relaxation time. As a test case, we consider Na clusters of various sizes excited either by a swift ion projectile or by a short and intense laser pulse, driven in various dynamical regimes ranging from linear to strongly non-linear reactions. We observe a strong effect of dissipation on sensitive observables such as net ionization and angular distributions of emitted electrons. The effect is especially large for moderate excitations, where typical relaxation/dissipation time scales compete efficiently with ionization for dissipating the available excitation energy. Technical details on the actual procedure to implement a working recipe of such a quantum relaxation approximation are given in appendices for completeness.
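
    Stripped of all physics, the structural core of any relaxation-time ansatz is a state continually driven toward an instantaneous equilibrium at the pace of the relaxation time. A scalar caricature (the paper's version acts on the full quantum mean-field state with microscopically estimated rates):

        import numpy as np

        def relax(x0, x_eq, tau, dt, steps):
            # dx/dt = -(x - x_eq(t)) / tau: drive x toward the instantaneous
            # equilibrium x_eq(t), lagging behind it by roughly tau.
            x, out = x0, []
            for n in range(steps):
                x += -dt * (x - x_eq(n * dt)) / tau
                out.append(x)
            return np.array(out)

        traj = relax(1.0, lambda t: 0.2 * np.sin(t), tau=0.5, dt=0.01, steps=1000)
        print(traj[-3:])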

  2. APPROXIMATING LIGHT RAYS IN THE SCHWARZSCHILD FIELD

    SciTech Connect

    Semerák, O.

    2015-02-10

    A short formula is suggested that approximates photon trajectories in the Schwarzschild field better than other simple prescriptions from the literature. We compare it with various "low-order competitors", namely, with those following from exact formulas for small M, with one of the results based on pseudo-Newtonian potentials, with a suitably adjusted hyperbola, and with the effective and often employed approximation by Beloborodov. Our main concern is the shape of the photon trajectories at finite radii, yet asymptotic behavior is also discussed, important for lensing. An example is attached indicating that the newly suggested approximation is usable, and very accurate, for practically solving the ray-deflection exercise.

  3. Alternative approximation concepts for space frame synthesis

    NASA Technical Reports Server (NTRS)

    Lust, R. V.; Schmit, L. A.

    1985-01-01

    A method for space frame synthesis based on the application of a full gamut of approximation concepts is presented. It is found that with the thoughtful selection of design space, objective function approximation, constraint approximation and mathematical programming problem formulation options it is possible to obtain near minimum mass designs for a significant class of space frame structural systems while requiring fewer than 10 structural analyses. Example problems are presented which demonstrate the effectiveness of the method for frame structures subjected to multiple static loading conditions with limits on structural stiffness and strength.

  4. Information geometry of mean-field approximation.

    PubMed

    Tanaka, T

    2000-08-01

    I present a general theory of mean-field approximation based on information geometry and applicable not only to Boltzmann machines but also to wider classes of statistical models. Using perturbation expansion of the Kullback divergence (or Plefka expansion in statistical physics), a formulation of mean-field approximation of general orders is derived. It includes in a natural way the "naive" mean-field approximation and is consistent with the Thouless-Anderson-Palmer (TAP) approach and the linear response theorem in statistical physics.

  5. One-way ANOVA based on interval information

    NASA Astrophysics Data System (ADS)

    Hesamian, Gholamreza

    2016-08-01

    This paper deals with extending the one-way analysis of variance (ANOVA) to the case where the observed data are represented by closed intervals rather than real numbers. In this approach, a notion of interval random variable is first introduced. In particular, a normal distribution with interval parameters is introduced to investigate hypotheses about the equality of interval means or to test the homogeneity of the interval variances assumption. Moreover, the least significant difference (LSD) method for multiple comparisons of interval means is developed for use when the null hypothesis about the equality of means is rejected. Then, at a given interval significance level, an index is applied to compare the interval test statistic with the related interval critical value as a criterion to accept or reject the null interval hypothesis of interest. Finally, the decision-making method leads to degrees of acceptance or rejection of the interval hypotheses. An applied example is used to show the performance of this method.

  6. Validity of the Aluminum Equivalent Approximation in Space Radiation Shielding

    NASA Technical Reports Server (NTRS)

    Badavi, Francis F.; Adams, Daniel O.; Wilson, John W.

    2009-01-01

    The origin of the aluminum equivalent shield approximation in space radiation analysis can be traced back to its roots in the early years of the NASA space programs (Mercury, Gemini and Apollo), wherein the primary radiobiological concern was the intense sources of ionizing radiation causing short-term effects that were thought to jeopardize the safety of the crew and hence the mission. Herein, it is shown that the aluminum equivalent shield approximation, although reasonably well suited for that time period and the application for which it was developed, is of questionable usefulness to the radiobiological concerns of routine space operations of the 21st century, which will include long stays onboard the International Space Station (ISS) and perhaps the moon. This is especially true for a risk-based protection system, as appears imminent for deep space exploration, where the long-term effects of Galactic Cosmic Ray (GCR) exposure are of primary concern. The present analysis demonstrates that sufficiently large errors in the interior particle environment of a spacecraft result from the use of the aluminum equivalent approximation, and such approximations should be avoided in future astronaut risk estimates. In this study, the aluminum equivalent approximation is evaluated as a means for estimating the particle environment within a spacecraft structure induced by the GCR radiation field. For comparison, the two extremes of the GCR environment, the 1977 solar minimum and the 2001 solar maximum, are considered. These environments are coupled to the Langley Research Center (LaRC) deterministic ionized particle transport code High charge (Z) and Energy TRaNsport (HZETRN), which propagates the GCR spectra for elements with charges (Z) in the range 1 <= Z <= 28 (H to Ni) and secondary neutrons through selected target materials. The coupling of the GCR extremes to HZETRN allows for the examination of the induced environment within the interior of an idealized spacecraft

  7. The Rotator Interval of the Shoulder

    PubMed Central

    Frank, Rachel M.; Taylor, Dean; Verma, Nikhil N.; Romeo, Anthony A.; Mologne, Timothy S.; Provencher, Matthew T.

    2015-01-01

    Biomechanical studies have shown that repair or plication of rotator interval (RI) ligamentous and capsular structures decreases glenohumeral joint laxity in various directions. Clinical outcomes studies have reported successful outcomes after repair or plication of these structures in patients undergoing shoulder stabilization procedures. Recent studies describing arthroscopic techniques to address these structures have intensified the debate over the potential benefit of these procedures as well as highlighted the differences between open and arthroscopic RI procedures. The purposes of this study were to review the structures of the RI and their contribution to shoulder instability, to discuss the biomechanical and clinical effects of repair or plication of rotator interval structures, and to describe the various surgical techniques used for these procedures and outcomes. PMID:26779554

  8. Constraint-based Attribute and Interval Planning

    NASA Technical Reports Server (NTRS)

    Jonsson, Ari; Frank, Jeremy

    2013-01-01

    In this paper we describe Constraint-based Attribute and Interval Planning (CAIP), a paradigm for representing and reasoning about plans. The paradigm enables the description of planning domains with time, resources, concurrent activities, mutual exclusions among sets of activities, disjunctive preconditions and conditional effects. We provide a theoretical foundation for the paradigm, based on temporal intervals and attributes. We then show how the plans are naturally expressed by networks of constraints, and show that the process of planning maps directly to dynamic constraint reasoning. In addition, we define compatibilities, a compact mechanism for describing planning domains. We describe how this framework can incorporate the use of constraint reasoning technology to improve planning. Finally, we describe EUROPA, an implementation of the CAIP framework.

  9. Using interval logic for order assembly

    SciTech Connect

    Cui, Z.

    1994-12-31

    Temporal logic, in particular interval logic, has been used to represent genome maps and to assist genome map construction. However, interval logic itself appears to be limited in its expressive power, because genome mapping requires various kinds of information, such as partial order, distance and local orientation. In this paper, we first propose an integrated formalism based on a spatial-temporal logic in which the concepts of metric information, local orientation and uncertainty are merged. Then we present and discuss a deductive and object-oriented data model based on this formalism for a genetic deductive database, and the inference rules required. The formalism supports the maintenance of coarser knowledge of unordered, partially ordered and completely ordered genetic data in a relational hierarchy. We believe that this integrated formalism also provides a formal basis for designing a declarative query language.

  10. Reliable prediction intervals with regression neural networks.

    PubMed

    Papadopoulos, Harris; Haralambous, Haris

    2011-10-01

    This paper proposes an extension to conventional regression neural networks (NNs) for replacing the point predictions they produce with prediction intervals that satisfy a required level of confidence. Our approach follows a novel machine learning framework, called Conformal Prediction (CP), for assigning reliable confidence measures to predictions without assuming anything more than that the data are independent and identically distributed (i.i.d.). We evaluate the proposed method on four benchmark datasets and on the problem of predicting Total Electron Content (TEC), which is an important parameter in trans-ionospheric links; for the latter we use a dataset of more than 60000 TEC measurements collected over a period of 11 years. Our experimental results show that the prediction intervals produced by our method are both well calibrated and tight enough to be useful in practice.
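
    The CP recipe in its simplest split form fits in a few lines. The following is a generic sketch of split-conformal regression under the i.i.d. assumption, not the exact variant evaluated in the paper, with an ordinary least-squares line standing in for the neural network.

        import numpy as np

        rng = np.random.default_rng(2)

        def split_conformal(fit, predict, X, y, X_new, alpha=0.1):
            # Train on one half; use absolute residuals on the other half as
            # nonconformity scores; widen point predictions by their quantile.
            idx = rng.permutation(len(y))
            train, calib = idx[: len(y) // 2], idx[len(y) // 2:]
            params = fit(X[train], y[train])
            scores = np.sort(np.abs(y[calib] - predict(params, X[calib])))
            k = int(np.ceil((len(calib) + 1) * (1.0 - alpha)))
            q = scores[min(k, len(calib)) - 1]   # clamped for simplicity
            preds = predict(params, X_new)
            return preds - q, preds + q

        fit = lambda X, y: np.polyfit(X, y, 1)
        predict = lambda p, X: np.polyval(p, X)
        X = rng.uniform(0.0, 10.0, 500)
        y = 2.0 * X + rng.normal(0.0, 1.0, 500)
        print(split_conformal(fit, predict, X, y, np.array([5.0])))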

  11. Efficient computation of parameter confidence intervals

    NASA Technical Reports Server (NTRS)

    Murphy, Patrick C.

    1987-01-01

    An important step in system identification of aircraft is the estimation of stability and control derivatives from flight data along with an assessment of parameter accuracy. When the maximum likelihood estimation technique is used, parameter accuracy is commonly assessed by the Cramer-Rao lower bound. It is known, however, that in some cases the lower bound can be substantially different from the parameter variance. Under these circumstances the Cramer-Rao bounds may be misleading as an accuracy measure. This paper discusses the confidence interval estimation problem based on likelihood ratios, which offers a more general estimate of the error bounds. Four approaches are considered for computing confidence intervals of maximum likelihood parameter estimates. Each approach is applied to real flight data and compared.
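
    The likelihood-ratio construction is easy to state for a scalar parameter: the interval is the set of values whose log-likelihood lies within half a chi-square quantile of the maximum. A generic sketch for a Poisson rate, illustrating the idea rather than the paper's flight-data procedure:

        import numpy as np
        from scipy.optimize import brentq
        from scipy.stats import chi2

        def lr_interval(counts, alpha=0.05):
            # {lambda : 2*(logL(mle) - logL(lambda)) <= chi2_{1, 1-alpha}}
            n, s = len(counts), np.sum(counts)
            mle = s / n
            loglik = lambda lam: s * np.log(lam) - n * lam  # up to a constant
            cutoff = loglik(mle) - 0.5 * chi2.ppf(1.0 - alpha, df=1)
            g = lambda lam: loglik(lam) - cutoff
            return brentq(g, 1e-12, mle), brentq(g, mle, 10.0 * mle + 10.0)

        print(lr_interval(np.array([3, 5, 2, 4, 6, 3])))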

  12. Marine energy.

    PubMed

    Kerr, David

    2007-04-15

    Marine energy is renewable and carbon free and has the potential to make a significant contribution to energy supplies in the future. In the UK, tidal power barrages and wave energy could make the largest contribution, and tidal stream energy could make a smaller but still a useful contribution. This paper provides an overview of the current status and prospects for electrical generation from marine energy. It concludes that a realistic potential contribution to UK electricity supplies is approximately 80 TWh per year but that many years of development and investment will be required if this potential is to be realized.

  13. Polynomial approximations of a class of stochastic multiscale elasticity problems

    NASA Astrophysics Data System (ADS)

    Hoang, Viet Ha; Nguyen, Thanh Chung; Xia, Bingxing

    2016-06-01

    We consider a class of elasticity equations in ℝ^d whose elastic moduli depend on n separated microscopic scales. The moduli are random and expressed as a linear expansion of a countable sequence of random variables which are independently and identically uniformly distributed in a compact interval. The multiscale Hellinger-Reissner mixed problem that allows for computing the stress directly and the multiscale mixed problem with a penalty term for nearly incompressible isotropic materials are considered. The stochastic problems are studied via deterministic problems that depend on a countable number of real parameters which represent the probabilistic law of the stochastic equations. We study the multiscale homogenized problems that contain all the macroscopic and microscopic information. The solutions of these multiscale homogenized problems are written as generalized polynomial chaos (gpc) expansions. We approximate these solutions by semidiscrete Galerkin approximating problems that project into spaces of functions with only a finite number N of gpc modes. Assuming summability properties for the coefficients of the elastic moduli's expansion, we deduce bounds and summability properties for the solutions' gpc expansion coefficients. These bounds imply explicit rates of convergence in terms of N when the gpc modes used for the Galerkin approximation are chosen to correspond to the best N terms in the gpc expansion. For the mixed problem with a penalty term for nearly incompressible materials, we show that the rate of convergence for the best N term approximation is independent of the ratio of the Lamé constants as it goes to ∞. Correctors for the homogenization problem are deduced. From these we establish correctors for the solutions of the parametric multiscale problems in terms of the semidiscrete Galerkin approximations. For two-scale problems, an explicit homogenization error which is uniform with respect to the parameters is deduced. Together

  14. Quantifying chaotic dynamics from interspike intervals

    NASA Astrophysics Data System (ADS)

    Pavlov, A. N.; Pavlova, O. N.; Mohammad, Y. K.; Shihalov, G. M.

    2015-03-01

    We address the problem of characterization of chaotic dynamics at the input of a threshold device described by an integrate-and-fire (IF) or a threshold crossing (TC) model from the output sequences of interspike intervals (ISIs). We consider the conditions under which quite short sequences of spiking events provide correct identification of the dynamical regime characterized by the single positive Lyapunov exponent (LE). We discuss features of detecting the second LE for both types of the considered models of events generation.

  15. Temporal control mechanism in equaled interval tapping.

    PubMed

    Yamada, M

    1996-05-01

    Subjects at intermediate levels of musical performance produced equaled interval tapping at several tempos. The temporal fluctuation of the tapping was observed and analysed. The power spectrum of the fluctuation showed a critical phenomenon at around the frequency corresponding to a period of 20 taps, for all tempos and all subjects; i.e., the slope of the spectrum was flat or positive in the high-frequency region above the critical frequency but increased as the frequency decreased in the low-frequency region below it. Moreover, auto-regressive models and Akaike's information criterion were introduced to determine the critical tap number. The order of the best auto-regressive model for the temporal fluctuation data was distributed around 20 taps. These results show that a memory capacity of 20 taps governs the control of equaled interval tapping. To interpret the critical phenomenon of 20 taps in terms of the capacity of short-term memory, the so-called magic number seven, a simple chunking assumption was introduced: subjects might have unconsciously chunked every three taps during the tapping. If the chunking assumption were true, then when subjects consciously chunk every seven taps, the memory capacity should shift to about 50 taps. To test this assumption, subjects performed a three-beat rhythm tapping and a seven-beat rhythm tapping with equaled intervals. As a result, the memory capacity for these accented tappings was also estimated as 20 taps. This suggests that the critical phenomenon cannot be explained by the chunking assumption and the magic number seven; rather, it suggests that there exists a memory capacity of 20 taps which is used for equaled interval tapping.
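 
    A sketch of the order-selection step described above, fitting autoregressive models of increasing order by least squares and choosing the order that minimizes Akaike's information criterion; the synthetic series merely stands in for the study's inter-tap intervals:

      import numpy as np

      def ar_aic_orders(x, max_order=40):
          """AIC of least-squares AR(k) fits for k = 1..max_order."""
          x = np.asarray(x, dtype=float) - np.mean(x)
          aics = {}
          for k in range(1, max_order + 1):
              # k lagged samples predict the next one.
              A = np.array([x[i:i + k] for i in range(len(x) - k)])
              b = x[k:]
              coef, *_ = np.linalg.lstsq(A, b, rcond=None)
              rss = np.sum((b - A @ coef) ** 2)
              n = len(b)
              aics[k] = n * np.log(rss / n) + 2 * k
          return aics

      rng = np.random.default_rng(0)
      series = rng.normal(500.0, 20.0, size=300)  # stand-in intervals (ms)
      aics = ar_aic_orders(series)
      print("AIC-selected AR order:", min(aics, key=aics.get))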

  16. A Survey of Techniques for Approximate Computing

    DOE PAGES

    Mittal, Sparsh

    2016-03-18

    Approximate computing trades off computation quality against the effort expended, and as rising performance demands confront plateauing resource budgets, approximate computing has become not merely attractive but imperative. Here, we present a survey of techniques for approximate computing (AC). We discuss strategies for finding approximable program portions and monitoring output quality, techniques for using AC in different processing units (e.g., CPU, GPU and FPGA), processor components, memory technologies, etc., and programming frameworks for AC. Moreover, we classify these techniques based on several key characteristics to emphasize their similarities and differences. Finally, the aim of this paper is to provide insights to researchers into the working of AC techniques and to inspire more efforts in this area to make AC the mainstream computing approach in future systems.

  17. Approximate probability distributions of the master equation.

    PubMed

    Thomas, Philipp; Grima, Ramon

    2015-07-01

    Master equations are common descriptions of mesoscopic systems. Analytical solutions to these equations can rarely be obtained. We here derive an analytical approximation of the time-dependent probability distribution of the master equation using orthogonal polynomials. The solution is given in two alternative formulations: a series with continuous support and a series with discrete support, both of which can be systematically truncated. While both approximations satisfy the system size expansion of the master equation, the continuous-support approximations become increasingly negative and increasingly oscillatory as the truncation order grows. In contrast, the discrete approximations rapidly converge to the underlying non-Gaussian distributions. The theory is shown to lead to particularly simple analytical expressions for the probability distributions of molecule numbers in metabolic reactions and gene expression systems.

  18. AN APPROXIMATE EQUATION OF STATE OF SOLIDS.

    DTIC Science & Technology

    By generalizing experimental data and obtaining unified relations describing the thermodynamic properties of solids, an approximate equation of state is derived which can be applied to a wide class of materials. (Author)

  19. Approximation concepts for efficient structural synthesis

    NASA Technical Reports Server (NTRS)

    Schmit, L. A., Jr.; Miura, H.

    1976-01-01

    It is shown that efficient structural synthesis capabilities can be created by using approximation concepts to mesh finite element structural analysis methods with nonlinear mathematical programming techniques. The history of the application of mathematical programming techniques to structural design optimization problems is reviewed. Several rather general approximation concepts are described along with the technical foundations of the ACCESS 1 computer program, which implements several approximation concepts. A substantial collection of structural design problems involving truss and idealized wing structures is presented. It is concluded that since the basic ideas employed in creating the ACCESS 1 program are rather general, its successful development supports the contention that the introduction of approximation concepts will lead to the emergence of a new generation of practical and efficient, large scale, structural synthesis capabilities in which finite element analysis methods and mathematical programming algorithms will play a central role.

  20. A Survey of Techniques for Approximate Computing

    SciTech Connect

    Mittal, Sparsh

    2016-03-18

    Approximate computing trades off computation quality against the effort expended, and as rising performance demands confront plateauing resource budgets, approximate computing has become not merely attractive but imperative. Here, we present a survey of techniques for approximate computing (AC). We discuss strategies for finding approximable program portions and monitoring output quality, techniques for using AC in different processing units (e.g., CPU, GPU and FPGA), processor components, memory technologies, etc., and programming frameworks for AC. Moreover, we classify these techniques based on several key characteristics to emphasize their similarities and differences. Finally, the aim of this paper is to provide insights to researchers into the working of AC techniques and to inspire more efforts in this area to make AC the mainstream computing approach in future systems.

  1. Approximate probability distributions of the master equation

    NASA Astrophysics Data System (ADS)

    Thomas, Philipp; Grima, Ramon

    2015-07-01

    Master equations are common descriptions of mesoscopic systems. Analytical solutions to these equations can rarely be obtained. We here derive an analytical approximation of the time-dependent probability distribution of the master equation using orthogonal polynomials. The solution is given in two alternative formulations: a series with continuous support and a series with discrete support, both of which can be systematically truncated. While both approximations satisfy the system size expansion of the master equation, the continuous-support approximations become increasingly negative and increasingly oscillatory as the truncation order grows. In contrast, the discrete approximations rapidly converge to the underlying non-Gaussian distributions. The theory is shown to lead to particularly simple analytical expressions for the probability distributions of molecule numbers in metabolic reactions and gene expression systems.

  2. Computational aspects of pseudospectral Laguerre approximations

    NASA Technical Reports Server (NTRS)

    Funaro, Daniele

    1989-01-01

    Pseudospectral approximations in unbounded domains by Laguerre polynomials lead to ill-conditioned algorithms. A scaling function and appropriate numerical procedures are introduced in order to limit these unpleasant phenomena.

  3. Fluctuations of healthy and unhealthy heartbeat intervals

    NASA Astrophysics Data System (ADS)

    Lan, Boon Leong; Toda, Mikito

    2013-04-01

    We show that the RR-interval fluctuations, defined as the differences between successive natural logarithms of the RR intervals, for healthy, congestive-heart-failure (CHF) and atrial-fibrillation (AF) subjects are well modeled by non-Gaussian stable distributions. Our results suggest that healthy or unhealthy RR-interval fluctuation can generally be modeled as a sum of a large number of independent physiological effects which are identically distributed with infinite variance. Furthermore, we show for the first time that one indicator, the scale parameter of the stable distribution, is sufficient to robustly distinguish the three groups of subjects. The scale parameters for healthy subjects are smaller than those for AF subjects but larger than those for CHF subjects; this ordering suggests that the scale parameter could be used to objectively quantify the severity of CHF and AF over time and also serve as an early warning signal for a healthy person when it approaches either boundary of the healthy range.
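 
    A hedged sketch of the modeling step, assuming SciPy's levy_stable distribution; the RR series below is synthetic stand-in data, and maximum likelihood fitting of stable laws can be slow:

      import numpy as np
      from scipy.stats import levy_stable

      rng = np.random.default_rng(1)
      # Stand-in RR intervals in seconds; real data would come from an ECG.
      rr = np.clip(0.8 + 0.02 * rng.standard_t(df=3, size=500).cumsum() / 25,
                   0.4, 1.5)

      # Fluctuations as defined in the abstract: differences of log RR values.
      fluct = np.diff(np.log(rr))

      # fit returns the four stable parameters (alpha, beta, loc, scale).
      alpha, beta, loc, scale = levy_stable.fit(fluct)
      print(f"stability index alpha = {alpha:.2f}, scale = {scale:.5f}")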

  4. New Madrid seismic zone recurrence intervals

    SciTech Connect

    Schweig, E.S. (Center for Earthquake Research and Information, Memphis, TN); Ellis, M.A.

    1993-03-01

    Frequency-magnitude relations in the New Madrid seismic zone suggest that great earthquakes should occur every 700-1,200 yrs, implying relatively high strain rates. These estimates are supported by some geological and GPS results. Recurrence intervals of this order should have produced about 50 km of strike-slip offset since Miocene time. No subsurface evidence for such large displacements is known within the seismic zone. Moreover, the irregular fault pattern forming a compressive step that one sees today is not compatible with large displacements. There are at least three possible interpretations of the observations of short recurrence intervals and high strain rates alongside an apparently youthful fault geometry and a lack of major post-Miocene deformation. One is that the seismological and geodetic evidence are misleading. A second possibility is that activity in the region is cyclic; that is, the geological and geodetic observations that suggest relatively short recurrence intervals reflect a time of high, but geologically temporary, pore-fluid pressure. Zoback and Zoback have suggested such a model for intraplate seismicity in general. Alternatively, the New Madrid seismic zone is a geologically young feature that has been active for only the last few tens of thousands of years. In support of this, we observe an irregular fault geometry associated with an unstable compressive step, a series of en echelon and discontinuous lineaments that may define the position of a youthful linking fault, and a general absence of significant post-Eocene faulting or topography.

  5. The closure approximation in the hierarchy equations.

    NASA Technical Reports Server (NTRS)

    Adomian, G.

    1971-01-01

    The expectation of the solution process in a stochastic operator equation can be obtained from averaged equations only under very special circumstances. Conditions for validity are given and the significance and validity of the approximation in widely used hierarchy methods and the ?self-consistent field' approximation in nonequilibrium statistical mechanics are clarified. The error at any level of the hierarchy can be given and can be avoided by the use of the iterative method.

  6. Approximate String Matching with Reduced Alphabet

    NASA Astrophysics Data System (ADS)

    Salmela, Leena; Tarhio, Jorma

    We present a method to speed up approximate string matching by mapping the actual alphabet to a smaller alphabet. We apply the alphabet reduction scheme to a tuned version of the approximate Boyer-Moore algorithm utilizing the Four-Russians technique. Our experiments show that the alphabet reduction makes the algorithm faster. Especially in the k-mismatch case, the new variation is faster than earlier algorithms for English data with small values of k.
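 
    A toy sketch of the general idea (not the tuned Boyer-Moore/Four-Russians variant of the paper): map characters into a smaller alphabet, use reduced-alphabet mismatch counts as a cheap filter, and verify surviving positions against the original text. The bucket mapping below is an arbitrary illustrative choice:

      def reduce_char(c, q=4):
          # A real scheme would pick the mapping to balance frequencies.
          return ord(c) % q

      def k_mismatch_search(text, pattern, k):
          m = len(pattern)
          red_pat = [reduce_char(c) for c in pattern]
          red_txt = [reduce_char(c) for c in text]
          hits = []
          for i in range(len(text) - m + 1):
              # The mapping can only merge characters, so the reduced-alphabet
              # mismatch count never exceeds the true one: a count > k safely
              # rules the position out.
              if sum(a != b for a, b in zip(red_txt[i:i + m], red_pat)) <= k:
                  if sum(a != b for a, b in zip(text[i:i + m], pattern)) <= k:
                      hits.append(i)
          return hits

      print(k_mismatch_search("approximate string matching", "matchong", 1))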

  7. Polynomial approximation of functions in Sobolev spaces

    NASA Technical Reports Server (NTRS)

    Dupont, T.; Scott, R.

    1980-01-01

    Constructive proofs and several generalizations of approximation results of J. H. Bramble and S. R. Hilbert are presented. Using an averaged Taylor series, we represent a function as a polynomial plus a remainder. The remainder can be manipulated in many ways to give different types of bounds. Approximation of functions in fractional order Sobolev spaces is treated as well as the usual integer order spaces and several nonstandard Sobolev-like spaces.

  8. Polynomial approximation of functions in Sobolev spaces

    SciTech Connect

    Dupont, T.; Scott, R.

    1980-04-01

    Constructive proofs and several generalizations of approximation results of J. H. Bramble and S. R. Hilbert are presented. Using an averaged Taylor series, we represent a function as a polynomial plus a remainder. The remainder can be manipulated in many ways to give different types of bounds. Approximation of functions in fractional order Sobolev spaces is treated as well as the usual integer order spaces and several nonstandard Sobolev-like spaces.

  9. Computing functions by approximating the input

    NASA Astrophysics Data System (ADS)

    Goldberg, Mayer

    2012-12-01

    In computing real-valued functions, it is ordinarily assumed that the input to the function is known, and it is the output that we need to approximate. In this work, we take the opposite approach: we show how to compute the values of some transcendental functions by approximating the input to these functions, and obtaining exact answers for their output. Our approach assumes only the most rudimentary knowledge of algebra and trigonometry, and makes no use of calculus.

  10. A dimension-wise method for the static analysis of structures with interval parameters

    NASA Astrophysics Data System (ADS)

    Xu, MengHui; Qiu, ZhiPing

    2014-10-01

    A novel method for the static analysis of structures with interval parameters under uncertain loads is proposed, which overcomes the inherent conservatism introduced by conventional interval analysis due to its ignoring the dependency phenomenon. Instead of capturing the extrema of the structural static responses over the entire space spanned by the uncertain parameters, their lower and upper bounds are calculated at the minimal and maximal point vectors obtained dimension by dimension with respect to the uncertain parameters based on the Legendre orthogonal polynomial approximation, avoiding the potential engineering insignificance caused by the optimization strategy. After a theoretical analysis, both the accuracy and applicability of the proposed method are verified.

  11. Hybrid functionals and GW approximation in the FLAPW method

    NASA Astrophysics Data System (ADS)

    Friedrich, Christoph; Betzinger, Markus; Schlipf, Martin; Blügel, Stefan; Schindlmayr, Arno

    2012-07-01

    We present recent advances in numerical implementations of hybrid functionals and the GW approximation within the full-potential linearized augmented-plane-wave (FLAPW) method. The former is an approximation for the exchange-correlation contribution to the total energy functional in density-functional theory, and the latter is an approximation for the electronic self-energy in the framework of many-body perturbation theory. All implementations employ the mixed product basis, which has evolved into a versatile basis for the products of wave functions, describing the incoming and outgoing states of an electron that is scattered by interacting with another electron. It can thus be used for representing the nonlocal potential in hybrid functionals as well as the screened interaction and related quantities in GW calculations. In particular, the six-dimensional space integrals of the Hamiltonian exchange matrix elements (and exchange self-energy) decompose into sums over vector-matrix-vector products, which can be evaluated easily. The correlation part of the GW self-energy, which contains a time or frequency dependence, is calculated on the imaginary frequency axis with a subsequent analytic continuation to the real axis or, alternatively, by a direct frequency convolution of the Green function G and the dynamically screened Coulomb interaction W along a contour integration path that avoids the poles of the Green function. Hybrid-functional and GW calculations are notoriously computationally expensive. We present a number of tricks that reduce the computational cost considerably, including the use of spatial and time-reversal symmetries, modifications of the mixed product basis with the aim to optimize it for the correlation self-energy and another modification that makes the Coulomb matrix sparse, analytic expansions of the interaction potentials around the point of divergence at k = 0, and a nested density and density-matrix convergence scheme for hybrid

  12. Legendre-Tau approximation for functional differential equations. Part 3: Eigenvalue approximations and uniform stability

    NASA Technical Reports Server (NTRS)

    Ito, K.

    1984-01-01

    The stability and convergence properties of the Legendre-tau approximation for hereditary differential systems are analyzed. A characteristic equation is derived for the eigenvalues of the resulting approximate system. As a result of this derivation, the uniform exponential stability of the solution semigroup is preserved under approximation. This is the key to obtaining the convergence of approximate solutions of the algebraic Riccati equation in trace norm.

  13. Flood frequency analysis using multi-objective optimization based interval estimation approach

    NASA Astrophysics Data System (ADS)

    Kasiviswanathan, K. S.; He, Jianxun; Tay, Joo-Hwa

    2017-02-01

    Flood frequency analysis (FFA) is a necessary tool for water resources management and water infrastructure design. Owing to the variability in sample representation, distribution selection, and distribution parameter estimation, flood quantile estimation is subject to various levels of uncertainty, which are neither negligible nor avoidable. Hence, alternatives to the conventional approach of FFA are desired for quantifying the uncertainty, such as in the form of a prediction interval. The primary focus of this paper was to develop a novel approach to quantify and optimize the prediction interval resulting from the non-stationarity of the data set, which is reflected in the estimated distribution parameters, in FFA. The paper proposes the combination of a multi-objective optimization approach and an ensemble simulation technique to determine the optimal perturbations of distribution parameters for constructing the prediction interval of flood quantiles in FFA. To demonstrate the proposed approach, annual maximum daily flow data collected from two gauge stations on the Bow River, Alberta, Canada, were used. The results suggest that the proposed method can successfully capture the uncertainty in quantile estimates using the prediction interval, as the number of observations falling within the constructed prediction interval is approximately maximized while the width of the prediction interval is minimized.

  14. Growing degree hours - a simple, accurate, and precise protocol to approximate growing heat summation for grapevines

    NASA Astrophysics Data System (ADS)

    Gu, S.

    2016-08-01

    Despite its low accuracy and consistency, growing degree days (GDD) has been widely used to approximate growing heat summation (GHS) for regional classification and phenological prediction. GDD is usually calculated from the mean of daily minimum and maximum temperatures (GDDmm) above a growing base temperature (Tgb). To determine approximation errors and accuracy, daily and cumulative GDDmm were compared to GDD based on daily average temperature (GDDavg), growing degree hours (GDH) based on hourly temperatures, and growing degree minutes (GDM) based on minute-by-minute temperatures. The finite error, due to the difference between measured and true temperatures above Tgb, is large in GDDmm but negligible in GDDavg, GDH, and GDM, depending only upon the number of measured temperatures used for daily approximation. The hidden negative error, due to temperatures below Tgb being averaged over approximation intervals larger than the measuring interval, is large in GDDmm and GDDavg but negligible in GDH and GDM. Both GDH and GDM improve GHS approximation accuracy over GDDmm or GDDavg by summing multiple integration rectangles to reduce both the finite and the hidden negative errors. GDH is proposed as the standardized GHS approximation protocol, providing adequate accuracy and high precision independent of Tgb while requiring simple data recording and processing.
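 
    A minimal numerical contrast between the two estimators on one synthetic day; the sinusoidal temperature trace and the 10 degree C base temperature are illustrative assumptions:

      import numpy as np

      T_BASE = 10.0  # growing base temperature, degrees C (illustrative)

      # Synthetic hourly temperatures: a sinusoid between 6 and 22 degrees C.
      hours = np.arange(24)
      temps = 14.0 + 8.0 * np.sin((hours - 9) * np.pi / 12)

      # GDDmm: mean of the daily minimum and maximum, clipped at the base.
      gdd_mm = max(0.0, (temps.min() + temps.max()) / 2 - T_BASE)

      # GDH: hourly exceedances summed, so sub-daily dips below the base
      # temperature no longer mask warm hours; divide by 24 for degree-days.
      gdh = np.sum(np.maximum(temps - T_BASE, 0.0)) / 24

      print(f"GDDmm = {gdd_mm:.2f}, GDH = {gdh:.2f} (degree-days)")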

  15. Spectrally-Invariant Approximation Within Atmospheric Radiative Transfer

    NASA Technical Reports Server (NTRS)

    Marshak, A.; Knyazikhin, Y.; Chiu, J. C.; Wiscombe, W. J.

    2011-01-01

    Certain algebraic combinations of the single scattering albedo and the solar radiation reflected from, or transmitted through, vegetation canopies do not vary with wavelength. These "spectrally invariant relationships" are the consequence of the wavelength independence of the extinction coefficient and scattering phase function in vegetation. In general, this wavelength independence does not hold in the atmosphere, but in cloud-dominated atmospheres the total extinction and total scattering phase function vary only weakly with wavelength. This paper identifies the atmospheric conditions under which the spectrally invariant approximation can accurately describe the extinction and scattering properties of cloudy atmospheres. The validity of the assumptions and the accuracy of the approximation are tested with 1D radiative transfer calculations using publicly available radiative transfer models: Discrete Ordinate Radiative Transfer (DISORT) and Santa Barbara DISORT Atmospheric Radiative Transfer (SBDART). It is shown for cloudy atmospheres with cloud optical depth above 3, and for spectral intervals that exclude strong water vapor absorption, that the spectrally invariant relationships found in vegetation canopy radiative transfer are valid to better than 5%. The physics behind this phenomenon, its mathematical basis, and possible applications to remote sensing and climate are discussed.

  16. Heats of Segregation of BCC Binaries from Ab Initio and Quantum Approximate Calculations

    NASA Technical Reports Server (NTRS)

    Good, Brian S.

    2003-01-01

    We compare dilute-limit segregation energies for selected BCC transition metal binaries computed using ab initio and quantum approximate energy methods. Ab initio calculations are carried out using the CASTEP plane-wave pseudopotential computer code, while quantum approximate results are computed using the Bozzolo-Ferrante-Smith (BFS) method with the most recent parameters. Quantum approximate segregation energies are computed with and without atomistic relaxation. Results are discussed within the context of segregation models driven by strain and bond-breaking effects. We compare our results with full-potential quantum calculations and with available experimental results.

  17. On uniform approximation of elliptic functions by Padé approximants

    NASA Astrophysics Data System (ADS)

    Khristoforov, Denis V.

    2009-06-01

    Diagonal Padé approximants of elliptic functions are studied. It is known that the absence of uniform convergence of such approximants is related to them having spurious poles that do not correspond to any singularities of the function being approximated. A sequence of piecewise rational functions is proposed, which is constructed from two neighbouring Padé approximants and approximates an elliptic function locally uniformly in the Stahl domain. The proof of the convergence of this sequence is based on deriving strong asymptotic formulae for the remainder function and Padé polynomials and on the analysis of the behaviour of a spurious pole. Bibliography: 23 titles.

  18. Approximation of Bivariate Functions via Smooth Extensions

    PubMed Central

    Zhang, Zhihua

    2014-01-01

    For a smooth bivariate function defined on a general domain with arbitrary shape, it is difficult to do Fourier approximation or wavelet approximation. In order to solve these problems, in this paper, we give an extension of the bivariate function on a general domain with arbitrary shape to a smooth, periodic function in the whole space or to a smooth, compactly supported function in the whole space. These smooth extensions have simple and clear representations which are determined by this bivariate function and some polynomials. After that, we expand the smooth, periodic function into a Fourier series or a periodic wavelet series or we expand the smooth, compactly supported function into a wavelet series. Since our extensions are smooth, the obtained Fourier coefficients or wavelet coefficients decay very fast. Since our extension tools are polynomials, the moment theorem shows that a lot of wavelet coefficients vanish. From this, with the help of well-known approximation theorems, using our extension methods, the Fourier approximation and the wavelet approximation of the bivariate function on the general domain with small error are obtained. PMID:24683316

  19. Optical properties of solids within the independent-quasiparticle approximation: Dynamical effects

    NASA Astrophysics Data System (ADS)

    del Sole, R.; Girlanda, Raffaello

    1996-11-01

    The independent-quasiparticle approximation to calculating the optical properties of solids is extended to account for dynamical effects, namely, the energy dependence of the GW self-energy. We use a simple but realistic model of such energy dependence. We find that the inclusion of dynamical effects reduces considerably the calculated absorption spectrum and makes the agreement with experiment worse.

  20. Distortion properties of the interval spectrum of IPFM generated heartbeats for heart rate variability analysis.

    PubMed

    Brennan, M; Palaniswami, M; Kamen, P

    2001-11-01

    The integral pulse frequency modulation (IPFM) model converts a continuous-time signal into a modulated series of event times, often represented as a pulse train. The IPFM process is important to the field of heart rate variability (HRV) as a simple model of the sinus modulation of heart rate. In this paper, we discuss the distortion properties associated with employing the interval spectrum for the recovery of the input signal from an IPFM process's output pulse train. The results state, in particular for HRV, how precisely the interval spectrum can be used to infer the modulation signal responsible for a series of heartbeats. We have developed a detailed analytical approximation of the interval spectrum of an IPFM process with multiple sinusoids as the input signal. Employing this result, we describe the structure and the distortion of the interval spectrum. The distortion properties of the interval spectrum are investigated systematically for a pair of frequency components. The effects of linear and nonlinear distortion of the fundamentals, the overall contribution of harmonic components to the total power, the relative contribution of "folded back" power due to aliasing and the total distortion of the input spectrum are investigated. We also provide detailed comparisons between the interval spectrum and the spectrum of counts (SOC). The spectral distortion is significant enough that caution should be taken when interpreting the interval spectrum, especially for high frequencies or large modulation amplitudes. Nevertheless, the distortion levels are not significantly larger than those of the SOC. Therefore, the spectrum of intervals may be considered a viable technique that suffers more distortion than the SOC.
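 
    The IPFM generation step itself is compact enough to sketch; the sinusoidal modulation, rates and step size below are illustrative assumptions:

      import numpy as np

      def ipfm_events(modulation, t_end, mean_rate=1.0, dt=0.001):
          """Integrate (1 + m(t)) * mean_rate; emit an event per unit area."""
          events, acc, t = [], 0.0, 0.0
          while t < t_end:
              acc += (1.0 + modulation(t)) * mean_rate * dt
              if acc >= 1.0:
                  events.append(t)
                  acc -= 1.0
              t += dt
          return np.array(events)

      # 0.1 Hz modulation with 10% depth around a 1 Hz mean event rate.
      beats = ipfm_events(lambda t: 0.1 * np.sin(2 * np.pi * 0.1 * t), 300.0)
      intervals = np.diff(beats)  # the interval series whose spectrum is studied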

  1. Feedback functions for variable-interval reinforcement

    PubMed Central

    Nevin, John A.; Baum, William M.

    1980-01-01

    On a given variable-interval schedule, the average obtained rate of reinforcement depends on the average rate of responding. An expression for this feedback effect is derived from the assumptions that free-operant responding occurs in bursts with a constant tempo, alternating with periods of engagement in other activities; that the durations of bursts and other activities are exponentially distributed; and that the rates of initiating and terminating bursts are inversely related. The expression provides a satisfactory account of the data of three experiments. PMID:16812187
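 
    The feedback relation can be illustrated by simulation; the sketch below checks the standard first-order result r = 1/(t + 1/b) for a random (exponential) interval schedule with Poisson responding, which is related to, though not identical with, the burst-based expression derived in the paper:

      import numpy as np

      def obtained_rate(mean_interval, response_rate, duration=2e5, seed=0):
          """Reinforcers per second on a simulated random-interval schedule."""
          rng = np.random.default_rng(seed)
          t, reinforcers = 0.0, 0
          while t < duration:
              t += rng.exponential(mean_interval)        # schedule arms
              t += rng.exponential(1.0 / response_rate)  # next response collects
              reinforcers += 1
          return reinforcers / duration

      for b in (0.05, 0.2, 1.0):  # response rates, responses per second
          r = obtained_rate(mean_interval=30.0, response_rate=b)
          print(f"b = {b:.2f}/s: simulated r = {r:.4f}/s, "
                f"predicted = {1.0 / (30.0 + 1.0 / b):.4f}/s")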

  2. Asymptotic Theory for Nonparametric Confidence Intervals.

    DTIC Science & Technology

    1982-07-01

    Technical report (No. 63) by Peter W. Glynn on asymptotic theory for nonparametric confidence intervals. Only fragments of the scanned report documentation page survive in this record, among them a reference to Roy, S.N. and Potthoff, R.F. (1958), "Confidence bounds on vector analogues of the 'ratio of the mean'," and to Ann. Math. Statist. 14, 56-62.

  3. Ancilla-approximable quantum state transformations

    SciTech Connect

    Blass, Andreas; Gurevich, Yuri

    2015-04-15

    We consider the transformations of quantum states obtainable by a process of the following sort. Combine the given input state with a specially prepared initial state of an auxiliary system. Apply a unitary transformation to the combined system. Measure the state of the auxiliary subsystem. If (and only if) it is in a specified final state, consider the process successful, and take the resulting state of the original (principal) system as the result of the process. We review known information about the exact realization of transformations by such a process. Then we present results about the approximate realization of finite partial transformations. We consider primarily approximation to within a specified positive ε, but also address the question of arbitrarily close approximation.

  4. Fast wavelet based sparse approximate inverse preconditioner

    SciTech Connect

    Wan, W.L.

    1996-12-31

    Incomplete LU factorization is a robust preconditioner for both general and PDE problems but unfortunately not easy to parallelize. Recent studies by Huckle and Grote and by Chow and Saad showed that the sparse approximate inverse could be a potential alternative while being readily parallelizable. However, for the special class of matrices A arising from elliptic PDE problems, their preconditioners are not optimal in the sense of being independent of the mesh size. A reason may be that no good sparse approximate inverse exists for the dense inverse matrix. Our observation is that for this kind of matrix, the inverse entries typically exhibit piecewise smooth changes. We can take advantage of this fact and use wavelet compression techniques to construct a better sparse approximate inverse preconditioner. We show numerically that our approach is effective for this kind of matrix.

  5. The Cell Cycle Switch Computes Approximate Majority

    NASA Astrophysics Data System (ADS)

    Cardelli, Luca; Csikász-Nagy, Attila

    2012-09-01

    Both computational and biological systems have to make decisions about switching from one state to another. The 'Approximate Majority' computational algorithm provides the asymptotically fastest way to reach a common decision by all members of a population between two possible outcomes, where the decision approximately matches the initial relative majority. The network that regulates mitotic entry of the cell cycle in eukaryotes also makes a decision before it induces early mitotic processes. Here we show that the switch from inactive to active forms of the mitosis-promoting Cyclin Dependent Kinases is driven by a system that is related to both the structure and the dynamics of the Approximate Majority computation. We investigate the behavior of these two switches by deterministic, stochastic and probabilistic methods and show that the steady states and temporal dynamics of the two systems are similar and that they are exchangeable as components of oscillatory networks.
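 
    A small simulation of the three-state Approximate Majority protocol referred to above (the interaction rules are the standard population-protocol ones; the population size and initial split are illustrative):

      import random

      def approximate_majority(n_x, n_y, seed=0):
          """Run random pairwise interactions until consensus on X or Y."""
          rng = random.Random(seed)
          pop = ["X"] * n_x + ["Y"] * n_y  # "B" is the undecided state
          while any(s != pop[0] for s in pop):
              i, j = rng.sample(range(len(pop)), 2)
              a, b = pop[i], pop[j]
              if {a, b} == {"X", "Y"}:
                  pop[j] = "B"   # disagreeing pair: responder goes blank
              elif a != "B" and b == "B":
                  pop[j] = a     # decided agent recruits an undecided one
          return pop[0]

      # A 60/40 split almost always resolves to the initial majority, X.
      print(approximate_majority(60, 40))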

  6. Separable approximations of two-body interactions

    NASA Astrophysics Data System (ADS)

    Haidenbauer, J.; Plessas, W.

    1983-01-01

    We perform a critical discussion of the efficiency of the Ernst-Shakin-Thaler method for a separable approximation of arbitrary two-body interactions by a careful examination of separable 3S1-3D1 N-N potentials that were constructed via this method by Pieper. Not only the on-shell properties of these potentials are considered, but also a comparison is made of their off-shell characteristics relative to the Reid soft-core potential. We point out a peculiarity in Pieper's application of the Ernst-Shakin-Thaler method, which leads to a resonant-like behavior of his potential 3SD1D. It is indicated where care has to be taken in order to circumvent drawbacks inherent in the Ernst-Shakin-Thaler separable approximation scheme.

  7. Approximate solutions of the hyperbolic Kepler equation

    NASA Astrophysics Data System (ADS)

    Avendano, Martín; Martín-Molina, Verónica; Ortigas-Galindo, Jorge

    2015-12-01

    We provide an approximate zero S̃(g, L) for the hyperbolic Kepler equation S − g·arcsinh(S) − L = 0 with g ∈ (0,1) and L ∈ [0,∞). We prove, by using Smale's α-theory, that Newton's method starting at our approximate zero produces a sequence that converges to the actual solution S(g, L) at quadratic speed, i.e. if S_n is the value obtained after n iterations, then |S_n − S| ≤ 0.5^(2^n − 1)·|S̃ − S|. The approximate zero S̃(g, L) is a piecewise-defined function involving several linear expressions and one expression with cubic and square roots. In bounded regions of (0,1) × [0,∞) that exclude a small neighborhood of g = 1, L = 0, we also provide a method to construct simpler starters involving only constants.
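 
    A sketch of the Newton iteration to which the quadratic-convergence guarantee applies; the crude starter below is a placeholder for the paper's piecewise-defined approximate zero:

      import math

      def hyperbolic_kepler(g, L, tol=1e-14, max_iter=50):
          """Solve S - g*arcsinh(S) - L = 0 for S by Newton's method."""
          s = L  # crude starter; the paper constructs a much better one
          for _ in range(max_iter):
              f = s - g * math.asinh(s) - L
              fp = 1.0 - g / math.sqrt(1.0 + s * s)  # positive for g < 1
              step = f / fp
              s -= step
              if abs(step) < tol:
                  break
          return s

      S = hyperbolic_kepler(g=0.5, L=2.0)
      print(S, S - 0.5 * math.asinh(S) - 2.0)  # residual should be ~0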

  8. The Verification of Influence of the Point "C" Position from Given Interval to Solving Systems with Highspeed Feedback

    NASA Astrophysics Data System (ADS)

    Bajčičáková, Ingrida; Jurovatá, Dominika

    2015-08-01

    This article deals with the design of an effective numerical scheme for solving three-point boundary value problems for second-order nonlinear singularly perturbed differential equations with initial conditions. In particular, it focuses on the analysis of the solutions when the point c of the given interval is not the centre of that interval. The resulting system of nonlinear algebraic equations is solved by the Newton-Raphson method in MATLAB. We also verify the convergence of approximate solutions of the original problem to the solution of the reduced problem, and we discuss the solution of the problem in the situation when the point c is in the middle of the given interval.

  9. Optical spectroscopies of materials from orbital-dependent approximations

    NASA Astrophysics Data System (ADS)

    Dabo, Ismaila; Ferretti, Andrea; Cococcioni, Matteo; Marzari, Nicola

    2013-03-01

    Electronic-structure calculations based upon density-functional theory (DFT) have been fruitful in diverse areas of materials science. Despite their exceptional success and widespread use, a range of spectroscopic properties fall beyond the scope of existing DFT approximations. Failures of DFT calculations in describing electronic and optical phenomena take root in the lack of piecewise linearity of approximate functionals. This known deficiency reverberates negatively on the spectroscopic description of systems involving fractionally occupied or spatially delocalized electronic states, such as donor-acceptor organic heterojunctions and heavy-metal organometallic complexes. In this talk, I will present a class of orbital-dependent density-functional theory (OD-DFT) methods that are derived from a multidensity formulation of the electronic-structure problem and that restore the piecewise linearity of the total energy via Koopmans' theorem. Such OD-DFT electronic-structure approximations are apt at describing full orbital spectra within a few tenths of an electron-volt relative to experimental photoemission spectroscopies and with the additional benefit of providing appreciably improved total energies for molecular systems with fractional occupations.

  10. Nucleon-pair approximation to the nuclear shell model

    NASA Astrophysics Data System (ADS)

    Zhao, Y. M.; Arima, A.

    2014-12-01

    Atomic nuclei are complex systems of nucleons (protons and neutrons). Nucleons interact with each other via an attractive and short-range force. This feature of the interaction leads to a pattern of dominantly monopole and quadrupole correlations between like particles (i.e., proton-proton and neutron-neutron correlations) in low-lying states of atomic nuclei. As a consequence, among dozens or even hundreds of possible types of nucleon pairs, very few nucleon pairs, such as proton and neutron pairs with spin zero and two (in some cases spin four), and occasionally isoscalar spin-aligned proton-neutron pairs, play important roles in low-energy nuclear structure. The nucleon-pair approximation therefore provides an efficient truncation scheme for the full shell-model configuration space, which is otherwise too large to handle for medium and heavy nuclei in the foreseeable future. Furthermore, the nucleon-pair approximation leads to simple pictures in physics, as the dimension of the nucleon-pair subspace is always small. The present paper aims at a sound review of its history, formulation, validity, applications, as well as its link to previous approaches, with a focus on developments in the last two decades. The applicability of the nucleon-pair approximation and numerical calculations of low-lying states for realistic atomic nuclei are demonstrated with examples. Applications of pair approximations to other problems are also discussed.

  11. Very fast approximate reconstruction of MR images.

    PubMed

    Angelidis, P A

    1998-11-01

    The ultra fast Fourier transform (UFFT) provides the means for a very fast computation of a magnetic resonance (MR) image, because it is implemented using only additions and no multiplications at all. It achieves this by approximating the complex exponential functions involved in the Fourier transform (FT) sum with computationally simpler periodic functions. This approximation introduces erroneous spectrum peaks of small magnitude. We examine the performance of this transform in some typical MRI signals. The results show that this transform can very quickly provide an MR image. It is proposed to be used as a replacement of the classically used FFT whenever a fast general overview of an image is required.

  12. Congruence Approximations for Entropy Endowed Hyperbolic Systems

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.; Saini, Subhash (Technical Monitor)

    1998-01-01

    Building upon the standard symmetrization theory for hyperbolic systems of conservation laws, congruence properties of the symmetrized system are explored. These congruence properties suggest variants of several stabilized numerical discretization procedures for hyperbolic equations (upwind finite-volume, Galerkin least-squares, discontinuous Galerkin) that benefit computationally from congruence approximation. Specifically, it becomes straightforward to construct the spatial discretization and Jacobian linearization for these schemes (given a small amount of derivative information) for possible use in Newton's method, discrete optimization, homotopy algorithms, etc. Some examples will be given for the compressible Euler equations and the nonrelativistic MHD equations using linear and quadratic spatial approximation.

  13. Bronchopulmonary segments approximation using anatomical atlas

    NASA Astrophysics Data System (ADS)

    Busayarat, Sata; Zrimec, Tatjana

    2007-03-01

    Bronchopulmonary segments are valuable as they give more accurate localization than lung lobes. Traditionally, determining the segments requires segmentation and identification of segmental bronchi, which, in turn, require volumetric imaging data. In this paper, we present a method for approximating the bronchopulmonary segments for sparse data by effectively using an anatomical atlas. The atlas is constructed from a volumetric data and contains accurate information about bronchopulmonary segments. A new ray-tracing based image registration is used for transferring the information from the atlas to a query image. Results show that the method is able to approximate the segments on sparse HRCT data with slice gap up to 25 millimeters.

  14. Approximate learning algorithm in Boltzmann machines.

    PubMed

    Yasuda, Muneki; Tanaka, Kazuyuki

    2009-11-01

    Boltzmann machines can be regarded as Markov random fields. For binary cases, they are equivalent to the Ising spin model in statistical mechanics. Learning in Boltzmann machines is one of the NP-hard problems, so in general we have to use approximate methods to construct practical learning algorithms in this context. In this letter, we propose new and practical learning algorithms for Boltzmann machines by using the belief propagation algorithm and the linear response approximation, which are often referred to as advanced mean-field methods. Finally, we show the validity of our algorithm using numerical experiments.

  15. Interval estimates for closure-phase and closure-amplitude imaging in radio astronomy

    NASA Technical Reports Server (NTRS)

    Kreinovich, Vladik; Bernat, Andrew; Kosheleva, Olga; Finkel'shtejn, Andrej

    1992-01-01

    Interval estimates for closure-phase and closure-amplitude imaging that enable the reconstruction of a radio image from the results of approximate measurements are presented. Even when intervals for the measured values are known, the precision of the reconstruction cannot be obtained by standard interval methods, because phase values lie on a circle rather than on the real line. If the phase θ(x̄) is measured with precision ε, so that the closure phase θ(x̄) + θ(ȳ) − θ(x̄ + ȳ) is known with precision 3ε, then from these measurements θ can be reconstructed with precision 6ε. Similar estimates are given for closure amplitude.

  16. Hourly Wind Speed Interval Prediction in Arid Regions

    NASA Astrophysics Data System (ADS)

    Chaouch, M.; Ouarda, T.

    2013-12-01

    The long, extended warm and dry summers and the low rates of rain and humidity are the main factors that explain the increase of electricity consumption in hot arid regions. In such regions, ventilating and air-conditioning installations, typically the most energy-intensive among energy consumption activities, are essential for securing healthy, safe and suitable indoor thermal conditions for building occupants and stored materials. The use of renewable energy resources such as solar and wind represents one of the most relevant solutions to the challenge of increasing electricity demand. In recent years, wind energy has been gaining importance among researchers worldwide. Wind energy is intermittent in nature, and hence power system scheduling and dynamic control of wind turbines require an estimate of wind energy. Accurate forecasting of wind speed is a challenging task for the wind energy research field. In fact, due to the large variability of wind speed caused by the unpredictable and dynamic nature of the earth's atmosphere, there are many fluctuations in wind power production. This inherent variability of wind speed is the main cause of the uncertainty observed in wind power generation. Furthermore, wind power forecasts may be obtained indirectly by modeling the wind speed series and then transforming the forecasts through a power curve. Wind speed forecasting techniques have received substantial attention recently and several models have been developed. Basically, two main approaches have been proposed in the literature: (1) physical models such as Numerical Weather Forecast and (2) statistical models such as autoregressive integrated moving average (ARIMA) models and neural networks. While the initial focus in the literature has been on point forecasts, the need to quantify forecast uncertainty and communicate the risk of extreme ramp events has led to an interest in producing probabilistic forecasts. In short term

  17. Sprint vs. interval training in football.

    PubMed

    Ferrari Bravo, D; Impellizzeri, F M; Rampinini, E; Castagna, C; Bishop, D; Wisloff, U

    2008-08-01

    The aim of this study was to compare the effects of high-intensity aerobic interval and repeated-sprint ability (RSA) training on aerobic and anaerobic physiological variables in male football players. Forty-two participants were randomly assigned to either the interval training group (ITG, 4 x 4 min running at 90 - 95 % of HRmax; n = 21) or repeated-sprint training group (RSG, 3 x 6 maximal shuttle sprints of 40 m; n = 21). The following outcomes were measured at baseline and after 7 weeks of training: maximum oxygen uptake, respiratory compensation point, football-specific endurance (Yo-Yo Intermittent Recovery Test, YYIRT), 10-m sprint time, jump height and power, and RSA. Significant group x time interaction was found for YYIRT (p = 0.003) with RSG showing greater improvement (from 1917 +/- 439 to 2455 +/- 488 m) than ITG (from 1846 +/- 329 to 2077 +/- 300 m). Similarly, a significant interaction was found in RSA mean time (p = 0.006) with only the RSG group showing an improvement after training (from 7.53 +/- 0.21 to 7.37 +/- 0.17 s). No other group x time interactions were found. Significant pre-post changes were found for absolute and relative maximum oxygen uptake and respiratory compensation point (p < 0.05). These findings suggest that the RSA training protocol used in this study can be an effective training strategy for inducing aerobic and football-specific training adaptations.

  18. Neurocomputational Models of Interval and Pattern Timing

    PubMed Central

    Hardy, Nicholas F.; Buonomano, Dean V.

    2016-01-01

    Most of the computations and tasks performed by the brain require the ability to tell time, and process and generate temporal patterns. Thus, there is a diverse set of neural mechanisms in place to allow the brain to tell time across a wide range of scales: from interaural delays on the order of microseconds to circadian rhythms and beyond. Temporal processing is most sophisticated on the scale of tens of milliseconds to a few seconds, because it is within this range that the brain must recognize and produce complex temporal patterns—such as those that characterize speech and music. Most models of timing, however, have focused primarily on simple intervals and durations, thus it is not clear whether they will generalize to complex pattern-based temporal tasks. Here, we review neurobiologically based models of timing in the subsecond range, focusing on whether they generalize to tasks that require placing consecutive intervals in the context of an overall pattern, that is, pattern timing. PMID:27790629

  19. [Severe craniocerebral injuries with a lucid interval].

    PubMed

    Vilalta, J; Rubio, E; Castaño, C H; Guitart, J M; Bosch, J

    1993-02-01

    Variables were analyzed according to mortality in 35 patients with severe craniocerebral injuries following a lucid interval. The variables analyzed were: age under 40 years, the time intervals from accident to admission (TAA) and from admission to operation (TAO), level of consciousness (Glasgow scale), associated extracranial lesions, type of intracranial lesion, and tomodensitometric signs of intracranial hypertension. The only variables demonstrating statistically significant differences (p < 0.05) were the level of consciousness (Glasgow scale < 6 points) and the presence of subdural hematoma. Twelve (70.5%) of the patients who died scored less than 6 on the Glasgow scale, compared with only 5 (27.7%) of the survivors. Eleven (64.7%) of those who died and 4 (22.2%) of the survivors had subdural hematoma. These data suggest that the level of consciousness and the type of lesion are determining factors for mortality in this type of patient. Early detection and aggressive treatment of secondary lesions contribute to improving the prognosis of craniocerebral injuries.

  20. Strong convergence and convergence rates of approximating solutions for algebraic Riccati equations in Hilbert spaces

    NASA Technical Reports Server (NTRS)

    Ito, Kazufumi

    1987-01-01

    The linear quadratic optimal control problem on an infinite time interval for linear time-invariant systems defined on Hilbert spaces is considered. The optimal control is given in feedback form in terms of the solution Π of the associated algebraic Riccati equation (ARE). A Ritz-type approximation is used to obtain a sequence Π_N of finite-dimensional approximations of the solution to the ARE. A sufficient condition under which Π_N converges strongly to Π is obtained. Under this condition, a formula is derived which can be used to obtain a rate of convergence of Π_N to Π. The results are demonstrated for the Galerkin approximation applied to parabolic systems and for the averaging approximation applied to hereditary differential systems.
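 
    As a hedged illustration of the approximation idea (a finite-difference stand-in for the Ritz scheme, not the paper's operator-theoretic construction), one can solve the ARE for finer and finer discretizations of a 1-D heat equation and watch the feedback functional settle:

      import numpy as np
      from scipy.linalg import solve_continuous_are

      def heat_feedback(n):
          """LQR feedback for a 1-D heat equation on n interior grid points."""
          h = 1.0 / (n + 1)
          # Finite-difference Laplacian with Dirichlet boundary conditions.
          A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
               + np.diag(np.ones(n - 1), -1)) / h**2
          B = np.ones((n, 1))  # spatially uniform control (an assumption)
          Q = h * np.eye(n)    # state cost approximating the L2 norm
          R = np.eye(1)
          P = solve_continuous_are(A, B, Q, R)
          # Feedback functional evaluated on a fixed state profile sin(pi*x).
          x = np.sin(np.pi * h * np.arange(1, n + 1))
          return float(B.T @ P @ x)

      for n in (8, 16, 32, 64):
          print(n, heat_feedback(n))  # values should converge as n grows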