Sample records for linear scaling calculation

  1. Preface: Introductory Remarks: Linear Scaling Methods

    NASA Astrophysics Data System (ADS)

    Bowler, D. R.; Fattebert, J.-L.; Gillan, M. J.; Haynes, P. D.; Skylaris, C.-K.

    2008-07-01

    It has been just over twenty years since the publication of the seminal paper on molecular dynamics with ab initio methods by Car and Parrinello [1], and the contribution of density functional theory (DFT) and the related techniques to physics, chemistry, materials science, earth science and biochemistry has been huge. Nevertheless, significant improvements are still being made to the performance of these standard techniques; recent work suggests that speed improvements of one or even two orders of magnitude are possible [2]. One of the areas where major progress has long been expected is in O(N), or linear scaling, DFT, in which the computer effort is proportional to the number of atoms. Linear scaling DFT methods have been in development for over ten years [3] but we are now in an exciting period where more and more research groups are working on these methods. Naturally there is a strong and continuing effort to improve the efficiency of the methods and to make them more robust. But there is also a growing ambition to apply them to challenging real-life problems. This special issue contains papers submitted following the CECAM Workshop 'Linear-scaling ab initio calculations: applications and future directions', held in Lyon from 3-6 September 2007. A noteworthy feature of the workshop is that it included a significant number of presentations involving real applications of O(N) methods, as well as work to extend O(N) methods into areas of greater accuracy (correlated wavefunction methods, quantum Monte Carlo, TDDFT) and large scale computer architectures. As well as explicitly linear scaling methods, the conference included presentations on techniques designed to accelerate and improve the efficiency of standard (that is non-linear-scaling) methods; this highlights the important question of crossover—that is, at what size of system does it become more efficient to use a linear-scaling method? As well as fundamental algorithmic questions, this brings up implementation questions relating to parallelization (particularly with multi-core processors starting to dominate the market) and inherent scaling and basis sets (in both normal and linear scaling codes). For now, the answer seems to lie between 100-1,000 atoms, though this depends on the type of simulation used among other factors. Basis sets are still a problematic question in the area of electronic structure calculations. The linear scaling community has largely split into two camps: those using relatively small basis sets based on local atomic-like functions (where systematic convergence to the full basis set limit is hard to achieve); and those that use necessarily larger basis sets which allow convergence systematically and therefore are the localised equivalent of plane waves. Related to basis sets is the study of Wannier functions, on which some linear scaling methods are based and which give a good point of contact with traditional techniques; they are particularly interesting for modelling unoccupied states with linear scaling methods. There are, of course, as many approaches to linear scaling solution for the density matrix as there are groups in the area, though there are various broad areas: McWeeny-based methods, fragment-based methods, recursion methods, and combinations of these. While many ideas have been in development for several years, there are still improvements emerging, as shown by the rich variety of the talks below. 
    Applications using O(N) DFT methods are now starting to emerge, though they are still clearly not trivial. Once systems to be simulated cross the 10,000 atom barrier, only linear scaling methods can be applied, even with the most efficient standard techniques. One of the most challenging problems remaining, now that ab initio methods can be applied to large systems, is the long timescale problem. Although much of the work presented was concerned with improving the performance of the codes, and applying them to scientifically important problems, there was another important theme: extending functionality. The search for greater accuracy has given an implementation of a density functional designed to model van der Waals interactions accurately, as well as local correlation, TDDFT, QMC and GW methods which, while not explicitly O(N), take advantage of localisation. All speakers at the workshop were invited to contribute to this issue, but not all were able to do so. Hence it is useful to give a complete list of the talks presented, grouped by session; many talks, however, fell within more than one area. This is an exciting time for linear scaling methods, which are already starting to contribute significantly to important scientific problems.

    Applications to nanostructures and biomolecules:
    - A DFT study on the structural stability of Ge 3D nanostructures on Si(001) using CONQUEST (Tsuyoshi Miyazaki, D R Bowler, M J Gillan, T Otsuka and T Ohno)
    - Large scale electronic structure calculation theory and several applications (Takeo Fujiwara and Takeo Hoshi)
    - ONETEP: Linear-scaling DFT with plane waves (Chris-Kriton Skylaris, Peter D Haynes, Arash A Mostofi and Mike C Payne)
    - Maximally-localised Wannier functions as building blocks for large-scale electronic structure calculations (Arash A Mostofi and Nicola Marzari)
    - A linear scaling three dimensional fragment method for ab initio calculations (Lin-Wang Wang, Zhengji Zhao and Juan Meza)
    - Peta-scalable reactive molecular dynamics simulation of mechanochemical processes (Aiichiro Nakano, Rajiv K Kalia, Ken-ichi Nomura, Fuyuki Shimojo and Priya Vashishta)
    - Recent developments and applications of the real-space multigrid (RMG) method (Jerzy Bernholc, M Hodak, W Lu and F Ribeiro)

    Energy minimisation functionals and algorithms:
    - CONQUEST: A linear scaling DFT Code (David R Bowler, Tsuyoshi Miyazaki, Antonio Torralba, Veronika Brazdova, Milica Todorovic, Takao Otsuka and Mike Gillan)
    - Kernel optimisation and the physical significance of optimised local orbitals in the ONETEP code (Peter Haynes, Chris-Kriton Skylaris, Arash Mostofi and Mike Payne)
    - A miscellaneous overview of SIESTA algorithms (Jose M Soler)
    - Wavelets as a basis set for electronic structure calculations and electrostatic problems (Stefan Goedecker)
    - Wavelets as a basis set for linear scaling electronic structure calculations (Mark Rayson)
    - O(N) Krylov subspace method for large-scale ab initio electronic structure calculations (Taisuke Ozaki)
    - Linear scaling calculations with the divide-and-conquer approach and with non-orthogonal localized orbitals (Weitao Yang)
    - Toward efficient wavefunction based linear scaling energy minimization (Valery Weber)
    - Accurate O(N) first-principles DFT calculations using finite differences and confined orbitals (Jean-Luc Fattebert)

    Linear-scaling methods in dynamics simulations, or beyond DFT and ground state properties:
    - An O(N) time-domain algorithm for TDDFT (Guan Hua Chen)
    - Local correlation theory and electronic delocalization (Joseph Subotnik)
    - Ab initio molecular dynamics with linear scaling: foundations and applications (Eiji Tsuchida)
    - Towards a linear scaling Car-Parrinello-like approach to Born-Oppenheimer molecular dynamics (Thomas Kühne, Michele Ceriotti, Matthias Krack and Michele Parrinello)
    - Partial linear scaling for quantum Monte Carlo calculations on condensed matter (Mike Gillan)
    - Exact embedding of local defects in crystals using maximally localized Wannier functions (Eric Cancès)
    - Faster GW calculations in larger model structures using ultralocalized nonorthogonal Wannier functions (Paolo Umari)

    Other approaches for linear scaling, including methods for metals:
    - Partition-of-unity finite element method for large, accurate electronic-structure calculations of metals (John E Pask and Natarajan Sukumar)
    - Semiclassical approach to density functional theory (Kieron Burke)
    - Ab initio transport calculations in defected carbon nanotubes using O(N) techniques (Blanca Biel, F J Garcia-Vidal, A Rubio and F Flores)
    - Large-scale calculations with the tight-binding (screened) KKR method (Rudolf Zeller)

    Acknowledgments: We gratefully acknowledge funding for the workshop from the UK CCP9 network, CECAM and the ESF through the PsiK network. DRB, PDH and CKS are funded by the Royal Society.
    References: [1] Car R and Parrinello M 1985 Phys. Rev. Lett. 55 2471; [2] Kühne T D, Krack M, Mohamed F R and Parrinello M 2007 Phys. Rev. Lett. 98 066401; [3] Goedecker S 1999 Rev. Mod. Phys. 71 1085

  2. A density matrix-based method for the linear-scaling calculation of dynamic second- and third-order properties at the Hartree-Fock and Kohn-Sham density functional theory levels.

    PubMed

    Kussmann, Jörg; Ochsenfeld, Christian

    2007-11-28

    A density matrix-based time-dependent self-consistent field (D-TDSCF) method for the calculation of dynamic polarizabilities and first hyperpolarizabilities using the Hartree-Fock and Kohn-Sham density functional theory approaches is presented. The D-TDSCF method allows us to reduce the asymptotic scaling behavior of the computational effort from cubic to linear for systems with a nonvanishing band gap. The linear scaling is achieved by combining a density matrix-based reformulation of the TDSCF equations with linear-scaling schemes for the formation of Fock- or Kohn-Sham-type matrices. In our reformulation only potentially linear-scaling matrices enter the formulation and efficient sparse algebra routines can be employed. Furthermore, the corresponding formulas for the first hyperpolarizabilities are given in terms of zeroth- and first-order one-particle reduced density matrices according to Wigner's (2n+1) rule. The scaling behavior of our method is illustrated for first exemplary calculations with systems of up to 1011 atoms and 8899 basis functions.

  3. GPU implementation of the linear scaling three dimensional fragment method for large scale electronic structure calculations

    NASA Astrophysics Data System (ADS)

    Jia, Weile; Wang, Jue; Chi, Xuebin; Wang, Lin-Wang

    2017-02-01

    LS3DF, the linear scaling three-dimensional fragment method, is an efficient linear scaling ab initio total energy electronic structure calculation code based on a divide-and-conquer strategy. In this paper, we present our GPU implementation of the LS3DF code. Our test results show that the GPU code can calculate systems with about ten thousand atoms fully self-consistently on the order of 10 min using thousands of computing nodes. This makes the electronic structure calculations of 10,000-atom nanosystems routine work. This speed is 4.5-6 times faster than the CPU calculations using the same number of nodes on the Titan machine in the Oak Ridge Leadership Computing Facility (OLCF). Such speedup is achieved by (a) careful redesign of the computationally heavy kernels and (b) redesign of the communication pattern for heterogeneous supercomputers.
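
    For orientation, the divide-and-conquer idea behind LS3DF can be written as an inclusion-exclusion sum over overlapping fragments anchored at each cell (i, j, k) of a spatial grid; the schematic below follows the patching scheme described in the LS3DF literature (fragment sizes l x m x n with l, m, n in {1, 2}), and is shown here as an illustration rather than a transcription of the authors' equations:

    $$ E_{\mathrm{tot}} \;\approx\; \sum_{ijk}\Bigl(E^{(222)}_{ijk} - E^{(122)}_{ijk} - E^{(212)}_{ijk} - E^{(221)}_{ijk} + E^{(112)}_{ijk} + E^{(121)}_{ijk} + E^{(211)}_{ijk} - E^{(111)}_{ijk}\Bigr), $$

    so that the artificial surfaces introduced by cutting the system into fragments cancel between fragments of different sizes, while each fragment can be solved independently (and, as above, offloaded to GPUs).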

  4. Linear Scaling Density Functional Calculations with Gaussian Orbitals

    NASA Technical Reports Server (NTRS)

    Scuseria, Gustavo E.

    1999-01-01

    Recent advances in linear scaling algorithms that circumvent the computational bottlenecks of large-scale electronic structure simulations make it possible to carry out density functional calculations with Gaussian orbitals on molecules containing more than 1000 atoms and 15000 basis functions using current workstations and personal computers. This paper discusses the recent theoretical developments that have led to these advances and demonstrates in a series of benchmark calculations the present capabilities of state-of-the-art computational quantum chemistry programs for the prediction of molecular structure and properties.

  5. Non-linear scaling of a musculoskeletal model of the lower limb using statistical shape models.

    PubMed

    Nolte, Daniel; Tsang, Chui Kit; Zhang, Kai Yu; Ding, Ziyun; Kedgley, Angela E; Bull, Anthony M J

    2016-10-03

    Accurate muscle geometry for musculoskeletal models is important to enable accurate subject-specific simulations. Commonly, linear scaling is used to obtain individualised muscle geometry. More advanced methods include non-linear scaling using segmented bone surfaces and manual or semi-automatic digitisation of muscle paths from medical images. In this study, a new scaling method combining non-linear scaling with reconstructions of bone surfaces using statistical shape modelling is presented. Statistical Shape Models (SSMs) of the femur and tibia/fibula were used to reconstruct bone surfaces of nine subjects. Reference models were created by morphing manually digitised muscle paths to mean shapes of the SSMs using non-linear transformations, and inter-subject variability was calculated. Subject-specific models of muscle attachment and via points were created from three reference models. The accuracy was evaluated by calculating the differences between the scaled and manually digitised models. The points defining the muscle paths showed large inter-subject variability at the thigh and shank - up to 26 mm; this was found to limit the accuracy of all studied scaling methods. Errors for the subject-specific muscle point reconstructions of the thigh could be decreased by 9% to 20% by using the non-linear scaling compared to a typical linear scaling method. We conclude that the proposed non-linear scaling method is more accurate than linear scaling methods. Thus, when combined with the ability to reconstruct bone surfaces from incomplete or scattered geometry data using statistical shape models, our proposed method is an alternative to linear scaling methods. Copyright © 2016 The Author. Published by Elsevier Ltd. All rights reserved.
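
    As a rough illustration of the non-linear scaling idea (this is not the authors' pipeline; the landmark arrays, kernel choice and point counts below are invented for the example), a thin-plate-spline mapping fitted between corresponding bone surface points can be used to warp digitised muscle path points into a subject-specific frame:

    ```python
    # Hypothetical sketch: warp reference muscle path points onto a subject using a
    # thin-plate-spline (RBF) transform fitted on corresponding bone surface points.
    import numpy as np
    from scipy.interpolate import RBFInterpolator

    rng = np.random.default_rng(0)
    ref_bone = rng.uniform(-1, 1, size=(300, 3))              # reference bone surface points
    subj_bone = 1.1 * ref_bone + 0.05 * ref_bone**2 + 0.01    # corresponding subject points (toy)

    # Fit a 3D -> 3D non-linear mapping from the reference to the subject geometry.
    warp = RBFInterpolator(ref_bone, subj_bone, kernel="thin_plate_spline", smoothing=1e-6)

    ref_muscle_path = rng.uniform(-1, 1, size=(20, 3))        # digitised muscle via points
    subj_muscle_path = warp(ref_muscle_path)                  # subject-specific estimate
    print(subj_muscle_path.shape)
    ```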

  6. Comparison of Conjugate Gradient Density Matrix Search and Chebyshev Expansion Methods for Avoiding Diagonalization in Large-Scale Electronic Structure Calculations

    NASA Technical Reports Server (NTRS)

    Bates, Kevin R.; Daniels, Andrew D.; Scuseria, Gustavo E.

    1998-01-01

    We report a comparison of two linear-scaling methods which avoid the diagonalization bottleneck of traditional electronic structure algorithms. The Chebyshev expansion method (CEM) is implemented for carbon tight-binding calculations of large systems and its memory and timing requirements compared to those of our previously implemented conjugate gradient density matrix search (CG-DMS). Benchmark calculations are carried out on icosahedral fullerenes from C60 to C8640, and the linear scaling memory and CPU requirements of the CEM are demonstrated. We show that the CPU requirements of the CEM and CG-DMS are similar for calculations with comparable accuracy.
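
    To make the Chebyshev idea concrete, here is a small self-contained sketch (mine, not the paper's tight-binding code; the chain Hamiltonian, electronic temperature and expansion order are invented) that builds a density matrix from a Chebyshev expansion of a smoothed Fermi function and checks it against diagonalization:

    ```python
    # Illustrative Chebyshev-expansion density matrix for a 1D tight-binding chain.
    import numpy as np

    n, t, mu, beta = 200, -1.0, 0.0, 10.0           # sites, hopping, chem. potential, inverse T
    H = t * (np.eye(n, k=1) + np.eye(n, k=-1))      # simple tight-binding chain

    # Rescale H so its spectrum lies in [-1, 1] (the Chebyshev domain).
    emin, emax = -2.1, 2.1                          # safe bounds: |eigenvalues| <= 2|t|
    a, b = (emax - emin) / 2.0, (emax + emin) / 2.0
    Hs = (H - b * np.eye(n)) / a

    def fermi(e):                                   # occupation vs. unscaled energy
        return 1.0 / (1.0 + np.exp(beta * (e - mu)))

    # Chebyshev coefficients of fermi(a*x + b) from Gauss-Chebyshev nodes.
    M = 100                                         # expansion order
    j = np.arange(M)
    theta = np.pi * (j + 0.5) / M
    fk = fermi(a * np.cos(theta) + b)
    c = 2.0 / M * np.cos(np.outer(j, theta)) @ fk
    c[0] *= 0.5

    # Sum the series with the three-term recursion T_{j+1} = 2 Hs T_j - T_{j-1}.
    T_prev, T_curr = np.eye(n), Hs.copy()
    P = c[0] * T_prev + c[1] * T_curr
    for m in range(2, M):
        T_prev, T_curr = T_curr, 2.0 * Hs @ T_curr - T_prev
        P += c[m] * T_curr

    # Reference: exact density matrix from diagonalization.
    eps, U = np.linalg.eigh(H)
    P_exact = (U * fermi(eps)) @ U.T
    print("max |P_cheb - P_exact| =", np.abs(P - P_exact).max())
    ```

    In a sparse implementation the dense products above would be replaced by sparse matrix-matrix multiplies with thresholding, which is where the linear scaling comes from.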

  7. Self-consistent implementation of meta-GGA functionals for the ONETEP linear-scaling electronic structure package.

    PubMed

    Womack, James C; Mardirossian, Narbe; Head-Gordon, Martin; Skylaris, Chris-Kriton

    2016-11-28

    Accurate and computationally efficient exchange-correlation functionals are critical to the successful application of linear-scaling density functional theory (DFT). Local and semi-local functionals of the density are naturally compatible with linear-scaling approaches, having a general form which assumes the locality of electronic interactions and which can be efficiently evaluated by numerical quadrature. Presently, the most sophisticated and flexible semi-local functionals are members of the meta-generalized-gradient approximation (meta-GGA) family, and depend upon the kinetic energy density, τ, in addition to the charge density and its gradient. In order to extend the theoretical and computational advantages of τ-dependent meta-GGA functionals to large-scale DFT calculations on thousands of atoms, we have implemented support for τ-dependent meta-GGA functionals in the ONETEP program. In this paper we lay out the theoretical innovations necessary to implement τ-dependent meta-GGA functionals within ONETEP's linear-scaling formalism. We present expressions for the gradient of the τ-dependent exchange-correlation energy, necessary for direct energy minimization. We also derive the forms of the τ-dependent exchange-correlation potential and kinetic energy density in terms of the strictly localized, self-consistently optimized orbitals used by ONETEP. To validate the numerical accuracy of our self-consistent meta-GGA implementation, we performed calculations using the B97M-V and PKZB meta-GGAs on a variety of small molecules. Using only a minimal basis set of self-consistently optimized local orbitals, we obtain energies in excellent agreement with large basis set calculations performed using other codes. Finally, to establish the linear-scaling computational cost and applicability of our approach to large-scale calculations, we present the outcome of self-consistent meta-GGA calculations on amyloid fibrils of increasing size, up to tens of thousands of atoms.
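
    Schematically, and in the density-kernel/NGWF notation commonly associated with ONETEP (this is the standard textbook relation rather than a formula copied from the paper), the extra ingredient a τ-dependent meta-GGA needs is

    $$ \tau(\mathbf r)=\tfrac12\sum_i f_i\,\lvert\nabla\psi_i(\mathbf r)\rvert^2 \;=\;\tfrac12\sum_{\alpha\beta}K^{\alpha\beta}\,\nabla\phi_\alpha(\mathbf r)\cdot\nabla\phi_\beta(\mathbf r), $$

    where the φ_α are the strictly localized, self-consistently optimized orbitals and K^{αβ} is the density kernel, so τ inherits the same locality that makes the charge density cheap to evaluate in a linear-scaling framework.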

  8. Self-consistent implementation of meta-GGA functionals for the ONETEP linear-scaling electronic structure package

    NASA Astrophysics Data System (ADS)

    Womack, James C.; Mardirossian, Narbe; Head-Gordon, Martin; Skylaris, Chris-Kriton

    2016-11-01

    Accurate and computationally efficient exchange-correlation functionals are critical to the successful application of linear-scaling density functional theory (DFT). Local and semi-local functionals of the density are naturally compatible with linear-scaling approaches, having a general form which assumes the locality of electronic interactions and which can be efficiently evaluated by numerical quadrature. Presently, the most sophisticated and flexible semi-local functionals are members of the meta-generalized-gradient approximation (meta-GGA) family, and depend upon the kinetic energy density, τ, in addition to the charge density and its gradient. In order to extend the theoretical and computational advantages of τ-dependent meta-GGA functionals to large-scale DFT calculations on thousands of atoms, we have implemented support for τ-dependent meta-GGA functionals in the ONETEP program. In this paper we lay out the theoretical innovations necessary to implement τ-dependent meta-GGA functionals within ONETEP's linear-scaling formalism. We present expressions for the gradient of the τ-dependent exchange-correlation energy, necessary for direct energy minimization. We also derive the forms of the τ-dependent exchange-correlation potential and kinetic energy density in terms of the strictly localized, self-consistently optimized orbitals used by ONETEP. To validate the numerical accuracy of our self-consistent meta-GGA implementation, we performed calculations using the B97M-V and PKZB meta-GGAs on a variety of small molecules. Using only a minimal basis set of self-consistently optimized local orbitals, we obtain energies in excellent agreement with large basis set calculations performed using other codes. Finally, to establish the linear-scaling computational cost and applicability of our approach to large-scale calculations, we present the outcome of self-consistent meta-GGA calculations on amyloid fibrils of increasing size, up to tens of thousands of atoms.

  9. Linear scaling computation of the Fock matrix. II. Rigorous bounds on exchange integrals and incremental Fock build

    NASA Astrophysics Data System (ADS)

    Schwegler, Eric; Challacombe, Matt; Head-Gordon, Martin

    1997-06-01

    A new linear scaling method for computation of the Cartesian Gaussian-based Hartree-Fock exchange matrix is described, which employs a method numerically equivalent to standard direct SCF, and which does not enforce locality of the density matrix. With a previously described method for computing the Coulomb matrix [J. Chem. Phys. 106, 5526 (1997)], linear scaling incremental Fock builds are demonstrated for the first time. Microhartree accuracy and linear scaling are achieved for restricted Hartree-Fock calculations on sequences of water clusters and polyglycine α-helices with the 3-21G and 6-31G basis sets. Eightfold speedups are found relative to our previous method. For systems with a small ionization potential, such as graphitic sheets, the method naturally reverts to the expected quadratic behavior. Also, benchmark 3-21G calculations attaining microhartree accuracy are reported for the P53 tetramerization monomer involving 698 atoms and 3836 basis functions.
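
    For context, the kind of rigorous screening that enables linear-scaling exchange builds is commonly based on the Cauchy-Schwarz bound for two-electron integrals (quoted here as the textbook estimate, not necessarily the specific bounds derived in this paper):

    $$ \lvert(\mu\nu|\lambda\sigma)\rvert \le Q_{\mu\nu}\,Q_{\lambda\sigma},\qquad Q_{\mu\nu}\equiv(\mu\nu|\mu\nu)^{1/2},\qquad \lvert K_{\mu\nu}\rvert=\Bigl|\sum_{\lambda\sigma}P_{\lambda\sigma}(\mu\lambda|\nu\sigma)\Bigr|\le\sum_{\lambda\sigma}\lvert P_{\lambda\sigma}\rvert\,Q_{\mu\lambda}\,Q_{\nu\sigma}, $$

    so contributions whose bound falls below a threshold can be skipped; in an incremental Fock build the density matrix P is replaced by the difference ΔP between SCF iterations, which is much sparser and therefore much cheaper to contract.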

  10. Linear-scaling method for calculating nuclear magnetic resonance chemical shifts using gauge-including atomic orbitals within Hartree-Fock and density-functional theory.

    PubMed

    Kussmann, Jörg; Ochsenfeld, Christian

    2007-08-07

    Details of a new density matrix-based formulation for calculating nuclear magnetic resonance chemical shifts at both Hartree-Fock and density functional theory levels are presented. For systems with a nonvanishing highest occupied molecular orbital-lowest unoccupied molecular orbital gap, the method allows us to reduce the asymptotic scaling order of the computational effort from cubic to linear, so that molecular systems with 1000 and more atoms can be tackled with today's computers. The key feature is a reformulation of the coupled-perturbed self-consistent field (CPSCF) theory in terms of the one-particle density matrix (D-CPSCF), which avoids entirely the use of canonical MOs. By means of a direct solution for the required perturbed density matrices and the adaptation of linear-scaling integral contraction schemes, the overall scaling of the computational effort is reduced to linear. A particular focus of our formulation is to ensure numerical stability when sparse-algebra routines are used to obtain an overall linear-scaling behavior.

  11. Time and frequency domain characteristics of detrending-operation-based scaling analysis: Exact DFA and DMA frequency responses

    NASA Astrophysics Data System (ADS)

    Kiyono, Ken; Tsujimoto, Yutaka

    2016-07-01

    We develop a general framework to study the time and frequency domain characteristics of detrending-operation-based scaling analysis methods, such as detrended fluctuation analysis (DFA) and detrending moving average (DMA) analysis. In this framework, using either the time or frequency domain approach, the frequency responses of detrending operations are calculated analytically. Although the frequency domain approach based on conventional linear analysis techniques is only applicable to linear detrending operations, the time domain approach presented here is applicable to both linear and nonlinear detrending operations. Furthermore, using the relationship between the time and frequency domain representations of the frequency responses, the frequency domain characteristics of nonlinear detrending operations can be obtained. Based on the calculated frequency responses, it is possible to establish a direct connection between the root-mean-square deviation of the detrending-operation-based scaling analysis and the power spectrum for linear stochastic processes. Here, by applying our methods to DFA and DMA, including higher-order cases, exact frequency responses are calculated. In addition, we analytically investigate the cutoff frequencies of DFA and DMA detrending operations and show that these frequencies are not optimally adjusted to coincide with the corresponding time scale.
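
    For readers unfamiliar with DFA itself, the following minimal sketch (mine; the signal, scales and detrending order are arbitrary) shows the detrending operation whose frequency response the paper characterizes:

    ```python
    # Minimal first-order DFA (DFA1): fluctuation function F(n) and scaling exponent.
    import numpy as np

    def dfa(x, scales, order=1):
        y = np.cumsum(x - np.mean(x))            # integrated (profile) series
        F = []
        for n in scales:
            nseg = len(y) // n
            segs = y[:nseg * n].reshape(nseg, n)
            t = np.arange(n)
            rms = []
            for seg in segs:                      # detrend each window with a polynomial
                coef = np.polyfit(t, seg, order)
                rms.append(np.mean((seg - np.polyval(coef, t)) ** 2))
            F.append(np.sqrt(np.mean(rms)))
        return np.asarray(F)

    rng = np.random.default_rng(0)
    x = rng.standard_normal(2**14)                # white noise: expected alpha ~ 0.5
    scales = np.unique(np.logspace(1, 3, 20).astype(int))
    F = dfa(x, scales)
    alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]
    print("estimated alpha:", round(alpha, 2))
    ```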

  12. Time and frequency domain characteristics of detrending-operation-based scaling analysis: Exact DFA and DMA frequency responses.

    PubMed

    Kiyono, Ken; Tsujimoto, Yutaka

    2016-07-01

    We develop a general framework to study the time and frequency domain characteristics of detrending-operation-based scaling analysis methods, such as detrended fluctuation analysis (DFA) and detrending moving average (DMA) analysis. In this framework, using either the time or frequency domain approach, the frequency responses of detrending operations are calculated analytically. Although the frequency domain approach based on conventional linear analysis techniques is only applicable to linear detrending operations, the time domain approach presented here is applicable to both linear and nonlinear detrending operations. Furthermore, using the relationship between the time and frequency domain representations of the frequency responses, the frequency domain characteristics of nonlinear detrending operations can be obtained. Based on the calculated frequency responses, it is possible to establish a direct connection between the root-mean-square deviation of the detrending-operation-based scaling analysis and the power spectrum for linear stochastic processes. Here, by applying our methods to DFA and DMA, including higher-order cases, exact frequency responses are calculated. In addition, we analytically investigate the cutoff frequencies of DFA and DMA detrending operations and show that these frequencies are not optimally adjusted to coincide with the corresponding time scale.

  13. Fourier imaging of non-linear structure formation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brandbyge, Jacob; Hannestad, Steen, E-mail: jacobb@phys.au.dk, E-mail: sth@phys.au.dk

    We perform a Fourier space decomposition of the dynamics of non-linear cosmological structure formation in ΛCDM models. From N-body simulations involving only cold dark matter we calculate 3-dimensional non-linear density, velocity divergence and vorticity Fourier realizations, and use these to calculate the fully non-linear mode coupling integrals in the corresponding fluid equations. Our approach allows for a reconstruction of the amount of mode coupling between any two wavenumbers as a function of redshift. With our Fourier decomposition method we identify the transfer of power from larger to smaller scales, the stable clustering regime, the scale where vorticity becomes important, and the suppression of the non-linear divergence power spectrum as compared to linear theory. Our results can be used to improve and calibrate semi-analytical structure formation models.
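
    A minimal sketch of the basic building block, measuring an isotropic power spectrum P(k) from a gridded density contrast field by binning |delta_k|^2 in shells of |k| (the Gaussian random field, grid size and box size below are placeholders, not the authors' simulation data):

    ```python
    # Toy power-spectrum measurement from a periodic density contrast field.
    import numpy as np

    ngrid, box = 64, 100.0                        # cells per side, box size (arbitrary units)
    rng = np.random.default_rng(1)
    delta = rng.standard_normal((ngrid,) * 3)     # placeholder density contrast field

    dk = 2 * np.pi / box
    knyq = np.pi * ngrid / box
    kfreq = np.fft.fftfreq(ngrid, d=box / ngrid) * 2 * np.pi
    kx, ky, kz = np.meshgrid(kfreq, kfreq, kfreq, indexing="ij")
    kmag = np.sqrt(kx**2 + ky**2 + kz**2)

    delta_k = np.fft.fftn(delta) * (box / ngrid) ** 3      # FFT with volume normalisation
    power = np.abs(delta_k) ** 2 / box**3

    bins = np.arange(dk, knyq, dk)                # spherical shells up to the Nyquist frequency
    which = np.digitize(kmag.ravel(), bins)
    Pk = np.array([power.ravel()[which == i].mean() for i in range(1, len(bins))])
    kcen = 0.5 * (bins[1:] + bins[:-1])
    print(kcen[:5], Pk[:5])
    ```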

  14. Application of a chromatography model with linear gradient elution experimental data to the rapid scale-up in ion-exchange process chromatography of proteins.

    PubMed

    Ishihara, Takashi; Kadoya, Toshihiko; Yamamoto, Shuichi

    2007-08-24

    We applied the model described in our previous paper to the rapid scale-up in the ion exchange chromatography of proteins, in which linear flow velocity, column length and gradient slope were changed. We carried out linear gradient elution experiments, and obtained data for the peak salt concentration and peak width. From these data, the plate height (HETP) was calculated as a function of the mobile phase velocity, and the iso-resolution curve (the separation time and elution volume relationship for the same resolution) was calculated. The scale-up chromatography conditions were determined by the iso-resolution curve. The scale-up of the linear gradient elution from 5 mL to 100 mL and 2.5 L column sizes was performed both by the separation of beta-lactoglobulin A and beta-lactoglobulin B with anion-exchange chromatography and by the purification of a recombinant protein with cation-exchange chromatography. Resolution, recovery and purity were examined in order to verify the proposed method.
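
    As a worked example of the HETP step (with made-up numbers; the half-height plate-count formula is the standard one, not taken from this paper):

    ```python
    # Plate number and HETP from a peak's retention time and width at half height,
    # as used when mapping HETP against mobile-phase velocity for scale-up.
    L_col = 10.0          # column length [cm]
    t_R = 12.4            # peak retention time [min]
    w_half = 0.55         # peak width at half height [min]

    N = 5.54 * (t_R / w_half) ** 2        # plate number (half-height method, 8*ln2 ~ 5.545)
    HETP = L_col / N                      # plate height [cm]
    print(f"N = {N:.0f} plates, HETP = {HETP * 1e4:.1f} um")
    ```

    Repeating this at several flow velocities gives the HETP-versus-velocity curve from which an iso-resolution scale-up condition can be chosen.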

  15. Linear-scaling density-functional simulations of charged point defects in Al2O3 using hierarchical sparse matrix algebra.

    PubMed

    Hine, N D M; Haynes, P D; Mostofi, A A; Payne, M C

    2010-09-21

    We present calculations of formation energies of defects in an ionic solid (Al2O3) extrapolated to the dilute limit, corresponding to a simulation cell of infinite size. The large-scale calculations required for this extrapolation are enabled by developments in the approach to parallel sparse matrix algebra operations, which are central to linear-scaling density-functional theory calculations. The computational cost of manipulating sparse matrices, whose sizes are determined by the large number of basis functions present, is greatly improved with this new approach. We present details of the sparse algebra scheme implemented in the ONETEP code using hierarchical sparsity patterns, and demonstrate its use in calculations on a wide range of systems, involving thousands of atoms on hundreds to thousands of parallel processes.

  16. Sensitivity analysis for large-scale problems

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.; Whitworth, Sandra L.

    1987-01-01

    The development of efficient techniques for calculating sensitivity derivatives is studied. The objective is to present a computational procedure for calculating sensitivity derivatives as part of performing structural reanalysis for large-scale problems. The scope is limited to framed type structures. Both linear static analysis and free-vibration eigenvalue problems are considered.

  17. Variational and robust density fitting of four-center two-electron integrals in local metrics

    NASA Astrophysics Data System (ADS)

    Reine, Simen; Tellgren, Erik; Krapp, Andreas; Kjærgaard, Thomas; Helgaker, Trygve; Jansik, Branislav; Høst, Stinne; Salek, Paweł

    2008-09-01

    Density fitting is an important method for speeding up quantum-chemical calculations. Linear-scaling developments in Hartree-Fock and density-functional theories have highlighted the need for linear-scaling density-fitting schemes. In this paper, we present a robust variational density-fitting scheme that allows for solving the fitting equations in local metrics instead of the traditional Coulomb metric, as required for linear scaling. Results of fitting four-center two-electron integrals in the overlap and the attenuated Gaussian damped Coulomb metric are presented, and we conclude that density fitting can be performed in local metrics at little loss of chemical accuracy. We further propose to use this theory in linear-scaling density-fitting developments.
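
    For reference, the fitting coefficients in a general metric m and the robust (first-order-error-free) integral estimate take the standard density-fitting form (standard relations, not equations transcribed from this paper):

    $$ \sum_{Q}(P\,|\,m\,|\,Q)\,c^{\lambda\sigma}_{Q} = (P\,|\,m\,|\,\lambda\sigma),\qquad (\mu\nu|\lambda\sigma)\;\approx\;\sum_{P}c^{\mu\nu}_{P}(P|\lambda\sigma)+\sum_{Q}(\mu\nu|Q)\,c^{\lambda\sigma}_{Q}-\sum_{PQ}c^{\mu\nu}_{P}(P|Q)\,c^{\lambda\sigma}_{Q}, $$

    where choosing m as the Coulomb operator recovers the usual variational fit, while a local metric (overlap or attenuated Coulomb) makes the fitting equations sparse, which is what the linear-scaling argument relies on.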

  18. Variational and robust density fitting of four-center two-electron integrals in local metrics.

    PubMed

    Reine, Simen; Tellgren, Erik; Krapp, Andreas; Kjaergaard, Thomas; Helgaker, Trygve; Jansik, Branislav; Host, Stinne; Salek, Paweł

    2008-09-14

    Density fitting is an important method for speeding up quantum-chemical calculations. Linear-scaling developments in Hartree-Fock and density-functional theories have highlighted the need for linear-scaling density-fitting schemes. In this paper, we present a robust variational density-fitting scheme that allows for solving the fitting equations in local metrics instead of the traditional Coulomb metric, as required for linear scaling. Results of fitting four-center two-electron integrals in the overlap and the attenuated Gaussian damped Coulomb metric are presented, and we conclude that density fitting can be performed in local metrics at little loss of chemical accuracy. We further propose to use this theory in linear-scaling density-fitting developments.

  19. A coupling method for a cardiovascular simulation model which includes the Kalman filter.

    PubMed

    Hasegawa, Yuki; Shimayoshi, Takao; Amano, Akira; Matsuda, Tetsuya

    2012-01-01

    Multi-scale models of the cardiovascular system provide new insight that was unavailable with in vivo and in vitro experiments. For the cardiovascular system, multi-scale simulations provide a valuable perspective in analyzing the interaction of three phenomena occurring at different spatial scales: circulatory hemodynamics, ventricular structural dynamics, and myocardial excitation-contraction. In order to simulate these interactions, multi-scale cardiovascular simulation systems couple models that simulate different phenomena. However, coupling methods require a significant amount of calculation, since a system of non-linear equations must be solved for each timestep. Therefore, we proposed a coupling method which decreases the amount of calculation by using the Kalman filter. In our method, the Kalman filter calculates approximations for the solution to the system of non-linear equations at each timestep. The approximations are then used as initial values for solving the system of non-linear equations. The proposed method decreases the number of iterations required by 94.0% compared to the conventional strong coupling method. When compared with a smoothing spline predictor, the proposed method required 49.4% fewer iterations.
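
    A toy sketch of the warm-starting idea (my construction, not the authors' cardiovascular model: the scalar equation, noise covariances and time step are invented). A constant-velocity Kalman filter predicts the next solution and that prediction seeds a Newton solve, which is compared against simply reusing the previous solution:

    ```python
    # Kalman-predicted initial guesses for a time-dependent nonlinear solve.
    import numpy as np

    def newton(f, df, x0, tol=1e-10, maxit=50):
        x, it = x0, 0
        while abs(f(x)) > tol and it < maxit:
            x -= f(x) / df(x)
            it += 1
        return x, it

    # "Plant": at each time step solve x^3 + x = d(t) for x.
    T, dt = 200, 0.05
    d = 5.0 + 3.0 * np.sin(0.8 * np.arange(T) * dt)

    # Constant-velocity Kalman filter on the scalar solution x(t).
    F = np.array([[1.0, dt], [0.0, 1.0]])
    H = np.array([[1.0, 0.0]])
    Q = 1e-4 * np.eye(2)      # process noise (assumed)
    R = np.array([[1e-6]])    # "measurement" noise: the converged solution is accurate
    s, P = np.zeros(2), np.eye(2)

    x_prev, iters_prev, iters_kf = 1.0, 0, 0
    for k in range(T):
        f = lambda x, dk=d[k]: x**3 + x - dk
        df = lambda x: 3 * x**2 + 1

        _, n1 = newton(f, df, x_prev)             # baseline guess: previous solution
        iters_prev += n1

        s, P = F @ s, F @ P @ F.T + Q             # Kalman prediction step
        x_sol, n2 = newton(f, df, s[0])           # Kalman-predicted guess
        iters_kf += n2

        y = x_sol - (H @ s)[0]                    # update with the converged solution
        S = (H @ P @ H.T + R)[0, 0]
        K = (P @ H.T / S).ravel()
        s = s + K * y
        P = (np.eye(2) - np.outer(K, H)) @ P
        x_prev = x_sol

    print("avg Newton iterations: previous-solution guess", iters_prev / T,
          "| Kalman-predicted guess", iters_kf / T)
    ```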

  20. Sparse matrix multiplications for linear scaling electronic structure calculations in an atom-centered basis set using multiatom blocks.

    PubMed

    Saravanan, Chandra; Shao, Yihan; Baer, Roi; Ross, Philip N; Head-Gordon, Martin

    2003-04-15

    A sparse matrix multiplication scheme with multiatom blocks is reported, a tool that can be very useful for developing linear-scaling methods with atom-centered basis functions. Compared to conventional element-by-element sparse matrix multiplication schemes, efficiency is gained by the use of the highly optimized basic linear algebra subroutines (BLAS). However, some sparsity is lost in the multiatom blocking scheme because these matrix blocks will in general contain negligible elements. As a result, an optimal block size that minimizes the CPU time by balancing these two effects is recovered. In calculations on linear alkanes, polyglycines, estane polymers, and water clusters the optimal block size is found to be between 40 and 100 basis functions, where about 55-75% of the machine peak performance was achieved on an IBM RS6000 workstation. In these calculations, the blocked sparse matrix multiplications can be 10 times faster than a standard element-by-element sparse matrix package. Copyright 2003 Wiley Periodicals, Inc. J Comput Chem 24: 618-622, 2003
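
    An illustrative comparison (not the paper's code: the random banded matrix, bandwidth and block size are made up) of element-wise versus blocked sparse products, using SciPy's BSR format as a stand-in for multiatom blocking:

    ```python
    # Blocked (BSR) vs. element-wise (CSR) sparse matrix-matrix products.
    import time
    import numpy as np
    import scipy.sparse as sp

    n, bw, block = 4000, 60, 40                      # dimension, bandwidth, block size
    rng = np.random.default_rng(0)
    offs = list(range(-bw, bw + 1))
    band = sp.diags([rng.standard_normal(n - abs(k)) for k in offs],
                    offsets=offs, format="csr")

    A_csr = band.tocsr()
    A_bsr = band.tobsr(blocksize=(block, block))     # multiatom-style blocks

    for name, A in [("CSR", A_csr), ("BSR", A_bsr)]:
        t0 = time.perf_counter()
        _ = A @ A
        print(f"{name}: {time.perf_counter() - t0:.3f} s")
    ```

    As in the paper, the trade-off is that larger blocks enable dense BLAS-style kernels on each block but store some negligible elements, so an intermediate block size tends to be optimal.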

  1. Quantum corrections to the generalized Proca theory via a matter field

    NASA Astrophysics Data System (ADS)

    Amado, André; Haghani, Zahra; Mohammadi, Azadeh; Shahidi, Shahab

    2017-09-01

    We study the quantum corrections to the generalized Proca theory via matter loops. We consider two types of interactions, linear and nonlinear in the vector field. Calculating the one-loop correction to the vector field propagator, three- and four-point functions, we show that the non-linear interactions are harmless, although they renormalize the theory. The linear matter-vector field interactions introduce ghost degrees of freedom to the generalized Proca theory. Treating the theory as an effective theory, we calculate the energy scale up to which the theory remains healthy.

  2. Sparse maps—A systematic infrastructure for reduced-scaling electronic structure methods. II. Linear scaling domain based pair natural orbital coupled cluster theory

    NASA Astrophysics Data System (ADS)

    Riplinger, Christoph; Pinski, Peter; Becker, Ute; Valeev, Edward F.; Neese, Frank

    2016-01-01

    Domain based local pair natural orbital coupled cluster theory with single-, double-, and perturbative triple excitations (DLPNO-CCSD(T)) is a highly efficient local correlation method. It is known to be accurate and robust and can be used in a black box fashion in order to obtain coupled cluster quality total energies for large molecules with several hundred atoms. While previous implementations showed near linear scaling up to a few hundred atoms, several nonlinear scaling steps limited the applicability of the method for very large systems. In this work, these limitations are overcome and a linear scaling DLPNO-CCSD(T) method for closed shell systems is reported. The new implementation is based on the concept of sparse maps that was introduced in Part I of this series [P. Pinski, C. Riplinger, E. F. Valeev, and F. Neese, J. Chem. Phys. 143, 034108 (2015)]. Using the sparse map infrastructure, all essential computational steps (integral transformation and storage, initial guess, pair natural orbital construction, amplitude iterations, triples correction) are achieved in a linear scaling fashion. In addition, a number of additional algorithmic improvements are reported that lead to significant speedups of the method. The new, linear-scaling DLPNO-CCSD(T) implementation typically is 7 times faster than the previous implementation and consumes 4 times less disk space for large three-dimensional systems. For linear systems, the performance gains and memory savings are substantially larger. Calculations with more than 20 000 basis functions and 1000 atoms are reported in this work. In all cases, the time required for the coupled cluster step is comparable to or lower than for the preceding Hartree-Fock calculation, even if this is carried out with the efficient resolution-of-the-identity and chain-of-spheres approximations. The new implementation even reduces the error in absolute correlation energies by about a factor of two, compared to the already accurate previous implementation.

  3. Impulse-response functions and anthropogenic CO2

    NASA Technical Reports Server (NTRS)

    Tubiello, Francesco N.; Oppenheimer, Michael

    1995-01-01

    Non-linearities in the carbon cycle make the response to atmospheric CO2 perturbations dependent on emission history. We show that even when linear representations of the carbon cycle are used, the calculation of time scales characterizing the removal of excess CO2 depends on past emissions.
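
    A minimal sketch of the linear-representation point (the impulse-response function and emission scenarios below are made up for illustration, not a calibrated carbon-cycle model): excess CO2 is the convolution of past emissions with an impulse-response function, so the apparent removal time scale depends on the emission history.

    ```python
    # Excess CO2 as a convolution of emissions with a toy impulse-response function.
    import numpy as np

    t = np.arange(0, 300)                                   # years
    irf = 0.2 + 0.8 * np.exp(-t / 80.0)                     # toy IRF: 20% effectively permanent

    def excess_co2(emissions):
        return np.array([np.sum(emissions[: k + 1] * irf[: k + 1][::-1]) for k in range(len(t))])

    const = np.full_like(t, 1.0, dtype=float)               # constant emissions
    ramp = np.linspace(0.0, 2.0, len(t))                    # growing emissions
    for name, e in [("constant", const), ("ramp", ramp)]:
        resp = excess_co2(e)
        print(name, "excess after 100 yr / 300 yr:", round(resp[100], 1), "/", round(resp[-1], 1))
    ```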

  4. Auxiliary basis expansions for large-scale electronic structure calculations.

    PubMed

    Jung, Yousung; Sodt, Alex; Gill, Peter M W; Head-Gordon, Martin

    2005-05-10

    One way to reduce the computational cost of electronic structure calculations is to use auxiliary basis expansions to approximate four-center integrals in terms of two- and three-center integrals, usually by using the variationally optimum Coulomb metric to determine the expansion coefficients. However, the long-range decay behavior of the auxiliary basis expansion coefficients has not been characterized. We find that this decay can be surprisingly slow. Numerical experiments on linear alkanes and a toy model both show that the decay can be as slow as 1/r in the distance between the auxiliary function and the fitted charge distribution. The Coulomb metric fitting equations also involve divergent matrix elements for extended systems treated with periodic boundary conditions. An attenuated Coulomb metric that is short-range can eliminate these oddities without substantially degrading calculated relative energies. The sparsity of the fit coefficients is assessed on simple hydrocarbon molecules and shows quite early onset of linear growth in the number of significant coefficients with system size using the attenuated Coulomb metric. Hence it is possible to design linear scaling auxiliary basis methods without additional approximations to treat large systems.
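
    The attenuated Coulomb metric referred to here is commonly taken to be an erfc-damped Coulomb kernel (shown schematically; the attenuation parameter ω is a tunable choice, not a value given in the abstract):

    $$ \langle f\,|\,g\rangle_{\omega}=\int\!\!\int f(\mathbf r_1)\,\frac{\operatorname{erfc}(\omega r_{12})}{r_{12}}\,g(\mathbf r_2)\,\mathrm d\mathbf r_1\,\mathrm d\mathbf r_2, $$

    which reduces to the Coulomb metric as ω → 0 and becomes increasingly short-ranged, and hence gives sparser fit coefficients, as ω grows.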

  5. Graph-based linear scaling electronic structure theory.

    PubMed

    Niklasson, Anders M N; Mniszewski, Susan M; Negre, Christian F A; Cawkwell, Marc J; Swart, Pieter J; Mohd-Yusof, Jamal; Germann, Timothy C; Wall, Michael E; Bock, Nicolas; Rubensson, Emanuel H; Djidjev, Hristo

    2016-06-21

    We show how graph theory can be combined with quantum theory to calculate the electronic structure of large complex systems. The graph formalism is general and applicable to a broad range of electronic structure methods and materials, including challenging systems such as biomolecules. The methodology combines well-controlled accuracy, low computational cost, and natural low-communication parallelism. This combination addresses substantial shortcomings of linear scaling electronic structure theory, in particular with respect to quantum-based molecular dynamics simulations.

  6. Graph-based linear scaling electronic structure theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Niklasson, Anders M. N., E-mail: amn@lanl.gov; Negre, Christian F. A.; Cawkwell, Marc J.

    2016-06-21

    We show how graph theory can be combined with quantum theory to calculate the electronic structure of large complex systems. The graph formalism is general and applicable to a broad range of electronic structure methods and materials, including challenging systems such as biomolecules. The methodology combines well-controlled accuracy, low computational cost, and natural low-communication parallelism. This combination addresses substantial shortcomings of linear scaling electronic structure theory, in particular with respect to quantum-based molecular dynamics simulations.

  7. Additive scales in degenerative disease--calculation of effect sizes and clinical judgment.

    PubMed

    Riepe, Matthias W; Wilkinson, David; Förstl, Hans; Brieden, Andreas

    2011-12-16

    The therapeutic efficacy of an intervention is often assessed in clinical trials by scales measuring multiple diverse activities that are added to produce a cumulative global score. Medical communities and health care systems subsequently use these data to calculate pooled effect sizes to compare treatments. This is done because major doubt has been cast over the clinical relevance of statistically significant findings relying on p values, which have the potential to report chance findings. Hence, in an aim to overcome this, pooling the results of clinical studies into a meta-analysis with a statistical calculus has been assumed to be a more definitive way of deciding efficacy. We simulate the therapeutic effects as measured with additive scales in patient cohorts with different disease severity and assess the limitations, which are proven mathematically, of effect size calculations based on additive scales. We demonstrate that the major problem, which cannot be overcome by current numerical methods, is the complex nature and neurobiological foundation of clinical psychiatric endpoints in particular and additive scales in general. This is particularly relevant for endpoints used in dementia research. 'Cognition' is composed of functions such as memory, attention, orientation and many more. These individual functions decline in varied and non-linear ways. Here we demonstrate that with progressive diseases cumulative values from multidimensional scales are subject to distortion by the limitations of the additive scale. The non-linearity of the decline of function impedes the calculation of effect sizes based on cumulative values from these multidimensional scales. Statistical analysis needs to be guided by boundaries of the biological condition. Alternatively, we suggest a different approach avoiding the error imposed by over-analysis of cumulative global scores from additive scales.
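
    A toy simulation of the core argument (my illustration, not the authors' model: sub-scale curves, cohort sizes and the "treatment delay" are invented). Several sub-scores decline non-linearly and are summed into a global score; the same treatment effect then yields different Cohen's d values depending on baseline severity:

    ```python
    # Effect-size distortion for an additive global score built from non-linear sub-scores.
    import numpy as np

    rng = np.random.default_rng(0)

    def global_score(t, shifts, noise=2.0):
        # sum of logistic sub-scores (10 points each) plus measurement noise
        subs = sum(10.0 / (1.0 + np.exp((t - s) / 1.5)) for s in shifts)
        return subs + rng.normal(0.0, noise, size=t.shape)

    def cohens_d(a, b):
        pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2.0)
        return (a.mean() - b.mean()) / pooled_sd

    shifts = (4.0, 6.0, 8.0, 10.0)   # the sub-functions decline at different times
    delay = 1.0                      # "treatment" delays decline by one time unit
    for label, t0 in [("mild (t=3)", 3.0), ("moderate (t=7)", 7.0), ("severe (t=11)", 11.0)]:
        placebo = global_score(np.full(200, t0), shifts)
        treated = global_score(np.full(200, t0 - delay), shifts)
        print(f"{label}: Cohen's d = {cohens_d(treated, placebo):.2f}")
    ```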

  8. Development of Computational Aeroacoustics Code for Jet Noise and Flow Prediction

    NASA Astrophysics Data System (ADS)

    Keith, Theo G., Jr.; Hixon, Duane R.

    2002-07-01

    Accurate prediction of jet fan and exhaust plume flow and noise generation and propagation is very important in developing advanced aircraft engines that will pass current and future noise regulations. In jet fan flows as well as exhaust plumes, two major sources of noise are present: large-scale, coherent instabilities and small-scale turbulent eddies. In previous work for the NASA Glenn Research Center, three strategies have been explored in an effort to computationally predict the noise radiation from supersonic jet exhaust plumes. In order from the least expensive computationally to the most expensive computationally, these are: 1) Linearized Euler equations (LEE). 2) Very Large Eddy Simulations (VLES). 3) Large Eddy Simulations (LES). The first method solves the linearized Euler equations (LEE). These equations are obtained by linearizing about a given mean flow and neglecting viscous effects. In this way, the noise from large-scale instabilities can be found for a given mean flow. The linearized Euler equations are computationally inexpensive, and have produced good noise results for supersonic jets where the large-scale instability noise dominates, as well as for the tone noise from a jet engine blade row. However, these linear equations do not predict the absolute magnitude of the noise; instead, only the relative magnitude is predicted. Also, the predicted disturbances do not modify the mean flow, removing a physical mechanism by which the amplitude of the disturbance may be controlled. Recent research for isolated airfoils indicates that this may not affect the solution greatly at low frequencies. The second method addresses some of the concerns raised by the LEE method. In this approach, called Very Large Eddy Simulation (VLES), the unsteady Reynolds averaged Navier-Stokes equations are solved directly using a high-accuracy computational aeroacoustics numerical scheme. With the addition of a two-equation turbulence model and the use of a relatively coarse grid, the numerical solution is effectively filtered into a directly calculated mean flow with the small-scale turbulence being modeled, and an unsteady large-scale component that is also being directly calculated. In this way, the unsteady disturbances are calculated in a nonlinear way, with a direct effect on the mean flow. This method is not as fast as the LEE approach, but does have many advantages to recommend it; however, like the LEE approach, only the effect of the largest unsteady structures will be captured. An initial calculation was performed on a supersonic jet exhaust plume, with promising results, but the calculation was hampered by the explicit time marching scheme that was employed. This explicit scheme required a very small time step to resolve the nozzle boundary layer, which caused a long run time. Current work is focused on testing a lower-order implicit time marching method to combat this problem.

  9. Daubechies wavelets for linear scaling density functional theory.

    PubMed

    Mohr, Stephan; Ratcliff, Laura E; Boulanger, Paul; Genovese, Luigi; Caliste, Damien; Deutsch, Thierry; Goedecker, Stefan

    2014-05-28

    We demonstrate that Daubechies wavelets can be used to construct a minimal set of optimized localized adaptively contracted basis functions in which the Kohn-Sham orbitals can be represented with an arbitrarily high, controllable precision. Ground state energies and the forces acting on the ions can be calculated in this basis with the same accuracy as if they were calculated directly in a Daubechies wavelets basis, provided that the amplitude of these adaptively contracted basis functions is sufficiently small on the surface of the localization region, which is guaranteed by the optimization procedure described in this work. This approach reduces the computational costs of density functional theory calculations, and can be combined with sparse matrix algebra to obtain linear scaling with respect to the number of electrons in the system. Calculations on systems of 10,000 atoms or more thus become feasible in a systematic basis set with moderate computational resources. Further computational savings can be achieved by exploiting the similarity of the adaptively contracted basis functions for closely related environments, e.g., in geometry optimizations or combined calculations of neutral and charged systems.
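
    A small illustration of the "systematic, controllable precision" property of a localized Daubechies basis (not the BigDFT implementation: it uses the PyWavelets package on a made-up 1D test function, keeping only the largest coefficients):

    ```python
    # Thresholded Daubechies-wavelet representation of a smooth test function.
    import numpy as np
    import pywt

    x = np.linspace(0.0, 1.0, 2048)
    f = np.exp(-200.0 * (x - 0.3) ** 2) + 0.5 * np.sin(6 * np.pi * x)   # made-up test function

    coeffs = pywt.wavedec(f, "db8", level=6)
    flat, slices = pywt.coeffs_to_array(coeffs)
    for keep in (0.10, 0.02):                       # keep the largest 10% / 2% of coefficients
        thr = np.quantile(np.abs(flat), 1.0 - keep)
        trimmed = np.where(np.abs(flat) >= thr, flat, 0.0)
        rec = pywt.waverec(pywt.array_to_coeffs(trimmed, slices, output_format="wavedec"), "db8")
        print(f"kept {keep:.0%} of coefficients: max error {np.max(np.abs(rec[:len(f)] - f)):.2e}")
    ```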

  10. A Linear Electromagnetic Piston Pump

    NASA Astrophysics Data System (ADS)

    Hogan, Paul H.

    Advancements in mobile hydraulics for human-scale applications have increased demand for a compact hydraulic power supply. Conventional designs couple a rotating electric motor to a hydraulic pump, which increases the package volume and requires several energy conversions. This thesis investigates the use of a free piston as the moving element in a linear motor to eliminate multiple energy conversions and decrease the overall package volume. A coupled model used a quasi-static magnetic equivalent circuit to calculate the motor inductance and the electromagnetic force acting on the piston. The force was an input to a time domain model to evaluate the mechanical and pressure dynamics. The magnetic circuit model was validated with finite element analysis and an experimental prototype linear motor. The coupled model was optimized using a multi-objective genetic algorithm to explore the parameter space and maximize power density and efficiency. An experimental prototype linear pump coupled pistons to an off-the-shelf linear motor to validate the mechanical and pressure dynamics models. The magnetic circuit force calculation agreed within 3% of finite element analysis, and within 8% of experimental data from the unoptimized prototype linear motor. The optimized motor geometry also had good agreement with FEA; at zero piston displacement, the magnetic circuit calculates optimized motor force within 10% of FEA in less than 1/1000 the computational time. This makes it well suited to genetic optimization algorithms. The mechanical model agrees very well with the experimental piston pump position data when tuned for additional unmodeled mechanical friction. Optimized results suggest that an improvement of 400% of the state of the art power density is attainable with as high as 85% net efficiency. This demonstrates that a linear electromagnetic piston pump has potential to serve as a more compact and efficient supply of fluid power for the human scale.
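
    A minimal sketch of the quasi-static magnetic-equivalent-circuit step (my illustration with made-up geometry, turns and current, not the thesis model): the gap reluctance sets the inductance, and the electromagnetic force on the mover follows from the co-energy, F = 0.5 i^2 dL/dx, evaluated numerically.

    ```python
    # Gap-dependent inductance and co-energy force for a simple reluctance circuit.
    import numpy as np

    mu0 = 4e-7 * np.pi
    N_turns, area, i = 400, 4e-4, 2.0            # turns, pole area [m^2], coil current [A]
    R_core = 1.0e5                                # fixed core reluctance [1/H] (assumed)

    def inductance(gap):                          # gap reluctance in series with the core
        R_gap = gap / (mu0 * area)
        return N_turns**2 / (R_gap + R_core)

    x = np.linspace(0.2e-3, 3e-3, 200)            # air gap [m]
    L = inductance(x)
    F = 0.5 * i**2 * np.gradient(L, x)            # co-energy force [N]; negative = gap closing
    print(f"force at 1 mm gap ~ {np.interp(1e-3, x, F):.1f} N")
    ```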

  11. Power spectrum estimation from peculiar velocity catalogues

    NASA Astrophysics Data System (ADS)

    Macaulay, E.; Feldman, H. A.; Ferreira, P. G.; Jaffe, A. H.; Agarwal, S.; Hudson, M. J.; Watkins, R.

    2012-09-01

    The peculiar velocities of galaxies are an inherently valuable cosmological probe, providing an unbiased estimate of the distribution of matter on scales much larger than the depth of the survey. Much research interest has been motivated by the high dipole moment of our local peculiar velocity field, which suggests a large-scale excess in the matter power spectrum and can appear to be in some tension with the Λ cold dark matter (ΛCDM) model. We use a composite catalogue of 4537 peculiar velocity measurements with a characteristic depth of 33 h⁻¹ Mpc to estimate the matter power spectrum. We compare the constraints with this method, directly studying the full peculiar velocity catalogue, to results by Macaulay et al., studying minimum variance moments of the velocity field, as calculated by Feldman, Watkins & Hudson. We find good agreement with the ΛCDM model on scales of k > 0.01 h Mpc⁻¹. We find an excess of power on scales of k < 0.01 h Mpc⁻¹ with a 1σ uncertainty which includes the ΛCDM model. We find that the uncertainty in excess at these scales is larger than an alternative result studying only moments of the velocity field, which is due to the minimum variance weights used to calculate the moments. At small scales, we are able to clearly discriminate between linear and non-linear clustering in simulated peculiar velocity catalogues and find some evidence (although less clear) for linear clustering in the real peculiar velocity data.

  12. Linear-scaling implementation of molecular response theory in self-consistent field electronic-structure theory.

    PubMed

    Coriani, Sonia; Høst, Stinne; Jansík, Branislav; Thøgersen, Lea; Olsen, Jeppe; Jørgensen, Poul; Reine, Simen; Pawłowski, Filip; Helgaker, Trygve; Sałek, Paweł

    2007-04-21

    A linear-scaling implementation of Hartree-Fock and Kohn-Sham self-consistent field theories for the calculation of frequency-dependent molecular response properties and excitation energies is presented, based on a nonredundant exponential parametrization of the one-electron density matrix in the atomic-orbital basis, avoiding the use of canonical orbitals. The response equations are solved iteratively, by an atomic-orbital subspace method equivalent to that of molecular-orbital theory. Important features of the subspace method are the use of paired trial vectors (to preserve the algebraic structure of the response equations), a nondiagonal preconditioner (for rapid convergence), and the generation of good initial guesses (for robust solution). As a result, the performance of the iterative method is the same as in canonical molecular-orbital theory, with five to ten iterations needed for convergence. As in traditional direct Hartree-Fock and Kohn-Sham theories, the calculations are dominated by the construction of the effective Fock/Kohn-Sham matrix, once in each iteration. Linear complexity is achieved by using sparse-matrix algebra, as illustrated in calculations of excitation energies and frequency-dependent polarizabilities of polyalanine peptides containing up to 1400 atoms.
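
    Schematically, and using generic response-theory notation rather than the paper's own equations, the quantities referred to above come from linear response equations of the form

    $$ \bigl(\mathbf E^{[2]}-\omega\,\mathbf S^{[2]}\bigr)\,\mathbf X(\omega)=\mathbf G^{[1]},\qquad \mathbf E^{[2]}\mathbf X_k=\omega_k\,\mathbf S^{[2]}\mathbf X_k, $$

    where frequency-dependent polarizabilities follow from contracting the solution vectors with property-gradient vectors and the excitation energies ω_k solve the generalized eigenvalue problem; the paired trial vectors mentioned in the abstract are chosen so that the iterative subspace preserves the ±ω block structure of E^{[2]} and S^{[2]}.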

  13. Auxiliary basis expansions for large-scale electronic structure calculations

    PubMed Central

    Jung, Yousung; Sodt, Alex; Gill, Peter M. W.; Head-Gordon, Martin

    2005-01-01

    One way to reduce the computational cost of electronic structure calculations is to use auxiliary basis expansions to approximate four-center integrals in terms of two- and three-center integrals, usually by using the variationally optimum Coulomb metric to determine the expansion coefficients. However, the long-range decay behavior of the auxiliary basis expansion coefficients has not been characterized. We find that this decay can be surprisingly slow. Numerical experiments on linear alkanes and a toy model both show that the decay can be as slow as 1/r in the distance between the auxiliary function and the fitted charge distribution. The Coulomb metric fitting equations also involve divergent matrix elements for extended systems treated with periodic boundary conditions. An attenuated Coulomb metric that is short-range can eliminate these oddities without substantially degrading calculated relative energies. The sparsity of the fit coefficients is assessed on simple hydrocarbon molecules and shows quite early onset of linear growth in the number of significant coefficients with system size using the attenuated Coulomb metric. Hence it is possible to design linear scaling auxiliary basis methods without additional approximations to treat large systems. PMID:15845767

  14. Tensor-decomposed vibrational coupled-cluster theory: Enabling large-scale, highly accurate vibrational-structure calculations

    NASA Astrophysics Data System (ADS)

    Madsen, Niels Kristian; Godtliebsen, Ian H.; Losilla, Sergio A.; Christiansen, Ove

    2018-01-01

    A new implementation of vibrational coupled-cluster (VCC) theory is presented, where all amplitude tensors are represented in the canonical polyadic (CP) format. The CP-VCC algorithm solves the non-linear VCC equations without ever constructing the amplitudes or error vectors in full dimension but still formally includes the full parameter space of the VCC[n] model in question resulting in the same vibrational energies as the conventional method. In a previous publication, we have described the non-linear-equation solver for CP-VCC calculations. In this work, we discuss the general algorithm for evaluating VCC error vectors in CP format including the rank-reduction methods used during the summation of the many terms in the VCC amplitude equations. Benchmark calculations for studying the computational scaling and memory usage of the CP-VCC algorithm are performed on a set of molecules including thiadiazole and an array of polycyclic aromatic hydrocarbons. The results show that the reduced scaling and memory requirements of the CP-VCC algorithm allows for performing high-order VCC calculations on systems with up to 66 vibrational modes (anthracene), which indeed are not possible using the conventional VCC method. This paves the way for obtaining highly accurate vibrational spectra and properties of larger molecules.
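
    To illustrate why the CP format avoids full-dimension intermediates (an illustration of the tensor format only, not the CP-VCC code; dimensions and ranks below are arbitrary), the inner product of two CP tensors can be evaluated entirely from their factor matrices:

    ```python
    # Inner product of two canonical-polyadic (CP) tensors without forming them in full.
    import numpy as np

    rng = np.random.default_rng(0)
    dims, rank_a, rank_b = (30, 30, 30), 8, 5

    A = [rng.standard_normal((d, rank_a)) for d in dims]   # CP factors of tensor A
    B = [rng.standard_normal((d, rank_b)) for d in dims]   # CP factors of tensor B

    def cp_full(factors):
        """Reconstruct the full tensor (only for checking; avoided in practice)."""
        out = 0.0
        for r in range(factors[0].shape[1]):
            out = out + np.einsum("i,j,k->ijk", *(f[:, r] for f in factors))
        return out

    # Inner product in CP format: sum over rank pairs of products of factor overlaps.
    overlap = np.ones((rank_a, rank_b))
    for Am, Bm in zip(A, B):
        overlap *= Am.T @ Bm
    dot_cp = overlap.sum()

    dot_full = np.vdot(cp_full(A), cp_full(B))
    print(abs(dot_cp - dot_full))        # agrees to machine precision
    ```

    The rank-reduction steps mentioned in the abstract keep such factorizations compact after each contraction, which is what controls both memory and operation counts.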

  15. Efficient Computation of Sparse Matrix Functions for Large-Scale Electronic Structure Calculations: The CheSS Library.

    PubMed

    Mohr, Stephan; Dawson, William; Wagner, Michael; Caliste, Damien; Nakajima, Takahito; Genovese, Luigi

    2017-10-10

    We present CheSS, the "Chebyshev Sparse Solvers" library, which has been designed to solve typical problems arising in large-scale electronic structure calculations using localized basis sets. The library is based on a flexible and efficient expansion in terms of Chebyshev polynomials and presently features the calculation of the density matrix, the calculation of arbitrary matrix powers, and the extraction of eigenvalues in a selected interval. CheSS is able to exploit the sparsity of the matrices and scales linearly with respect to the number of nonzero entries, making it well-suited for large-scale calculations. The approach is particularly adapted for setups leading to small spectral widths of the involved matrices and outperforms alternative methods in this regime. By coupling CheSS to the DFT code BigDFT, we show that such a favorable setup is indeed possible in practice. In addition, the approach based on Chebyshev polynomials can be massively parallelized, and CheSS exhibits excellent scaling up to thousands of cores even for relatively small matrix sizes.

  16. Elongation cutoff technique armed with quantum fast multipole method for linear scaling.

    PubMed

    Korchowiec, Jacek; Lewandowski, Jakub; Makowski, Marcin; Gu, Feng Long; Aoki, Yuriko

    2009-11-30

    A linear-scaling implementation of the elongation cutoff technique (ELG/C) that speeds up Hartree-Fock (HF) self-consistent field calculations is presented. The cutoff method avoids the known bottleneck of the conventional HF scheme, that is, diagonalization, because it operates within the low dimension subspace of the whole atomic orbital space. The efficiency of ELG/C is illustrated for two model systems. The obtained results indicate that the ELG/C is a very efficient sparse matrix algebra scheme. Copyright 2009 Wiley Periodicals, Inc.

  17. The linearly scaling 3D fragment method for large scale electronic structure calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, Zhengji; Meza, Juan; Lee, Byounghak

    2009-07-28

    The Linearly Scaling three-dimensional fragment (LS3DF) method is an O(N) ab initio electronic structure method for large-scale nano material simulations. It is a divide-and-conquer approach with a novel patching scheme that effectively cancels out the artificial boundary effects, which exist in all divide-and-conquer schemes. This method has made ab initio simulations of thousand-atom nanosystems feasible in a couple of hours, while retaining essentially the same accuracy as the direct calculation methods. The LS3DF method won the 2008 ACM Gordon Bell Prize for algorithm innovation. Our code has reached 442 Tflop/s running on 147,456 processors on the Cray XT5 (Jaguar) at OLCF, and has been run on 163,840 processors on the Blue Gene/P (Intrepid) at ALCF, and has been applied to a system containing 36,000 atoms. In this paper, we will present the recent parallel performance results of this code, and will apply the method to asymmetric CdSe/CdS core/shell nanorods, which have potential applications in electronic devices and solar cells.

  18. The Linearly Scaling 3D Fragment Method for Large Scale Electronic Structure Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, Zhengji; Meza, Juan; Lee, Byounghak

    2009-06-26

    The Linearly Scaling three-dimensional fragment (LS3DF) method is an O(N) ab initio electronic structure method for large-scale nano material simulations. It is a divide-and-conquer approach with a novel patching scheme that effectively cancels out the artificial boundary effects, which exist in all divide-and-conquer schemes. This method has made ab initio simulations of thousand-atom nanosystems feasible in a couple of hours, while retaining essentially the same accuracy as the direct calculation methods. The LS3DF method won the 2008 ACM Gordon Bell Prize for algorithm innovation. Our code has reached 442 Tflop/s running on 147,456 processors on the Cray XT5 (Jaguar) at OLCF, and has been run on 163,840 processors on the Blue Gene/P (Intrepid) at ALCF, and has been applied to a system containing 36,000 atoms. In this paper, we will present the recent parallel performance results of this code, and will apply the method to asymmetric CdSe/CdS core/shell nanorods, which have potential applications in electronic devices and solar cells.

  19. Calculating Higher-Order Moments of Phylogenetic Stochastic Mapping Summaries in Linear Time.

    PubMed

    Dhar, Amrit; Minin, Vladimir N

    2017-05-01

    Stochastic mapping is a simulation-based method for probabilistically mapping substitution histories onto phylogenies according to continuous-time Markov models of evolution. This technique can be used to infer properties of the evolutionary process on the phylogeny and, unlike parsimony-based mapping, conditions on the observed data to randomly draw substitution mappings that do not necessarily require the minimum number of events on a tree. Most stochastic mapping applications simulate substitution mappings only to estimate the mean and/or variance of two commonly used mapping summaries: the number of particular types of substitutions (labeled substitution counts) and the time spent in a particular group of states (labeled dwelling times) on the tree. Fast, simulation-free algorithms for calculating the mean of stochastic mapping summaries exist. Importantly, these algorithms scale linearly in the number of tips/leaves of the phylogenetic tree. However, to our knowledge, no such algorithm exists for calculating higher-order moments of stochastic mapping summaries. We present one such simulation-free dynamic programming algorithm that calculates prior and posterior mapping variances and scales linearly in the number of phylogeny tips. Our procedure suggests a general framework that can be used to efficiently compute higher-order moments of stochastic mapping summaries without simulations. We demonstrate the usefulness of our algorithm by extending previously developed statistical tests for rate variation across sites and for detecting evolutionarily conserved regions in genomic sequences.

  20. Calculating Higher-Order Moments of Phylogenetic Stochastic Mapping Summaries in Linear Time

    PubMed Central

    Dhar, Amrit

    2017-01-01

    Abstract Stochastic mapping is a simulation-based method for probabilistically mapping substitution histories onto phylogenies according to continuous-time Markov models of evolution. This technique can be used to infer properties of the evolutionary process on the phylogeny and, unlike parsimony-based mapping, conditions on the observed data to randomly draw substitution mappings that do not necessarily require the minimum number of events on a tree. Most stochastic mapping applications simulate substitution mappings only to estimate the mean and/or variance of two commonly used mapping summaries: the number of particular types of substitutions (labeled substitution counts) and the time spent in a particular group of states (labeled dwelling times) on the tree. Fast, simulation-free algorithms for calculating the mean of stochastic mapping summaries exist. Importantly, these algorithms scale linearly in the number of tips/leaves of the phylogenetic tree. However, to our knowledge, no such algorithm exists for calculating higher-order moments of stochastic mapping summaries. We present one such simulation-free dynamic programming algorithm that calculates prior and posterior mapping variances and scales linearly in the number of phylogeny tips. Our procedure suggests a general framework that can be used to efficiently compute higher-order moments of stochastic mapping summaries without simulations. We demonstrate the usefulness of our algorithm by extending previously developed statistical tests for rate variation across sites and for detecting evolutionarily conserved regions in genomic sequences. PMID:28177780

  1. Linear and non-linear perturbations in dark energy models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Escamilla-Rivera, Celia; Casarini, Luciano; Fabris, Júlio C.

    2016-11-01

    In this work we discuss observational aspects of three time-dependent parameterisations of the dark energy equation of state w(z). In order to determine the dynamics associated with these models, we calculate their background evolution and perturbations in a scalar field representation. After performing a complete treatment of linear perturbations, we also show that the non-linear contribution of the selected w(z) parameterisations to the matter power spectra is almost the same for all scales, with no significant difference from the predictions of the standard ΛCDM model.
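
    For background-level intuition, any time-dependent equation of state enters the expansion history through ρ_DE(z) = ρ_DE,0 exp[3∫₀ᶻ (1+w(z'))/(1+z') dz']. The sketch below evaluates this for the CPL form w(z) = w0 + wa z/(1+z), which is only an assumed stand-in, since the abstract does not name its three parameterisations.

    ```python
    import numpy as np

    def rho_de_ratio(z, w_of_z, n_grid=2000):
        """rho_DE(z)/rho_DE(0) = exp(3 * integral_0^z (1 + w(z'))/(1 + z') dz')."""
        zp = np.linspace(0.0, z, n_grid)
        integrand = (1.0 + w_of_z(zp)) / (1.0 + zp)
        return np.exp(3.0 * np.trapz(integrand, zp))

    def E(z, Om=0.3, w_of_z=lambda z: -1.0 + 0.0 * z):
        """Dimensionless Hubble rate H(z)/H0 for a flat universe with dynamical dark energy."""
        return np.sqrt(Om * (1.0 + z) ** 3 + (1.0 - Om) * rho_de_ratio(z, w_of_z))

    # CPL parameterisation as an illustrative example (w0 and wa values are arbitrary)
    w_cpl = lambda z, w0=-0.9, wa=0.2: w0 + wa * z / (1.0 + z)

    for z in (0.5, 1.0, 2.0):
        print(f"z = {z}:  E_CPL = {E(z, w_of_z=w_cpl):.4f}   E_LCDM = {E(z):.4f}")
    ```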

  2. Non-Linear Cosmological Power Spectra in Real and Redshift Space

    NASA Technical Reports Server (NTRS)

    Taylor, A. N.; Hamilton, A. J. S.

    1996-01-01

    We present an expression for the non-linear evolution of the cosmological power spectrum based on Lagrangian trajectories. This is simplified using the Zel'dovich approximation to trace particle displacements, assuming Gaussian initial conditions. The model is found to exhibit the transfer of power from large to small scales expected in self-gravitating fields. Some exact solutions are found for power-law initial spectra. We have extended this analysis into redshift space and found a solution for the non-linear, anisotropic redshift-space power spectrum in the limit of plane-parallel redshift distortions. The quadrupole-to-monopole ratio is calculated for the case of power-law initial spectra. We find that the shape of this ratio depends on the shape of the initial spectrum, but when scaled to linear theory depends only weakly on the redshift-space distortion parameter, β. The point of zero-crossing of the quadrupole, k₀, is found to obey a simple scaling relation and we calculate this scale in the Zel'dovich approximation. This model is found to be in good agreement with a series of N-body simulations on scales down to the zero-crossing of the quadrupole, although the wavenumber at zero-crossing is underestimated. These results are applied to the quadrupole-to-monopole ratio found in the merged QDOT plus 1.2-Jy-IRAS redshift survey. Using a likelihood technique we have estimated that the distortion parameter is constrained to be β greater than 0.5 at the 95 percent level. Our results are fairly insensitive to the local primordial spectral slope, but the likelihood analysis suggests n = -2 in the translinear regime. The zero-crossing scale of the quadrupole is k₀ = 0.5 ± 0.1 h Mpc⁻¹ and from this we infer that the amplitude of clustering is σ₈ = 0.7 ± 0.05. We suggest that the success of this model is due to non-linear redshift-space effects arising from infall onto caustics and is not dominated by virialized cluster cores. The latter should start to dominate on scales below the zero-crossing of the quadrupole, where our model breaks down.

  3. A Linearized Prognostic Cloud Scheme in NASA's Goddard Earth Observing System Data Assimilation Tools

    NASA Technical Reports Server (NTRS)

    Holdaway, Daniel; Errico, Ronald M.; Gelaro, Ronald; Kim, Jong G.; Mahajan, Rahul

    2015-01-01

    A linearized prognostic cloud scheme has been developed to accompany the linearized convection scheme recently implemented in NASA's Goddard Earth Observing System data assimilation tools. The linearization, developed from the nonlinear cloud scheme, treats cloud variables prognostically so they are subject to linearized advection, diffusion, generation, and evaporation. Four linearized cloud variables are modeled, the ice and water phases of clouds generated by large-scale condensation and, separately, by detraining convection. For each species the scheme models their sources, sublimation, evaporation, and autoconversion. Large-scale, anvil and convective species of precipitation are modeled and evaporated. The cloud scheme exhibits linearity and realistic perturbation growth, except around the generation of clouds through large-scale condensation. Discontinuities and steep gradients are widely used here and severe problems occur in the calculation of cloud fraction. For data assimilation applications this poor behavior is controlled by replacing this part of the scheme with a perturbation model. For observation impacts, where efficiency is less of a concern, a filtering is developed that examines the Jacobian. The replacement scheme is only invoked if Jacobian elements or eigenvalues violate a series of tuned constants. The linearized prognostic cloud scheme is tested by comparing the linear and nonlinear perturbation trajectories for 6-, 12-, and 24-h forecast times. The tangent linear model performs well and perturbations of clouds are well captured for the lead times of interest.

  4. Accurate and Efficient Parallel Implementation of an Effective Linear-Scaling Direct Random Phase Approximation Method.

    PubMed

    Graf, Daniel; Beuerle, Matthias; Schurkus, Henry F; Luenser, Arne; Savasci, Gökcen; Ochsenfeld, Christian

    2018-05-08

    An efficient algorithm for calculating the random phase approximation (RPA) correlation energy is presented that is as accurate as the canonical molecular orbital resolution-of-the-identity RPA (RI-RPA) with the important advantage of an effective linear-scaling behavior (instead of quartic) for large systems due to a formulation in the local atomic orbital space. The high accuracy is achieved by utilizing optimized minimax integration schemes and the local Coulomb metric attenuated by the complementary error function for the RI approximation. The memory bottleneck of former atomic orbital (AO)-RI-RPA implementations ( Schurkus, H. F.; Ochsenfeld, C. J. Chem. Phys. 2016 , 144 , 031101 and Luenser, A.; Schurkus, H. F.; Ochsenfeld, C. J. Chem. Theory Comput. 2017 , 13 , 1647 - 1655 ) is addressed by precontraction of the large 3-center integral matrix with the Cholesky factors of the ground state density reducing the memory requirements of that matrix by a factor of [Formula: see text]. Furthermore, we present a parallel implementation of our method, which not only leads to faster RPA correlation energy calculations but also to a scalable decrease in memory requirements, opening the door for investigations of large molecules even on small- to medium-sized computing clusters. Although it is known that AO methods are highly efficient for extended systems, where sparsity allows for reaching the linear-scaling regime, we show that our work also extends the applicability when considering highly delocalized systems for which no linear scaling can be achieved. As an example, the interlayer distance of two covalent organic framework pore fragments (comprising 384 atoms in total) is analyzed.
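
    The memory-saving step described above, contracting the three-center integral tensor with Cholesky factors of the ground-state density before storing it, can be mimicked on random data. The tensor dimensions and the basis-to-occupied ratio below are made-up placeholders rather than the paper's systems.

    ```python
    import numpy as np

    n_aux, n_bf, n_occ = 300, 200, 30               # illustrative dimensions only
    rng = np.random.default_rng(0)

    B = rng.standard_normal((n_aux, n_bf, n_bf))    # three-center RI integrals B[P, mu, nu]
    C_occ = rng.standard_normal((n_bf, n_occ))      # occupied MO coefficients
    P = C_occ @ C_occ.T                             # ground-state density, rank n_occ

    # Cholesky-like factorization of the positive semidefinite density: P = L @ L.T
    # (here L = C_occ already works; a pivoted Cholesky of P would be used in practice)
    L = C_occ

    # precontraction: store only B_tilde[P, mu, i] instead of the full B[P, mu, nu]
    B_tilde = np.einsum('pmn,ni->pmi', B, L)

    print("full tensor  :", B.nbytes / 1e6, "MB")
    print("precontracted:", B_tilde.nbytes / 1e6, "MB  (reduction factor n_bf/n_occ =", n_bf / n_occ, ")")
    ```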

  5. The fastclime Package for Linear Programming and Large-Scale Precision Matrix Estimation in R.

    PubMed

    Pang, Haotian; Liu, Han; Vanderbei, Robert

    2014-02-01

    We develop an R package fastclime for solving a family of regularized linear programming (LP) problems. Our package efficiently implements the parametric simplex algorithm, which provides a scalable and sophisticated tool for solving large-scale linear programs. As an illustrative example, one use of our LP solver is to implement an important sparse precision matrix estimation method called CLIME (Constrained L1 Minimization Estimator). Compared with existing packages for this problem such as clime and flare, our package has three advantages: (1) it efficiently calculates the full piecewise-linear regularization path; (2) it provides an accurate dual certificate as stopping criterion; (3) it is completely coded in C and is highly portable. This package is designed to be useful to statisticians and machine learning researchers for solving a wide range of problems.
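
    fastclime itself is an R/C package, but the column-wise CLIME linear programs it solves can be written down directly. A rough Python analogue using scipy.optimize.linprog (a generic LP solver, not the parametric simplex method the package implements) might look like this.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(1)
    p, n, lam = 8, 200, 0.1
    X = rng.normal(size=(n, p))
    S = np.cov(X, rowvar=False)                         # sample covariance

    def clime_column(S, j, lam):
        """Solve min ||b||_1 s.t. ||S b - e_j||_inf <= lam as an LP in (b+, b-)."""
        p = S.shape[0]
        e = np.zeros(p); e[j] = 1.0
        c = np.ones(2 * p)                              # objective: sum(b+ + b-)
        A = np.vstack([np.hstack([S, -S]),              #  S b - e_j <= lam
                       np.hstack([-S, S])])             # -S b + e_j <= lam
        b_ub = np.concatenate([lam + e, lam - e])
        res = linprog(c, A_ub=A, b_ub=b_ub, bounds=[(0, None)] * (2 * p), method="highs")
        bp, bm = res.x[:p], res.x[p:]
        return bp - bm

    Omega = np.column_stack([clime_column(S, j, lam) for j in range(p)])
    # CLIME symmetrizes by keeping, for each pair, the entry of smaller magnitude
    Omega_sym = np.where(np.abs(Omega) <= np.abs(Omega.T), Omega, Omega.T)
    print(np.round(Omega_sym, 2))
    ```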

  6. Pretest Predictions for Ventilation Tests

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Y. Sun; H. Yang; H.N. Kalia

    The objective of this calculation is to predict the temperatures of the ventilating air, waste package surface, concrete pipe walls, and insulation that will be developed during the ventilation tests involving various test conditions. The results will be used as input to the following three areas: (1) Decisions regarding testing set-up and performance. (2) Assessing how best to scale the test phenomena measured. (3) Validating numerical approach for modeling continuous ventilation. The scope of the calculation is to identify the physical mechanisms and parameters related to thermal response in the ventilation tests, and develop and describe numerical methods that can be used to calculate the effects of continuous ventilation. Sensitivity studies to assess the impact of variation of linear power densities (linear heat loads) and ventilation air flow rates are included. The calculation is limited to thermal effect only.

  7. An accurate and linear-scaling method for calculating charge-transfer excitation energies and diabatic couplings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pavanello, Michele; Van Voorhis, Troy; Visscher, Lucas

    2013-02-07

    Quantum-mechanical methods that are both computationally fast and accurate are not yet available for electronic excitations having charge transfer character. In this work, we present a significant step forward towards this goal for those charge transfer excitations that take place between non-covalently bound molecules. In particular, we present a method that scales linearly with the number of non-covalently bound molecules in the system and is based on a two-pronged approach: The molecular electronic structure of broken-symmetry charge-localized states is obtained with the frozen density embedding formulation of subsystem density-functional theory; subsequently, in a post-SCF calculation, the full-electron Hamiltonian and overlap matrix elements among the charge-localized states are evaluated with an algorithm which takes full advantage of the subsystem DFT density partitioning technique. The method is benchmarked against coupled-cluster calculations and achieves chemical accuracy for the systems considered for intermolecular separations ranging from hydrogen-bond distances to tens of Angstroms. Numerical examples are provided for molecular clusters comprised of up to 56 non-covalently bound molecules.

  8. An accurate and linear-scaling method for calculating charge-transfer excitation energies and diabatic couplings.

    PubMed

    Pavanello, Michele; Van Voorhis, Troy; Visscher, Lucas; Neugebauer, Johannes

    2013-02-07

    Quantum-mechanical methods that are both computationally fast and accurate are not yet available for electronic excitations having charge transfer character. In this work, we present a significant step forward towards this goal for those charge transfer excitations that take place between non-covalently bound molecules. In particular, we present a method that scales linearly with the number of non-covalently bound molecules in the system and is based on a two-pronged approach: The molecular electronic structure of broken-symmetry charge-localized states is obtained with the frozen density embedding formulation of subsystem density-functional theory; subsequently, in a post-SCF calculation, the full-electron Hamiltonian and overlap matrix elements among the charge-localized states are evaluated with an algorithm which takes full advantage of the subsystem DFT density partitioning technique. The method is benchmarked against coupled-cluster calculations and achieves chemical accuracy for the systems considered for intermolecular separations ranging from hydrogen-bond distances to tens of Ångstroms. Numerical examples are provided for molecular clusters comprised of up to 56 non-covalently bound molecules.

  9. The impact of using area-averaged land surface properties—topography, vegetation condition, soil wetness—in calculations of intermediate scale (approximately 10 km²) surface-atmosphere heat and moisture fluxes

    NASA Astrophysics Data System (ADS)

    Sellers, Piers J.; Heiser, Mark D.; Hall, Forrest G.; Verma, Shashi B.; Desjardins, Raymond L.; Schuepp, Peter M.; Ian MacPherson, J.

    1997-03-01

    It is commonly assumed that biophysically based soil-vegetation-atmosphere transfer (SVAT) models are scale-invariant with respect to the initial boundary conditions of topography, vegetation condition and soil moisture. In practice, SVAT models that have been developed and tested at the local scale (a few meters or a few tens of meters) are applied almost unmodified within general circulation models (GCMs) of the atmosphere, which have grid areas of 50-500 km². This study, which draws much of its substantive material from the papers of Sellers et al. (1992c, J. Geophys. Res., 97(D17): 19033-19060) and Sellers et al. (1995, J. Geophys. Res., 100(D12): 25607-25629), explores the validity of doing this. The work makes use of the FIFE-89 data set which was collected over a 2 km × 15 km grassland area in Kansas. The site was characterized by high variability in soil moisture and vegetation condition during the late growing season of 1989. The area also has moderate topography. The 2 km × 15 km 'testbed' area was divided into 68 × 501 pixels of 30 m × 30 m spatial resolution, each of which could be assigned topographic, vegetation condition and soil moisture parameters from satellite and in situ observations gathered in FIFE-89. One or more of these surface fields was area-averaged in a series of simulation runs to determine the impact of using large-area means of these initial or boundary conditions on the area-integrated (aggregated) surface fluxes. The results of the study can be summarized as follows: 1. Analyses and some of the simulations indicated that the relationships describing the effects of moderate topography on the surface radiation budget are near-linear and thus largely scale-invariant. The relationships linking the simple ratio vegetation index (SR), the canopy conductance parameter (▽F) and the canopy transpiration flux are also near-linear and similarly scale-invariant to first order. Because of this, it appears that simple area-averaging operations can be applied to these fields with relatively little impact on the calculated surface heat flux. 2. The relationships linking surface and root-zone soil wetness to the soil surface and canopy transpiration rates are non-linear. However, simulation results and observations indicate that soil moisture variability decreases significantly as an area dries out, which partially cancels out the effects of these non-linear functions. In conclusion, it appears that simple averages of topographic slope and vegetation parameters can be used to calculate surface energy and heat fluxes over a wide range of spatial scales, from a few meters up to many kilometers, at least for grassland sites and areas with moderate topography. Although the relationships between soil moisture and evapotranspiration are non-linear for intermediate soil wetnesses, the dynamics of soil drying act to progressively reduce soil moisture variability and thus the impacts of these non-linearities on the area-averaged surface fluxes. These findings indicate that we may be able to use mean values of topography, vegetation condition and soil moisture to calculate the surface-atmosphere fluxes of energy, heat and moisture at larger length scales, to within an acceptable accuracy for climate modeling work. However, further tests over areas with different vegetation types, soils and more extreme topography are required to improve our confidence in this approach.
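
    The central aggregation argument, that averaging commutes with near-linear relationships but not with strongly non-linear ones unless the input variability is small, is easy to reproduce with a toy calculation. The wetness distribution and the two response functions below are invented purely for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # 30 m pixels of soil wetness across a heterogeneous grassland strip (synthetic)
    s = np.clip(rng.normal(0.45, 0.15, 68 * 501), 0.05, 1.0)

    def et_nonlinear(s):
        # toy, strongly non-linear soil-moisture stress function (illustrative only)
        return 1.0 / (1.0 + np.exp(-12.0 * (s - 0.4)))

    def albedo_linear(s):
        # toy near-linear dependence, standing in for the radiation/vegetation terms
        return 0.25 - 0.1 * s

    for f, name in [(et_nonlinear, "non-linear ET stress"), (albedo_linear, "linear albedo term")]:
        resolved = f(s).mean()   # aggregate flux from fully resolved 30 m fields
        lumped = f(s.mean())     # flux computed from a single area-averaged wetness
        print(f"{name}: resolved = {resolved:.4f}, averaged input = {lumped:.4f}")
    ```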

  10. Quantum criticality of the two-channel pseudogap Anderson model: universal scaling in linear and non-linear conductance.

    PubMed

    Wu, Tsan-Pei; Wang, Xiao-Qun; Guo, Guang-Yu; Anders, Frithjof; Chung, Chung-Hou

    2016-05-05

    The quantum criticality of the two-lead two-channel pseudogap Anderson impurity model is studied. Based on the non-crossing approximation (NCA) and numerical renormalization group (NRG) approaches, we calculate both the linear and nonlinear conductance of the model at finite temperatures with a voltage bias and a power-law vanishing conduction electron density of states, ρc(ω) ∝ |ω − μF|^r (0 < r < 1) near the Fermi energy μF. At a fixed lead-impurity hybridization, a quantum phase transition from the two-channel Kondo (2CK) to the local moment (LM) phase is observed with increasing r from r = 0 to r = rc < 1. Surprisingly, in the 2CK phase, power-law scalings different from the well-known [Formula: see text] or [Formula: see text] form are found. Moreover, novel power-law scalings in conductances at the 2CK-LM quantum critical point are identified. Clear distinctions are found between the critical exponents of the linear and non-linear conductance at criticality. The implications of these two distinct quantum critical properties for the non-equilibrium quantum criticality in general are discussed.

  11. Scaled effective on-site Coulomb interaction in the DFT+U method for correlated materials

    NASA Astrophysics Data System (ADS)

    Nawa, Kenji; Akiyama, Toru; Ito, Tomonori; Nakamura, Kohji; Oguchi, Tamio; Weinert, M.

    2018-01-01

    The first-principles calculation of correlated materials within density functional theory remains challenging, but the inclusion of a Hubbard-type effective on-site Coulomb term (Ueff) often provides a computationally tractable and physically reasonable approach. However, the reported values of Ueff vary widely, even for the same ionic state and the same material. Since the final physical results can depend critically on the choice of parameter and the computational details, there is a need to have a consistent procedure to choose an appropriate one. We revisit this issue from constraint density functional theory, using the full-potential linearized augmented plane wave method. The calculated Ueff parameters for the prototypical transition-metal monoxides—MnO, FeO, CoO, and NiO—are found to depend significantly on the muffin-tin radius RMT, with variations of more than 2-3 eV as RMT changes from 2.0 to 2.7 aB. Despite this large variation in Ueff, the calculated valence bands differ only slightly. Moreover, we find an approximately linear relationship between Ueff(RMT) and the number of occupied localized electrons within the sphere, and give a simple scaling argument for Ueff; these results provide a rationalization for the large variation in reported values. Although our results imply that Ueff values are not directly transferable among different calculation methods (or even the same one with different input parameters such as RMT), use of this scaling relationship should help simplify the choice of Ueff.

  12. Perturbations from cosmic strings in cold dark matter

    NASA Technical Reports Server (NTRS)

    Albrecht, Andreas; Stebbins, Albert

    1992-01-01

    A systematic linear analysis of the perturbations induced by cosmic strings in cold dark matter is presented. The power spectrum is calculated and it is found that the strings produce a great deal of power on small scales. It is shown that the perturbations on interesting scales are the result of many uncorrelated string motions, which indicates a much more Gaussian distribution than was previously supposed.

  13. Perturbations from cosmic strings in cold dark matter

    NASA Technical Reports Server (NTRS)

    Albrecht, Andreas; Stebbins, Albert

    1991-01-01

    A systematic linear analysis of the perturbations induced by cosmic strings in cold dark matter is presented. The power spectrum is calculated and it is found that the strings produce a great deal of power on small scales. It is shown that the perturbations on interesting scales are the result of many uncorrelated string motions, which indicates a much more Gaussian distribution than was previously supposed.

  14. A new scale of electronegativity based on electrophilicity index.

    PubMed

    Noorizadeh, Siamak; Shakerzadeh, Ehsan

    2008-04-17

    By calculating the energies of neutral and different ionic forms (M²⁺, M⁺, M, M⁻, and M²⁻) of 32 elements (using B3LYP/6-311++G** level of theory) and taking energy (E) to be a Morse-like function of the number of electrons (N), the electrophilicity values (ω) are calculated for these atoms. The obtained electrophilicities show a good linearity with some commonly used electronegativity scales such as Pauling and Allred-Rochow. Using these electrophilicities, the ionicities of some diatomic molecules are calculated, which are in good agreement with the experimental data. Therefore, these electrophilicities are introduced as a new scale for atomic electronegativity, χω⁰. The same procedure is also performed for some simple polyatomic molecules. It is shown that the new scale successfully obeys Sanderson's electronegativity equalization principle and for those molecules which have the same number of atoms, the ratio of the change in electronegativity during the formation of a molecule from its elements to the molecular electronegativity (Δχ/χω) is the same.
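
    For orientation, the electrophilicity index is commonly written ω = μ²/(2η), with the chemical potential and hardness taken from finite differences of E(N). The sketch below uses this standard three-point parabolic model with made-up energies, whereas the paper instead fits a Morse-like E(N) through five charge states.

    ```python
    def electrophilicity(E_cation, E_neutral, E_anion):
        """Standard (parabolic) electrophilicity index from E(N-1), E(N), E(N+1) in eV.

        Note: conventions for the hardness differ by a factor of 2 in the literature;
        here eta = I - A, giving omega = (I + A)^2 / (8 (I - A)).
        """
        ionization = E_cation - E_neutral          # I
        affinity = E_neutral - E_anion             # A
        mu = -(ionization + affinity) / 2.0        # chemical potential (= -electronegativity)
        eta = ionization - affinity                # chemical hardness
        return mu ** 2 / (2.0 * eta)

    # hypothetical energies (eV) for a fictitious atom, purely for illustration
    omega = electrophilicity(E_cation=-100.0, E_neutral=-110.0, E_anion=-111.5)
    print(f"electrophilicity index omega = {omega:.3f} eV")
    ```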

  15. Scaling of induction-cell transverse impedance: effect on accelerator design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ekdahl, Carl August

    2016-08-09

    The strength of the dangerous beam breakup (BBU) instability in linear induction accelerators (LIAs) is characterized by the transverse coupling impedance Z⊥. This note addresses the dimensional scaling of Z⊥, which is important when comparing new LIA designs to existing accelerators with known BBU growth. Moreover, it is shown that the scaling of Z⊥ with the accelerating gap size relates BBU growth directly to high-voltage engineering considerations. It is proposed to firmly establish this scaling through a series of AMOS calculations.

  16. A simplified method for power-law modelling of metabolic pathways from time-course data and steady-state flux profiles.

    PubMed

    Kitayama, Tomoya; Kinoshita, Ayako; Sugimoto, Masahiro; Nakayama, Yoichi; Tomita, Masaru

    2006-07-17

    In order to improve understanding of metabolic systems there have been attempts to construct S-system models from time courses. Conventionally, non-linear curve-fitting algorithms have been used for modelling, because of the non-linear properties of parameter estimation from time series. However, the huge iterative calculations required have hindered the development of large-scale metabolic pathway models. To solve this problem we propose a novel method involving power-law modelling of metabolic pathways from the Jacobian of the targeted system and the steady-state flux profiles by linearization of S-systems. The results of two case studies modelling a straight and a branched pathway, respectively, showed that our method reduced the number of unknown parameters needing to be estimated. The time-courses simulated by conventional kinetic models and those described by our method behaved similarly under a wide range of perturbations of metabolite concentrations. The proposed method reduces calculation complexity and facilitates the construction of large-scale S-system models of metabolic pathways, realizing a practical application of reverse engineering of dynamic simulation models from the Jacobian of the targeted system and steady-state flux profiles.
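
    The linearization underlying this approach relies on the standard power-law (S-system) identification of kinetic orders with scaled Jacobian elements, f = (∂V/∂X)(X/V) at the operating point. A minimal single-reaction illustration with an assumed Michaelis-Menten rate and arbitrary constants:

    ```python
    import numpy as np

    # Michaelis-Menten rate as the "true" kinetics (arbitrary illustrative constants)
    Vmax, Km = 2.0, 0.5
    v = lambda S: Vmax * S / (Km + S)

    S0 = 1.0                                  # operating (steady-state) concentration
    dv_dS = Vmax * Km / (Km + S0) ** 2        # Jacobian element dV/dS at S0

    # power-law (S-system) representation V ~ alpha * S**g around the operating point
    g = dv_dS * S0 / v(S0)                    # kinetic order from the scaled Jacobian
    alpha = v(S0) / S0 ** g                   # rate constant fixed by the steady-state flux
    v_power = lambda S: alpha * S ** g

    for S in (0.5, 0.8, 1.0, 1.2, 2.0):
        print(f"S = {S:3.1f}   Michaelis-Menten = {v(S):.4f}   power law = {v_power(S):.4f}")
    ```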

  17. Relating Stellar Cycle Periods to Dynamo Calculations

    NASA Technical Reports Server (NTRS)

    Tobias, S. M.

    1998-01-01

    Stellar magnetic activity in slowly rotating stars is often cyclic, with the period of the magnetic cycle depending critically on the rotation rate and the convective turnover time of the star. Here we show that the interpretation of this law from dynamo models is not a simple task. It is demonstrated that the period is (unsurprisingly) sensitive to the precise type of non-linearity employed. Moreover the calculation of the wave-speed of plane-wave solutions does not (as was previously supposed) give an indication of the magnetic period in a more realistic dynamo model, as the changes in length-scale of solutions are not easily captured by this approach. Progress can be made, however, by considering a realistic two-dimensional model, in which the radial length-scale of waves is included. We show that it is possible in this case to derive a more robust relation between cycle period and dynamo number. For all the non-linearities considered in the most realistic model, the magnetic cycle period is a decreasing function of |D| (the amplitude of the dynamo number). However, discriminating between different non-linearities is difficult in this case and care must therefore be taken before advancing explanations for the magnetic periods of stars.

  18. Thermal Conductivity of Liquid Water from Reverse Nonequilibrium Ab Initio Molecular Dynamics

    NASA Astrophysics Data System (ADS)

    Tsuchida, Eiji

    2018-02-01

    We report on a theoretical framework for calculating the thermal conductivity of liquid water from first principles with the aid of the linear scaling method. We also discuss the possibility of obtaining equilibrium properties from a nonequilibrium trajectory.

  19. Linear Scaling of the Exciton Binding Energy versus the Band Gap of Two-Dimensional Materials

    NASA Astrophysics Data System (ADS)

    Choi, Jin-Ho; Cui, Ping; Lan, Haiping; Zhang, Zhenyu

    2015-08-01

    The exciton is one of the most crucial physical entities in the performance of optoelectronic and photonic devices, and widely varying exciton binding energies have been reported in different classes of materials. Using first-principles calculations within the GW-Bethe-Salpeter equation approach, here we investigate the excitonic properties of two recently discovered layered materials: phosphorene and graphene fluoride. We first confirm large exciton binding energies of, respectively, 0.85 and 2.03 eV in these systems. Next, by comparing these systems with several other representative two-dimensional materials, we discover a striking linear relationship between the exciton binding energy and the band gap and interpret the existence of the linear scaling law within a simple hydrogenic picture. The broad applicability of this novel scaling law is further demonstrated by using strained graphene fluoride. These findings are expected to stimulate related studies in higher and lower dimensions, potentially resulting in a deeper understanding of excitonic effects in materials of all dimensionalities.

  20. Highly parallel demagnetization field calculation using the fast multipole method on tetrahedral meshes with continuous sources

    NASA Astrophysics Data System (ADS)

    Palmesi, P.; Exl, L.; Bruckner, F.; Abert, C.; Suess, D.

    2017-11-01

    The long-range magnetic field is the most time-consuming part in micromagnetic simulations. Computational improvements can relieve problems related to this bottleneck. This work presents an efficient implementation of the Fast Multipole Method [FMM] for the magnetic scalar potential as used in micromagnetics. The novelty lies in extending FMM to linearly magnetized tetrahedral sources, making it interesting also for other areas of computational physics. We treat the near field directly and use (exact) numerical integration on the multipole expansion in the far field. This approach tackles important issues like the vectorial and continuous nature of the magnetic field. By using FMM the calculations scale linearly in time and memory.

  1. Plasmon mass scale and quantum fluctuations of classical fields on a real time lattice

    NASA Astrophysics Data System (ADS)

    Kurkela, Aleksi; Lappi, Tuomas; Peuron, Jarkko

    2018-03-01

    Classical real-time lattice simulations play an important role in understanding non-equilibrium phenomena in gauge theories and are used in particular to model the prethermal evolution of heavy-ion collisions. Above the Debye scale the classical Yang-Mills (CYM) theory can be matched smoothly to kinetic theory. First we study the limits of the quasiparticle picture of the CYM fields by determining the plasmon mass of the system using three different methods. Then we argue that one needs a numerical calculation of a system of classical gauge fields and small linearized fluctuations, which correspond to quantum fluctuations, in a way that keeps the separation between the two manifest. We demonstrate and test an implementation of an algorithm with linearized fluctuations, showing that the linearization indeed works and that Gauss's law is conserved.

  2. Computation of indirect nuclear spin-spin couplings with reduced complexity in pure and hybrid density functional approximations.

    PubMed

    Luenser, Arne; Kussmann, Jörg; Ochsenfeld, Christian

    2016-09-28

    We present a (sub)linear-scaling algorithm to determine indirect nuclear spin-spin coupling constants at the Hartree-Fock and Kohn-Sham density functional levels of theory. Employing efficient integral algorithms and sparse algebra routines, an overall (sub)linear scaling behavior can be obtained for systems with a non-vanishing HOMO-LUMO gap. Calculations on systems with over 1000 atoms and 20 000 basis functions illustrate the performance and accuracy of our reference implementation. Specifically, we demonstrate that linear algebra dominates the runtime of conventional algorithms for 10 000 basis functions and above. Attainable speedups of our method exceed 6 × in total runtime and 10 × in the linear algebra steps for the tested systems. Furthermore, a convergence study of spin-spin couplings of an aminopyrazole peptide upon inclusion of the water environment is presented: using the new method it is shown that large solvent spheres are necessary to converge spin-spin coupling values.

  3. Measuring the Power Spectrum with Peculiar Velocities

    NASA Astrophysics Data System (ADS)

    Macaulay, Edward; Feldman, H. A.; Ferreira, P. G.; Jaffe, A. H.; Agarwal, S.; Hudson, M. J.; Watkins, R.

    2012-01-01

    The peculiar velocities of galaxies are an inherently valuable cosmological probe, providing an unbiased estimate of the distribution of matter on scales much larger than the depth of the survey. Much research interest has been motivated by the high dipole moment of our local peculiar velocity field, which suggests a large scale excess in the matter power spectrum, and can appear to be in some tension with the ΛCDM model. We use a composite catalogue of 4,537 peculiar velocity measurements with a characteristic depth of 33 h⁻¹ Mpc to estimate the matter power spectrum. We compare the constraints with this method, directly studying the full peculiar velocity catalogue, to results from Macaulay et al. (2011), studying minimum variance moments of the velocity field, as calculated by Watkins, Feldman & Hudson (2009) and Feldman, Watkins & Hudson (2010). We find good agreement with the ΛCDM model on scales of k > 0.01 h Mpc⁻¹. We find an excess of power on scales of k < 0.01 h Mpc⁻¹, although with a 1σ uncertainty which includes the ΛCDM model. We find that the uncertainty in the excess at these scales is larger than an alternative result studying only moments of the velocity field, which is due to the minimum variance weights used to calculate the moments. At small scales, we are able to clearly discriminate between linear and nonlinear clustering in simulated peculiar velocity catalogues, and find some evidence (although less clear) for linear clustering in the real peculiar velocity data.

  4. Multiscale analysis of the gradient of linear polarization

    NASA Astrophysics Data System (ADS)

    Robitaille, J.-F.; Scaife, A. M. M.

    2015-07-01

    We propose a new multiscale method to calculate the amplitude of the gradient of the linear polarization vector, |∇ P|, using a wavelet-based formalism. We demonstrate this method using a field of the Canadian Galactic Plane Survey and show that the filamentary structure typically seen in |∇ P| maps depends strongly on the instrumental resolution. Our analysis reveals that different networks of filaments are present on different angular scales. The wavelet formalism allows us to calculate the power spectrum of the fluctuations seen in |∇ P| and to determine the scaling behaviour of this quantity. The power spectrum is found to follow a power law with γ ≈ 2.1. We identify a small drop in power between scales of 80 ≲ l ≲ 300 arcmin, which corresponds well to the overlap in the u-v plane between the Effelsberg 100-m telescope and the Dominion Radio Astrophysical Observatory 26-m telescope data. We suggest that this drop is due to undersampling present in the 26-m telescope data. In addition, the wavelet coefficient distributions show higher skewness on smaller scales than at larger scales. The spatial distribution of the outliers in the tails of these distributions creates a coherent subset of filaments correlated across multiple scales, which trace the sharpest changes in the polarization vector P within the field. We suggest that these structures may be associated with highly compressive shocks in the medium. The power spectrum of the field excluding these outliers shows a steeper power law with γ ≈ 2.5.
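
    Independently of the wavelet machinery, the base quantity |∇P| is straightforward to compute from Stokes Q and U maps using the usual definition |∇P|² = (∂Q/∂x)² + (∂Q/∂y)² + (∂U/∂x)² + (∂U/∂y)². A sketch on synthetic maps (the test images below are invented):

    ```python
    import numpy as np

    def polarization_gradient(Q, U, pixel_size=1.0):
        """|grad P| from Stokes Q and U images (Gaensler et al. 2011 definition)."""
        dQy, dQx = np.gradient(Q, pixel_size)
        dUy, dUx = np.gradient(U, pixel_size)
        return np.sqrt(dQx**2 + dQy**2 + dUx**2 + dUy**2)

    # synthetic smooth Q map plus a U map with a sharp front, just to exercise the function
    y, x = np.mgrid[0:256, 0:256]
    Q = np.sin(2 * np.pi * x / 64.0)
    U = np.tanh((x - 128) / 2.0) * np.cos(2 * np.pi * y / 64.0)

    grad_P = polarization_gradient(Q, U)
    print("pixel with maximum |grad P|:", np.unravel_index(np.argmax(grad_P), grad_P.shape))
    ```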

  5. Dispersion interactions with linear scaling DFT: a study of planar molecules on charged polar surfaces

    NASA Astrophysics Data System (ADS)

    Andrinopoulos, Lampros; Hine, Nicholas; Haynes, Peter; Mostofi, Arash

    2010-03-01

    The placement of organic molecules such as CuPc (copper phthalocyanine) on wurtzite ZnO (zinc oxide) charged surfaces has been proposed as a way of creating photovoltaic solar cells [G.D. Sharma et al., Solar Energy Materials & Solar Cells 90, 933 (2006)]; optimising their performance may be aided by computational simulation. Electronic structure calculations provide high accuracy at modest computational cost but two challenges are encountered for such layered systems. First, the system size is at or beyond the limit of traditional cubic-scaling Density Functional Theory (DFT). Second, traditional exchange-correlation functionals do not account for van der Waals (vdW) interactions, crucial for determining the structure of weakly bonded systems. We present an implementation of recently developed approaches [P.L. Silvestrelli, Phys. Rev. Lett. 100, 102 (2008)] to include vdW in DFT within ONETEP [C.-K. Skylaris, P.D. Haynes, A.A. Mostofi and M.C. Payne, J. Chem. Phys. 122, 084119 (2005)], a linear-scaling package for performing DFT calculations using a basis of localised functions. We have applied this methodology to simple planar organic molecules, such as benzene and pentacene, on ZnO surfaces.

  6. Breaking the theoretical scaling limit for predicting quasiparticle energies: the stochastic GW approach.

    PubMed

    Neuhauser, Daniel; Gao, Yi; Arntsen, Christopher; Karshenas, Cyrus; Rabani, Eran; Baer, Roi

    2014-08-15

    We develop a formalism to calculate the quasiparticle energy within the GW many-body perturbation correction to the density functional theory. The occupied and virtual orbitals of the Kohn-Sham Hamiltonian are replaced by stochastic orbitals used to evaluate the Green function G, the polarization potential W, and, thereby, the GW self-energy. The stochastic GW (sGW) formalism relies on novel theoretical concepts such as stochastic time-dependent Hartree propagation, stochastic matrix compression, and spatial or temporal stochastic decoupling techniques. Beyond the theoretical interest, the formalism enables linear scaling GW calculations breaking the theoretical scaling limit for GW as well as circumventing the need for energy cutoff approximations. We illustrate the method for silicon nanocrystals of varying sizes with N_{e}>3000 electrons.

  7. Mathematical Simulation for Integrated Linear Fresnel Spectrometer Chip

    NASA Technical Reports Server (NTRS)

    Park, Yeonjoon; Yoon, Hargoon; Lee, Uhn; King, Glen C.; Choi, Sang H.

    2012-01-01

    A miniaturized solid-state optical spectrometer chip was designed with a linear gradient-gap Fresnel grating which was mounted perpendicularly to a sensor array surface and simulated for its performance and functionality. Unlike common spectrometers which are based on Fraunhofer diffraction with a regular periodic line grating, the new linear gradient grating Fresnel spectrometer chip can be miniaturized to a much smaller form-factor into the Fresnel regime, exceeding the limit of conventional spectrometers. This mathematical calculation shows that building a tiny motionless multi-pixel microspectrometer chip which is smaller than 1 cubic millimeter of optical path volume is possible. The new Fresnel spectrometer chip is proportional to the energy scale (hc/λ), while the conventional spectrometers are proportional to the wavelength scale (λ). We report the theoretical optical working principle and new data collection algorithm of the new Fresnel spectrometer to build a compact integrated optical chip.

  8. Eigentime identities for weighted polymer networks

    NASA Astrophysics Data System (ADS)

    Dai, Meifeng; Tang, Hualong; Zou, Jiahui; He, Di; Sun, Yu; Su, Weiyi

    2018-01-01

    In this paper, we first analytically calculate the eigenvalues of the transition matrix of a structure with very complex architecture, and their multiplicities. We call this structure a polymer network. Based on the eigenvalues obtained in an iterative manner, we then calculate the eigentime identity. We highlight two scaling behaviors (logarithmic and linear) for this quantity, strongly depending on the value of the weight factor. Finally, by making use of the obtained eigenvalues, we determine the weighted counting of spanning trees.
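
    The eigentime identity invoked here states, for a reversible random walk, that the sum of 1/(1−λ) over the non-unit eigenvalues of the transition matrix equals Kemeny's constant Σ_j π_j E_i[T_j]. The check below uses a small arbitrary weighted graph rather than the iteratively constructed polymer networks of the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # small random weighted, connected, undirected graph (weights are arbitrary)
    n = 8
    W = np.triu(rng.uniform(0.5, 2.0, (n, n)), 1)
    W = W + W.T
    d = W.sum(axis=1)
    P = W / d[:, None]                     # transition matrix of the weighted random walk
    pi = d / d.sum()                       # stationary distribution

    # spectral side: sum over non-unit eigenvalues of 1/(1 - lambda)
    lam = np.sort(np.linalg.eigvals(P).real)[::-1]
    eigentime = np.sum(1.0 / (1.0 - lam[1:]))

    # probabilistic side: Kemeny's constant sum_j pi_j * E_0[T_j] via mean hitting times
    def mean_hitting_time(P, target):
        idx = [i for i in range(len(P)) if i != target]
        A = np.eye(len(idx)) - P[np.ix_(idx, idx)]
        h = np.linalg.solve(A, np.ones(len(idx)))
        full = np.zeros(len(P))
        full[idx] = h
        return full                        # E_i[T_target] for every starting node i

    kemeny = sum(pi[j] * mean_hitting_time(P, j)[0] for j in range(n))
    print(f"spectral sum = {eigentime:.6f}   Kemeny constant = {kemeny:.6f}")
    ```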

  9. On the effects of tidal interaction on thin accretion disks: An analytic study

    NASA Technical Reports Server (NTRS)

    Dgani, R.; Livio, M.; Regev, O.

    1994-01-01

    We calculate tidal effects on two-dimensional thin accretion disks in binary systems. We apply a perturbation expansion to obtain an analytic solution of the tidally induced waves. We obtain spiral waves that are stronger at the inner parts of the disks, in addition to a local disturbance which scales like the strength of the local tidal force. Our results agree with recent calculations of the linear response of the disk to tidal interaction.

  10. Continuum calculations of continental deformation in transcurrent environments

    NASA Technical Reports Server (NTRS)

    Sonder, L. J.; England, P. C.; Houseman, G. A.

    1986-01-01

    A thin viscous sheet approximation is used to investigate continental deformation near a strike-slip boundary. The vertically averaged velocity field is calculated for a medium characterized by a power law rheology with stress exponent n. Driving stresses include those applied along boundaries of the sheet and those arising from buoyancy forces related to lateral differences in crustal thickness. Exact and approximate analytic solutions for a region with a sinusoidal strike-slip boundary condition are compared with solutions for more geologically relevant boundary conditions obtained using a finite element technique. The across-strike length scale of the deformation is approximately √n/(4π) times the dominant wavelength of the imposed strike-slip boundary condition for both the analytic and the numerical solutions; this result is consistent with length scales observed in continental regions of large-scale transcurrent faulting. An approximate, linear relationship between displacement and rotation is found that depends only on the deformation length scale and the rheology. Calculated displacements, finite rotations, and distribution of crustal thicknesses are consistent with those observed in the region of the Pacific-North America plate boundary in California.

  11. Multigrid calculation of three-dimensional viscous cascade flows

    NASA Technical Reports Server (NTRS)

    Arnone, A.; Liou, M.-S.; Povinelli, L. A.

    1991-01-01

    A 3-D code for viscous cascade flow prediction was developed. The space discretization uses a cell-centered scheme with eigenvalue scaling to weigh the artificial dissipation terms. Computational efficiency of a four stage Runge-Kutta scheme is enhanced by using variable coefficients, implicit residual smoothing, and a full multigrid method. The Baldwin-Lomax eddy viscosity model is used for turbulence closure. A zonal, nonperiodic grid is used to minimize mesh distortion in and downstream of the throat region. Applications are presented for an annular vane with and without end wall contouring, and for a large scale linear cascade. The calculation is validated by comparing with experiments and by studying grid dependency.

  12. Multigrid calculation of three-dimensional viscous cascade flows

    NASA Technical Reports Server (NTRS)

    Arnone, A.; Liou, M.-S.; Povinelli, L. A.

    1991-01-01

    A three-dimensional code for viscous cascade flow prediction has been developed. The space discretization uses a cell-centered scheme with eigenvalue scaling to weigh the artificial dissipation terms. Computational efficiency of a four-stage Runge-Kutta scheme is enhanced by using variable coefficients, implicit residual smoothing, and a full-multigrid method. The Baldwin-Lomax eddy-viscosity model is used for turbulence closure. A zonal, nonperiodic grid is used to minimize mesh distortion in and downstream of the throat region. Applications are presented for an annular vane with and without end wall contouring, and for a large-scale linear cascade. The calculation is validated by comparing with experiments and by studying grid dependency.

  13. Local unitary transformation method toward practical electron correlation calculations with scalar relativistic effect in large-scale molecules

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seino, Junji; Nakai, Hiromi

    In order to perform practical electron correlation calculations, the local unitary transformation (LUT) scheme at the spin-free infinite-order Douglas–Kroll–Hess (IODKH) level [J. Seino and H. Nakai, J. Chem. Phys. 136, 244102 (2012); J. Seino and H. Nakai, J. Chem. Phys. 137, 144101 (2012)], which is based on the locality of relativistic effects, has been combined with the linear-scaling divide-and-conquer (DC)-based Hartree–Fock (HF) and electron correlation methods, such as the second-order Møller–Plesset (MP2) and the coupled cluster theories with single and double excitations (CCSD). Numerical applications to hydrogen halide molecules (HX)n (X = F, Cl, Br, and I), coinage metal chain systems Mn (M = Cu and Ag), and the platinum-terminated polyynediyl chain trans,trans-((p-CH3C6H4)3P)2(C6H5)Pt(C≡C)4Pt(C6H5)((p-CH3C6H4)3P)2 clarified that the present methods, namely DC-HF, MP2, and CCSD with the LUT-IODKH Hamiltonian, reproduce the results obtained using conventional methods with small computational costs. The combination of both LUT and DC techniques could be the first approach that achieves overall quasi-linear scaling with a small prefactor for relativistic electron correlation calculations.

  14. A global traveling wave on Venus

    NASA Technical Reports Server (NTRS)

    Smith, Michael D.; Gierasch, Peter J.; Schinder, Paul J.

    1993-01-01

    The dominant large-scale pattern in the clouds of Venus has been described as a 'Y' or 'Psi' and tentatively identified by earlier workers as a Kelvin wave. A detailed calculation of linear wave modes in the Venus atmosphere verifies this identification. Cloud feedback by infrared heating fluctuations is a plausible excitation mechanism. Modulation of the large-scale pattern by the wave is a possible explanation for the Y. Momentum transfer by the wave could contribute to sustaining the general circulation.

  15. Scaling results for the magnetic field line trajectories in the stochastic layer near the separatrix in divertor tokamaks with high magnetic shear using the higher shear map

    NASA Astrophysics Data System (ADS)

    Punjabi, Alkesh; Ali, Halima; Farhat, Hamidullah

    2009-07-01

    Extra terms are added to the generating function of the simple map (Punjabi et al 1992 Phys. Rev. Lett. 69 3322) to adjust shear of magnetic field lines in divertor tokamaks. From this new generating function, a higher shear map is derived from a canonical transformation. A continuous analog of the higher shear map is also derived. The method of maps (Punjabi et al 1994 J. Plasma Phys. 52 91) is used to calculate the average shear, stochastic broadening of the ideal separatrix near the X-point in the principal plane of the tokamak, loss of poloidal magnetic flux from inside the ideal separatrix, magnetic footprint on the collector plate, and its area, and the radial diffusion coefficient of magnetic field lines near the X-point. It is found that the width of the stochastic layer near the X-point and the loss of poloidal flux from inside the ideal separatrix scale linearly with average shear. The area of magnetic footprints scales roughly linearly with average shear. Linear scaling of the area is quite good when the average shear is greater than or equal to 1.25. When the average shear is in the range 1.1-1.25, the area of the footprint fluctuates (as a function of average shear) and scales faster than linear scaling. Radial diffusion of field lines near the X-point increases very rapidly by about four orders of magnitude as average shear increases from about 1.15 to 1.5. For higher values of average shear, diffusion increases linearly, and comparatively very slowly. The very slow scaling of the radial diffusion of the field can flatten the plasma pressure gradient near the separatrix, and lead to the elimination of type-I edge localized modes.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Efimov, V.; Tkachenko, E.G.

    It is shown that the well-known correlation between the triton binding energy and the nd doublet scattering length (the so-called Phillips line), which is observed in calculations, can be explained by the smallness of the characteristic energies of the problem (the binding energies of the triton and deuteron) on the energy scale of nuclear forces. Equivalently, the Phillips line is a consequence of the diffuse structure of the triton and deuteron. These conclusions are obtained on the basis of qualitative consideration of the problem, calculation of the above correlation in the zeroth and linear approximations, and comparison of the calculated results with the Phillips line.

  17. Mixing of a passive scalar in isotropic and sheared homogeneous turbulence

    NASA Technical Reports Server (NTRS)

    Shirani, E.; Ferziger, J. H.; Reynolds, W. C.

    1981-01-01

    In order to calculate the velocity and scalar fields, the three dimensional, time-dependent equations of motion and the diffusion equation were solved numerically. The following cases were treated: isotropic, homogeneous turbulence with decay of a passive scalar; and homogeneous turbulent shear flow with a passive scalar whose mean varies linearly in the spanwise direction. The solutions were obtained at relatively low Reynolds numbers so that all of the turbulent scales could be resolved without modeling. Turbulent statistics such as integral length scales, Taylor microscales, Kolmogorov length scale, one- and two-point correlations of velocity-velocity and velocity-scalar, turbulent Prandtl/Schmidt number, r.m.s. values of velocities, the scalar quantity and pressure, skewness, decay rates, and decay exponents were calculated. The results are compared with the available experimental results, and good agreement is obtained.

  18. Short-Term Memory in Orthogonal Neural Networks

    NASA Astrophysics Data System (ADS)

    White, Olivia L.; Lee, Daniel D.; Sompolinsky, Haim

    2004-04-01

    We study the ability of linear recurrent networks obeying discrete time dynamics to store long temporal sequences that are retrievable from the instantaneous state of the network. We calculate this temporal memory capacity for both distributed shift register and random orthogonal connectivity matrices. We show that the memory capacity of these networks scales with system size.
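
    The kind of capacity statement made here can be probed numerically: drive a linear network x(t+1) = W x(t) + v s(t) with a random orthogonal W, then try to read the stored inputs back out of the final state. The network size and sequence lengths below are arbitrary.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    N = 100                                             # network size
    W, _ = np.linalg.qr(rng.standard_normal((N, N)))    # random orthogonal connectivity
    v = rng.standard_normal(N) / np.sqrt(N)             # input (feed-in) vector

    def recall_error(T):
        """Store a length-T scalar sequence, then recover it from the final state by least squares."""
        s = rng.standard_normal(T)
        x = np.zeros(N)
        for t in range(T):
            x = W @ x + v * s[t]
        # columns W^k v carry input s[T-1-k]; stack them and invert the linear map
        M = np.empty((N, T))
        col = v.copy()
        for k in range(T):
            M[:, k] = col
            col = W @ col
        s_hat = np.linalg.lstsq(M, x, rcond=None)[0][::-1]   # back to chronological order
        return np.linalg.norm(s_hat - s) / np.linalg.norm(s)

    for T in (50, 90, 100, 110, 150):
        print(f"sequence length {T:3d}: relative recall error = {recall_error(T):.3e}")
    ```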

  19. Atomic orbital-based SOS-MP2 with tensor hypercontraction. II. Local tensor hypercontraction

    NASA Astrophysics Data System (ADS)

    Song, Chenchen; Martínez, Todd J.

    2017-01-01

    In the first paper of the series [Paper I, C. Song and T. J. Martinez, J. Chem. Phys. 144, 174111 (2016)], we showed how tensor-hypercontracted (THC) SOS-MP2 could be accelerated by exploiting sparsity in the atomic orbitals and using graphical processing units (GPUs). This reduced the formal scaling of the SOS-MP2 energy calculation to cubic with respect to system size. The computational bottleneck then becomes the THC metric matrix inversion, which scales cubically with a large prefactor. In this work, the local THC approximation is proposed to reduce the computational cost of inverting the THC metric matrix to linear scaling with respect to molecular size. By doing so, we have removed the primary bottleneck to THC-SOS-MP2 calculations on large molecules with O(1000) atoms. The errors introduced by the local THC approximation are less than 0.6 kcal/mol for molecules with up to 200 atoms and 3300 basis functions. Together with the graphical processing unit techniques and locality-exploiting approaches introduced in previous work, the scaled opposite spin MP2 (SOS-MP2) calculations exhibit O(N2.5) scaling in practice up to 10 000 basis functions. The new algorithms make it feasible to carry out SOS-MP2 calculations on small proteins like ubiquitin (1231 atoms/10 294 atomic basis functions) on a single node in less than a day.

  20. Atomic orbital-based SOS-MP2 with tensor hypercontraction. II. Local tensor hypercontraction.

    PubMed

    Song, Chenchen; Martínez, Todd J

    2017-01-21

    In the first paper of the series [Paper I, C. Song and T. J. Martinez, J. Chem. Phys. 144, 174111 (2016)], we showed how tensor-hypercontracted (THC) SOS-MP2 could be accelerated by exploiting sparsity in the atomic orbitals and using graphical processing units (GPUs). This reduced the formal scaling of the SOS-MP2 energy calculation to cubic with respect to system size. The computational bottleneck then becomes the THC metric matrix inversion, which scales cubically with a large prefactor. In this work, the local THC approximation is proposed to reduce the computational cost of inverting the THC metric matrix to linear scaling with respect to molecular size. By doing so, we have removed the primary bottleneck to THC-SOS-MP2 calculations on large molecules with O(1000) atoms. The errors introduced by the local THC approximation are less than 0.6 kcal/mol for molecules with up to 200 atoms and 3300 basis functions. Together with the graphical processing unit techniques and locality-exploiting approaches introduced in previous work, the scaled opposite spin MP2 (SOS-MP2) calculations exhibit O(N^2.5) scaling in practice up to 10 000 basis functions. The new algorithms make it feasible to carry out SOS-MP2 calculations on small proteins like ubiquitin (1231 atoms/10 294 atomic basis functions) on a single node in less than a day.
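
    The following toy (written for this summary, not taken from the paper) illustrates why exploiting locality can turn a cubic-scaling matrix inversion into a linear-scaling one: if the metric matrix is approximated as block diagonal over local domains, each block is inverted independently at a cost of O(n_blocks * b^3), i.e. linear in system size for a fixed block dimension b, instead of O((n_blocks * b)^3) for the dense inverse. The actual local THC construction is more involved.

      import numpy as np
      from scipy.linalg import block_diag

      def invert_blockwise(blocks):
          # block-by-block inversion: cost grows linearly with the number of blocks
          return [np.linalg.inv(B) for B in blocks]

      rng = np.random.default_rng(0)
      b, n_blocks = 10, 50
      blocks = []
      for _ in range(n_blocks):
          A = rng.standard_normal((b, b))
          blocks.append(A @ A.T + b * np.eye(b))   # well-conditioned SPD block

      inv_blocks = invert_blockwise(blocks)
      # sanity check against the dense inverse of the assembled matrix
      M = block_diag(*blocks)
      print(np.allclose(np.linalg.inv(M), block_diag(*inv_blocks)))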

  1. Relativistic initial conditions for N-body simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fidler, Christian; Tram, Thomas; Crittenden, Robert

    2017-06-01

    Initial conditions for (Newtonian) cosmological N-body simulations are usually set by re-scaling the present-day power spectrum obtained from linear (relativistic) Boltzmann codes to the desired initial redshift of the simulation. This back-scaling method can account for the effect of inhomogeneous residual thermal radiation at early times, which is absent in the Newtonian simulations. We analyse this procedure from a fully relativistic perspective, employing the recently-proposed Newtonian motion gauge framework. We find that N-body simulations for ΛCDM cosmology starting from back-scaled initial conditions can be self-consistently embedded in a relativistic space-time with first-order metric potentials calculated using a linear Boltzmann code. This space-time coincides with a simple "N-body gauge" for z < 50 for all observable modes. Care must be taken, however, when simulating non-standard cosmologies. As an example, we analyse the back-scaling method in a cosmology with decaying dark matter, and show that metric perturbations become large at early times in the back-scaling approach, indicating a breakdown of the perturbative description. We suggest a suitable "forwards approach" for such cases.
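
    For orientation, a schematic version of the back-scaling step described above (ignoring the relativistic subtleties that are the subject of the paper): the z = 0 linear power spectrum is rescaled to the starting redshift by the squared ratio of linear growth factors. The power-law spectrum and matter-dominated growth factor below are crude stand-ins, not the paper's inputs.

      import numpy as np

      def back_scale(P0, k, z_ini, D):
          # P(k, z_ini) = [D(z_ini) / D(0)]**2 * P(k, 0)
          return (D(z_ini) / D(0.0)) ** 2 * P0(k)

      # stand-ins: a power-law spectrum and the matter-dominated growth D(z) = 1/(1+z)
      P0 = lambda k: k ** -2.0
      D_md = lambda z: 1.0 / (1.0 + z)

      k = np.logspace(-3, 1, 5)
      print(back_scale(P0, k, z_ini=49.0, D=D_md))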

  2. Parametric resonance in the early Universe—a fitting analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Figueroa, Daniel G.; Torrentí, Francisco, E-mail: daniel.figueroa@cern.ch, E-mail: f.torrenti@csic.es

    Particle production via parametric resonance in the early Universe is a non-perturbative, non-linear and out-of-equilibrium phenomenon. Although it is a well studied topic, whenever a new scenario exhibits parametric resonance, a full re-analysis is normally required. To avoid this tedious task, many works often present only a simplified linear treatment of the problem. To circumvent this in the future, we provide a fitting analysis of parametric resonance through all its relevant stages: initial linear growth, non-linear evolution, and relaxation towards equilibrium. Using lattice simulations in an expanding grid in 3+1 dimensions, we parametrize the dynamics' outcome scanning over the relevant ingredients: role of the oscillatory field, particle coupling strength, initial conditions, and background expansion rate. We emphasize the inaccuracy of the linear calculation of the decay time of the oscillatory field, and propose a more appropriate definition of this scale based on the subsequent non-linear dynamics. We provide simple fits to the relevant time scales and particle energy fractions at each stage. Our fits can be applied to post-inflationary preheating scenarios, where the oscillatory field is the inflaton, or to spectator-field scenarios, where the oscillatory field can be e.g. a curvaton, or the Standard Model Higgs.

  3. Redshift-space distortions with the halo occupation distribution - II. Analytic model

    NASA Astrophysics Data System (ADS)

    Tinker, Jeremy L.

    2007-01-01

    We present an analytic model for the galaxy two-point correlation function in redshift space. The cosmological parameters of the model are the matter density Ωm, power spectrum normalization σ8, and velocity bias of galaxies αv, circumventing the linear theory distortion parameter β and eliminating nuisance parameters for non-linearities. The model is constructed within the framework of the halo occupation distribution (HOD), which quantifies galaxy bias on linear and non-linear scales. We model one-halo pairwise velocities by assuming that satellite galaxy velocities follow a Gaussian distribution with dispersion proportional to the virial dispersion of the host halo. Two-halo velocity statistics are a combination of virial motions and host halo motions. The velocity distribution function (DF) of halo pairs is a complex function with skewness and kurtosis that vary substantially with scale. Using a series of collisionless N-body simulations, we demonstrate that the shape of the velocity DF is determined primarily by the distribution of local densities around a halo pair, and at fixed density the velocity DF is close to Gaussian and nearly independent of halo mass. We calibrate a model for the conditional probability function of densities around halo pairs on these simulations. With this model, the full shape of the halo velocity DF can be accurately calculated as a function of halo mass, radial separation, angle and cosmology. The HOD approach to redshift-space distortions utilizes clustering data from linear to non-linear scales to break the standard degeneracies inherent in previous models of redshift-space clustering. The parameters of the occupation function are well constrained by real-space clustering alone, separating constraints on bias and cosmology. We demonstrate the ability of the model to separately constrain Ωm, σ8 and αv in models that are constructed to have the same value of β at large scales as well as the same finger-of-god distortions at small scales.

  4. General relativistic corrections to the weak lensing convergence power spectrum

    NASA Astrophysics Data System (ADS)

    Giblin, John T.; Mertens, James B.; Starkman, Glenn D.; Zentner, Andrew R.

    2017-11-01

    We compute the weak lensing convergence power spectrum, C_ℓ^κκ, in a dust-filled universe using fully nonlinear general relativistic simulations. The spectrum is then compared to more standard, approximate calculations by computing the Bardeen (Newtonian) potentials in linearized gravity and partially utilizing the Born approximation. We find corrections to the angular power spectrum amplitude of order ten percent at very large angular scales, ℓ ~ 2-3, and percent-level corrections at intermediate angular scales of ℓ ~ 20-30.

  5. Communication: Standard surface hopping predicts incorrect scaling for Marcus' golden-rule rate: The decoherence problem cannot be ignored

    NASA Astrophysics Data System (ADS)

    Landry, Brian R.; Subotnik, Joseph E.

    2011-11-01

    We evaluate the accuracy of Tully's surface hopping algorithm for the spin-boson model for the case of a small diabatic coupling parameter (V). We calculate the transition rates between diabatic surfaces, and we compare our results to the expected Marcus rates. We show that standard surface hopping yields an incorrect scaling with diabatic coupling (linear in V), which we demonstrate is due to an incorrect treatment of decoherence. By modifying standard surface hopping to include decoherence events, we recover the correct scaling (~V^2).
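
    For reference, the golden-rule (Marcus) rate against which the surface hopping results are benchmarked, for diabatic coupling V, reorganization energy λ, and driving force ΔG, carries the quadratic dependence on the coupling that the standard algorithm fails to reproduce:

      k_{\mathrm{Marcus}} \;=\; \frac{2\pi}{\hbar}\,|V|^{2}\,
      \frac{1}{\sqrt{4\pi\lambda k_{B}T}}\,
      \exp\!\left[-\frac{(\Delta G+\lambda)^{2}}{4\lambda k_{B}T}\right]
      \;\propto\; V^{2}.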

  6. Stabilization of electron-scale turbulence by electron density gradient in national spherical torus experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ruiz Ruiz, J.; White, A. E.; Ren, Y.

    2015-12-15

    Theory and experiments have shown that electron temperature gradient (ETG) turbulence on the electron gyro-scale, k⊥ρe ≲ 1, can be responsible for anomalous electron thermal transport in NSTX. Electron scale (high-k) turbulence is diagnosed in NSTX with a high-k microwave scattering system [D. R. Smith et al., Rev. Sci. Instrum. 79, 123501 (2008)]. Here we report on stabilization effects of the electron density gradient on electron-scale density fluctuations in a set of neutral beam injection heated H-mode plasmas. We found that the absence of high-k density fluctuations from measurements is correlated with large equilibrium density gradient, which is shown to be consistent with linear stabilization of ETG modes due to the density gradient using the analytical ETG linear threshold in F. Jenko et al. [Phys. Plasmas 8, 4096 (2001)] and linear gyrokinetic simulations with GS2 [M. Kotschenreuther et al., Comput. Phys. Commun. 88, 128 (1995)]. We also found that the observed power of electron-scale turbulence (when it exists) is anti-correlated with the equilibrium density gradient, suggesting density gradient as a nonlinear stabilizing mechanism. Higher density gradients give rise to lower values of the plasma frame frequency, calculated based on the Doppler shift of the measured density fluctuations. Linear gyrokinetic simulations show that higher values of the electron density gradient reduce the value of the real frequency, in agreement with experimental observation. Nonlinear electron-scale gyrokinetic simulations show that high electron density gradient reduces electron heat flux and stiffness, and increases the ETG nonlinear threshold, consistent with experimental observations.

  7. Calculation of excitation energies from the CC2 linear response theory using Cholesky decomposition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baudin, Pablo, E-mail: baudin.pablo@gmail.com; qLEAP – Center for Theoretical Chemistry, Department of Chemistry, Aarhus University, Langelandsgade 140, DK-8000 Aarhus C; Marín, José Sánchez

    2014-03-14

    A new implementation of the approximate coupled cluster singles and doubles CC2 linear response model is reported. It employs a Cholesky decomposition of the two-electron integrals that significantly reduces the computational cost and the storage requirements of the method compared to standard implementations. Our algorithm also exploits a partitioning form of the CC2 equations which reduces the dimension of the problem and avoids the storage of doubles amplitudes. We present calculations of excitation energies of benzene using a hierarchy of basis sets and compare the results with conventional CC2 calculations. The reduction of the scaling is evaluated, as well as the effect of the Cholesky decomposition parameter on the quality of the results. The new algorithm is used to perform an extrapolation to complete basis set investigation on the spectroscopically interesting benzylallene conformers. A set of calculations on medium-sized molecules is carried out to check the dependence of the accuracy of the results on the decomposition thresholds. Moreover, CC2 singlet excitation energies of the free base porphin are also presented.
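
    A compact sketch of the kind of decomposition the method relies on (a generic thresholded, pivoted Cholesky factorization of a symmetric positive semidefinite matrix; the paper's implementation, choice of threshold, and integral handling are of course more elaborate):

      import numpy as np

      def pivoted_cholesky(M, tau=1e-6):
          # Returns L (n x m, m <= n) with M ≈ L @ L.T; the decomposition stops once
          # the largest remaining diagonal element falls below the threshold tau.
          n = M.shape[0]
          L = np.zeros((n, n))
          d = np.diag(M).astype(float).copy()
          m = 0
          while m < n and d.max() > tau:
              p = int(np.argmax(d))
              L[:, m] = (M[:, p] - L[:, :m] @ L[p, :m]) / np.sqrt(d[p])
              d -= L[:, m] ** 2
              m += 1
          return L[:, :m]

      # toy positive semidefinite matrix standing in for the two-electron integrals
      rng = np.random.default_rng(0)
      A = rng.standard_normal((30, 5))
      V = A @ A.T                       # rank 5, so only ~5 Cholesky vectors are needed
      L = pivoted_cholesky(V, tau=1e-10)
      print(L.shape, np.allclose(V, L @ L.T))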

  8. Laser pulsing in linear Compton scattering

    DOE PAGES

    Krafft, G. A.; Johnson, E.; Deitrick, K.; ...

    2016-12-16

    Previous work on calculating energy spectra from Compton scattering events has either neglected considering the pulsed structure of the incident laser beam, or has calculated these effects in an approximate way subject to criticism. In this paper, this problem has been reconsidered within a linear plane wave model for the incident laser beam. By performing the proper Lorentz transformation of the Klein-Nishina scattering cross section, a spectrum calculation can be created which allows the electron beam energy spread and emittance effects on the spectrum to be accurately calculated, essentially by summing over the emission of each individual electron. Such an approach has the obvious advantage that it is easily integrated with a particle distribution generated by particle tracking, allowing precise calculations of spectra for realistic particle distributions in collision. The method is used to predict the energy spectrum of radiation passing through an aperture for the proposed Old Dominion University inverse Compton source. In addition, as discussed in the body of the paper, many of the results allow easy scaling estimates to be made of the expected spectrum. A misconception in the literature on Compton scattering of circularly polarized beams is corrected and recorded.

  9. Quinone 1e– and 2e–/2H+ Reduction Potentials: Identification and Analysis of Deviations from Systematic Scaling Relationships

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huynh, Mioy T.; Anson, Colin W.; Cavell, Andrew C.

    Quinones participate in diverse electron transfer and proton-coupled electron transfer processes in chemistry and biology. An experimental study of common quinones reveals a non-linear correlation between the 1e– and 2e–/2H+ reduction potentials. This unexpected observation prompted a computational study of 128 different quinones, probing their 1e– reduction potentials, pKa values, and 2e–/2H+ reduction potentials. The density functional theory calculations reveal an approximately linear correlation between these three properties and an effective Hammett constant associated with the quinone substituent(s). However, deviations from this linear scaling relationship are evident for quinones that feature halogen substituents, charged substituents, intramolecular hydrogen bonding in the hydroquinone, and/or sterically bulky substituents. These results, particularly the different substituent effects on the 1e– versus 2e–/2H+ reduction potentials, have important implications for designing quinones with tailored redox properties.

  10. Energy Decomposition Analysis Based on Absolutely Localized Molecular Orbitals for Large-Scale Density Functional Theory Calculations in Drug Design.

    PubMed

    Phipps, M J S; Fox, T; Tautermann, C S; Skylaris, C-K

    2016-07-12

    We report the development and implementation of an energy decomposition analysis (EDA) scheme in the ONETEP linear-scaling electronic structure package. Our approach is hybrid as it combines the localized molecular orbital EDA (Su, P.; Li, H. J. Chem. Phys., 2009, 131, 014102) and the absolutely localized molecular orbital EDA (Khaliullin, R. Z.; et al. J. Phys. Chem. A, 2007, 111, 8753-8765) to partition the intermolecular interaction energy into chemically distinct components (electrostatic, exchange, correlation, Pauli repulsion, polarization, and charge transfer). Limitations shared in EDA approaches such as the issue of basis set dependence in polarization and charge transfer are discussed, and a remedy to this problem is proposed that exploits the strictly localized property of the ONETEP orbitals. Our method is validated on a range of complexes with interactions relevant to drug design. We demonstrate the capabilities for large-scale calculations with our approach on complexes of thrombin with an inhibitor comprised of up to 4975 atoms. Given the capability of ONETEP for large-scale calculations, such as on entire proteins, we expect that our EDA scheme can be applied in a large range of biomolecular problems, especially in the context of drug design.

  11. Hybrid MPI-OpenMP Parallelism in the ONETEP Linear-Scaling Electronic Structure Code: Application to the Delamination of Cellulose Nanofibrils.

    PubMed

    Wilkinson, Karl A; Hine, Nicholas D M; Skylaris, Chris-Kriton

    2014-11-11

    We present a hybrid MPI-OpenMP implementation of Linear-Scaling Density Functional Theory within the ONETEP code. We illustrate its performance on a range of high performance computing (HPC) platforms comprising shared-memory nodes with fast interconnect. Our work has focused on applying OpenMP parallelism to the routines which dominate the computational load, attempting where possible to parallelize different loops from those already parallelized within MPI. This includes 3D FFT box operations, sparse matrix algebra operations, calculation of integrals, and Ewald summation. While the underlying numerical methods are unchanged, these developments represent significant changes to the algorithms used within ONETEP to distribute the workload across CPU cores. The new hybrid code exhibits much-improved strong scaling relative to the MPI-only code and permits calculations with a much higher ratio of cores to atoms. These developments result in a significantly shorter time to solution than was possible using MPI alone and facilitate the application of the ONETEP code to systems larger than previously feasible. We illustrate this with benchmark calculations from an amyloid fibril trimer containing 41,907 atoms. We use the code to study the mechanism of delamination of cellulose nanofibrils when undergoing sonication, a process which is controlled by a large number of interactions that collectively determine the structural properties of the fibrils. Many energy evaluations were needed for these simulations, and as these systems comprise up to 21,276 atoms this would not have been feasible without the developments described here.

  12. CMB hemispherical asymmetry from non-linear isocurvature perturbations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Assadullahi, Hooshyar; Wands, David; Firouzjahi, Hassan

    2015-04-01

    We investigate whether non-adiabatic perturbations from inflation could produce an asymmetric distribution of temperature anisotropies on large angular scales in the cosmic microwave background (CMB). We use a generalised non-linear δN formalism to calculate the non-Gaussianity of the primordial density and isocurvature perturbations due to the presence of non-adiabatic, but approximately scale-invariant field fluctuations during multi-field inflation. This local-type non-Gaussianity leads to a correlation between very long wavelength inhomogeneities, larger than our observable horizon, and smaller scale fluctuations in the radiation and matter density. Matter isocurvature perturbations contribute primarily to low CMB multipoles and hence can lead to a hemispherical asymmetry on large angular scales, with negligible asymmetry on smaller scales. In curvaton models, where the matter isocurvature perturbation is partly correlated with the primordial density perturbation, we are unable to obtain a significant asymmetry on large angular scales while respecting current observational constraints on the observed quadrupole. However in the axion model, where the matter isocurvature and primordial density perturbations are uncorrelated, we find it may be possible to obtain a significant asymmetry due to isocurvature modes on large angular scales. Such an isocurvature origin for the hemispherical asymmetry would naturally give rise to a distinctive asymmetry in the CMB polarisation on large scales.

  13. A unified stochastic formulation of dissipative quantum dynamics. II. Beyond linear response of spin baths

    NASA Astrophysics Data System (ADS)

    Hsieh, Chang-Yu; Cao, Jianshu

    2018-01-01

    We use the "generalized hierarchical equation of motion" proposed in Paper I [C.-Y. Hsieh and J. Cao, J. Chem. Phys. 148, 014103 (2018)] to study decoherence in a system coupled to a spin bath. The present methodology allows a systematic incorporation of higher-order anharmonic effects of the bath in dynamical calculations. We investigate the leading order corrections to the linear response approximations for spin bath models. Two kinds of spin-based environments are considered: (1) a bath of spins discretized from a continuous spectral density and (2) a bath of localized nuclear or electron spins. The main difference resides in how the bath frequency and the system-bath coupling parameters are distributed in an environment. When discretized from a continuous spectral density, the system-bath coupling typically scales as ~1/√N_B, where N_B is the number of bath spins. This scaling suppresses the non-Gaussian characteristics of the spin bath and justifies the linear response approximations in the thermodynamic limit. For the nuclear/electron spin bath models, system-bath couplings are directly deduced from spin-spin interactions and do not necessarily obey the 1/√N_B scaling. It is not always possible to justify the linear response approximations in this case. Furthermore, if the spin-spin Hamiltonian is highly symmetrical, there exist additional constraints that generate highly non-Markovian and persistent dynamics that is beyond the linear response treatments.
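
    The 1/√N_B behaviour mentioned above can be seen in a minimal discretization sketch (one common convention for mapping a continuous spectral density onto N_B bath modes; the paper's generalized hierarchical equation of motion treatment is far more general):

      import numpy as np

      def discretize_bath(J, omega_max, n_b):
          # J(w) ≈ (pi/2) * sum_k c_k**2 * delta(w - w_k), with c_k = sqrt(2*J(w_k)*dw/pi);
          # for a fixed frequency window dw = omega_max/n_b, so c_k ~ 1/sqrt(n_b)
          dw = omega_max / n_b
          w = dw * (np.arange(n_b) + 0.5)
          return w, np.sqrt(2.0 * J(w) * dw / np.pi)

      ohmic = lambda w, alpha=0.1, wc=5.0: (np.pi / 2.0) * alpha * w * np.exp(-w / wc)

      for n_b in (10, 100, 1000):
          _, c = discretize_bath(ohmic, omega_max=20.0, n_b=n_b)
          print(n_b, c.max())    # the largest coupling drops roughly as 1/sqrt(n_b)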

  14. Exciton Absorption Spectra by Linear Response Methods: Application to Conjugated Polymers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mosquera, Martin A.; Jackson, Nicholas E.; Fauvell, Thomas J.

    The theoretical description of the time evolution of excitons requires, as an initial step, the calculation of their spectra, which has been inaccessible to most users due to the high computational scaling of conventional algorithms and accuracy issues caused by common density functionals. Previously (J. Chem. Phys. 2016, 144, 204105), we developed a simple method that resolves these issues. Our scheme is based on a two-step calculation in which a linear-response TDDFT calculation is used to generate orbitals perturbed by the excitonic state, and then a second linear-response TDDFT calculation is used to determine the spectrum of excitations relative to the excitonic state. Herein, we apply this theory to study near-infrared absorption spectra of excitons in oligomers of the ubiquitous conjugated polymers poly(3-hexylthiophene) (P3HT), poly(2-methoxy-5-(2-ethylhexyloxy)-1,4-phenylenevinylene) (MEH-PPV), and poly(benzodithiophene-thieno[3,4-b]thiophene) (PTB7). For P3HT and MEH-PPV oligomers, the calculated intense absorption bands converge at the longest wavelengths for 10 monomer units, and show strong consistency with experimental measurements. The calculations confirm that the exciton spectral features in MEH-PPV overlap with those of the bipolaron formation. In addition, our calculations identify the exciton absorption bands in transient absorption spectra measured by our group for oligomers (1, 2, and 3 units) of PTB7. For all of the cases studied, we report the dominant orbital excitations contributing to the optically active excited state-excited state transitions, and suggest a simple rule to identify absorption peaks at the longest wavelengths. We suggest our methodology could be considered for further developments in theoretical transient spectroscopy to include nonadiabatic effects, coherences, and to describe the formation of species such as charge-transfer states and polaron pairs.

  15. Surface Oscillations of a Free-Falling Droplet of an Ideal Fluid

    NASA Astrophysics Data System (ADS)

    Kistovich, A. V.; Chashechkin, Yu. D.

    2018-03-01

    According to observations, drops freely falling in the air under the action of gravity are deformed and oscillate over a wide range of frequencies and scales. A technique is proposed for calculating axisymmetric surface oscillations of a deformed droplet in the linear approximation, under the assumption that the amplitude and wavelength are small compared to the droplet diameter. The basic form of an axisymmetric droplet is chosen from observations. The calculation results for the surface oscillations agree with recorded data on the varying shape of water droplets falling in the air.

  16. On-field study of anaerobic digestion full-scale plants (part I): an on-field methodology to determine mass, carbon and nutrients balance.

    PubMed

    Schievano, Andrea; D'Imporzano, Giuliana; Salati, Silvia; Adani, Fabrizio

    2011-09-01

    The mass balance (input/output mass flows) of full-scale anaerobic digestion (AD) processes should be known for a series of purposes, e.g. to understand carbon and nutrients balances, to evaluate the contribution of AD processes to elemental cycles, especially when digestates are applied to agricultural land, and to measure the biodegradation yields and the process efficiency. In this paper, three alternative methods were studied to determine the mass balance in full-scale processes, discussing their reliability and applicability. Through a 1-year survey on three full-scale AD plants and through 38 laboratory-scale batch digesters, the congruency of the considered methods was demonstrated and a linear equation was provided that allows calculating the wet weight losses (WL) from the methane produced (MP) by the plant (WL = 41.949*MP + 20.853, R^2 = 0.950, p < 0.01). Additionally, this new tool was used to calculate carbon, nitrogen, phosphorous and potassium balances of the three observed AD plants. Copyright © 2011 Elsevier Ltd. All rights reserved.
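
    Applying the reported regression is straightforward; the sketch below simply evaluates it for a hypothetical methane production figure (the abstract does not state the units in which the fit was performed, so they are left implicit):

      def wet_weight_loss(methane_produced):
          # WL = 41.949 * MP + 20.853  (R^2 = 0.950, p < 0.01), as reported above
          return 41.949 * methane_produced + 20.853

      print(wet_weight_loss(10.0))   # hypothetical MP value in the units of the fit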

  17. Slowly-rotating neutron stars in massive bigravity

    NASA Astrophysics Data System (ADS)

    Sullivan, A.; Yunes, N.

    2018-02-01

    We study slowly-rotating neutron stars in ghost-free massive bigravity. This theory modifies general relativity by introducing a second, auxiliary but dynamical tensor field that couples to the physical metric tensor through non-linear interactions. We expand the field equations to linear order in slow rotation and numerically construct solutions in the interior and exterior of the star with a set of realistic equations of state. We calculate the physical mass function with respect to observer radius and find that, unlike in general relativity, this function does not remain constant outside the star; rather, it asymptotes to a constant a distance away from the surface, whose magnitude is controlled by the ratio of gravitational constants. The Vainshtein-like radius at which the physical and auxiliary mass functions asymptote to a constant is controlled by the graviton mass scaling parameter, and outside this radius, bigravity modifications are suppressed. We also calculate the frame-dragging metric function and find that bigravity modifications are typically small in the entire range of coupling parameters explored. We finally calculate both the mass-radius and the moment of inertia-mass relations for a wide range of coupling parameters and find that both the graviton mass scaling parameter and the ratio of the gravitational constants introduce large modifications to both. These results could be used to place future constraints on bigravity with electromagnetic and gravitational-wave observations of isolated and binary neutron stars.

  18. FT-IR, UV-vis, 1H and 13C NMR spectra and the equilibrium structure of organic dye molecule disperse red 1 acrylate: a combined experimental and theoretical analysis.

    PubMed

    Cinar, Mehmet; Coruh, Ali; Karabacak, Mehmet

    2011-12-01

    This study reports the characterization of the disperse red 1 acrylate compound by spectral techniques and quantum chemical calculations. The spectroscopic properties were analyzed by FT-IR, UV-vis, (1)H NMR and (13)C NMR techniques. The FT-IR spectrum in the solid state was recorded in the region 4000-400 cm(-1). The UV-vis absorption spectrum of the compound dissolved in methanol was recorded in the range of 200-800 nm. The (1)H and (13)C NMR spectra were recorded in CDCl(3) solution. The structural and spectroscopic data of the molecule in the ground state were calculated using density functional theory (DFT) employing the B3LYP exchange-correlation functional and the 6-311++G(d,p) basis set. The vibrational wavenumbers were calculated, and the scaled values were compared with the experimental FT-IR spectrum. A satisfactory consistency between the experimental and theoretical spectra was obtained, showing that the hybrid DFT method is very useful in predicting an accurate vibrational structure, especially in the high-frequency region. The complete assignments were performed on the basis of the experimental results and the total energy distribution (TED) of the vibrational modes, calculated with the scaled quantum mechanics (SQM) method. Isotropic chemical shifts were calculated using the gauge-invariant atomic orbital (GIAO) method. A study of the electronic properties was performed by time-dependent DFT (TD-DFT) and the CIS(D) approach. To investigate the non-linear optical properties, the electric dipole moment μ, polarizability α, anisotropy of polarizability Δα and molecular first hyperpolarizability β were computed. The linear polarizabilities and first hyperpolarizabilities of the studied molecule indicate that the compound can be a good candidate for nonlinear optical materials. Copyright © 2011 Elsevier B.V. All rights reserved.

  19. Local unitary transformation method for large-scale two-component relativistic calculations: case for a one-electron Dirac Hamiltonian.

    PubMed

    Seino, Junji; Nakai, Hiromi

    2012-06-28

    An accurate and efficient scheme for two-component relativistic calculations at the spin-free infinite-order Douglas-Kroll-Hess (IODKH) level is presented. The present scheme, termed local unitary transformation (LUT), is based on the locality of the relativistic effect. Numerical assessments of the LUT scheme were performed on diatomic molecules such as HX and X2 (X = F, Cl, Br, I, and At) and hydrogen halide clusters, (HX)n (X = F, Cl, Br, and I). Total energies obtained by the LUT method agree well with conventional IODKH results. The computational costs of the LUT method are drastically lower than those of conventional methods, since the former scales linearly with respect to the system size and has a small prefactor.

  20. Is the Jeffreys' scale a reliable tool for Bayesian model comparison in cosmology?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nesseris, Savvas; García-Bellido, Juan, E-mail: savvas.nesseris@uam.es, E-mail: juan.garciabellido@uam.es

    2013-08-01

    We are entering an era where progress in cosmology is driven by data, and alternative models will have to be compared and ruled out according to some consistent criterion. The most conservative and widely used approach is Bayesian model comparison. In this paper we explicitly calculate the Bayes factors for all models that are linear with respect to their parameters. We do this in order to test the so-called Jeffreys' scale and determine analytically how accurate its predictions are in a simple case where we fully understand and can calculate everything analytically. We also discuss the case of nested models, e.g. one with M1 parameters and another with M2 ⊃ M1 parameters, and we derive analytic expressions for both the Bayes factor and the Figure of Merit, defined as the inverse area of the model parameter's confidence contours. With all this machinery and the use of an explicit example we demonstrate that the threshold nature of Jeffreys' scale is not a "one size fits all" reliable tool for model comparison and that it may lead to biased conclusions. Furthermore, we discuss the importance of choosing the right basis in the context of models that are linear with respect to their parameters and how that basis affects the parameter estimation and the derived constraints.

  1. Efficient parallel linear scaling construction of the density matrix for Born-Oppenheimer molecular dynamics.

    PubMed

    Mniszewski, S M; Cawkwell, M J; Wall, M E; Mohd-Yusof, J; Bock, N; Germann, T C; Niklasson, A M N

    2015-10-13

    We present an algorithm for the calculation of the density matrix that for insulators scales linearly with system size and parallelizes efficiently on multicore, shared memory platforms with small and controllable numerical errors. The algorithm is based on an implementation of the second-order spectral projection (SP2) algorithm [Niklasson, A. M. N. Phys. Rev. B 2002, 66, 155115] in sparse matrix algebra with the ELLPACK-R data format. We illustrate the performance of the algorithm within self-consistent tight binding theory by total energy calculations of gas phase poly(ethylene) molecules and periodic liquid water systems containing up to 15,000 atoms on up to 16 CPU cores. We consider algorithm-specific performance aspects, such as local vs nonlocal memory access and the degree of matrix sparsity. Comparisons to sparse matrix algebra implementations using off-the-shelf libraries on multicore CPUs, graphics processing units (GPUs), and the Intel many integrated core (MIC) architecture are also presented. The accuracy and stability of the algorithm are illustrated with long duration Born-Oppenheimer molecular dynamics simulations of 1000 water molecules and a 303 atom Trp cage protein solvated by 2682 water molecules.
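
    A dense-matrix sketch of the second-order spectral projection (SP2) recursion on which the algorithm is built; the linear scaling reported above comes from carrying out the same recursion in thresholded sparse (ELLPACK-R) matrix algebra, which this toy does not attempt. The gapped test Hamiltonian is illustrative.

      import numpy as np

      def sp2_density_matrix(H, n_occ, eps_min, eps_max, tol=1e-10, max_iter=100):
          # Build the density matrix D as an iterated polynomial of H, choosing
          # X**2 or 2X - X**2 at each step so that trace(D) converges to n_occ.
          n = H.shape[0]
          X = (eps_max * np.eye(n) - H) / (eps_max - eps_min)   # occupied states mapped near 1
          for _ in range(max_iter):
              X2 = X @ X
              if abs(np.trace(X2) - n_occ) < abs(np.trace(2 * X - X2) - n_occ):
                  X = X2
              else:
                  X = 2 * X - X2
              if abs(np.trace(X) - n_occ) < tol:
                  break
          return X

      # toy gapped "Hamiltonian" with 10 occupied states below the gap
      rng = np.random.default_rng(0)
      Q, _ = np.linalg.qr(rng.standard_normal((20, 20)))
      eigs = np.concatenate([rng.uniform(-10.0, -2.0, 10), rng.uniform(2.0, 10.0, 10)])
      H = Q @ np.diag(eigs) @ Q.T
      D = sp2_density_matrix(H, n_occ=10, eps_min=eigs.min(), eps_max=eigs.max())
      print(round(np.trace(D), 6), np.allclose(D @ D, D, atol=1e-6))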

  2. Time-sliced perturbation theory for large scale structure I: general formalism

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blas, Diego; Garny, Mathias; Sibiryakov, Sergey

    2016-07-01

    We present a new analytic approach to describe large scale structure formation in the mildly non-linear regime. The central object of the method is the time-dependent probability distribution function generating correlators of the cosmological observables at a given moment of time. Expanding the distribution function around the Gaussian weight we formulate a perturbative technique to calculate non-linear corrections to cosmological correlators, similar to the diagrammatic expansion in a three-dimensional Euclidean quantum field theory, with time playing the role of an external parameter. For the physically relevant case of cold dark matter in an Einstein-de Sitter universe, the time evolution of the distribution function can be found exactly and is encapsulated by a time-dependent coupling constant controlling the perturbative expansion. We show that all building blocks of the expansion are free from spurious infrared enhanced contributions that plague the standard cosmological perturbation theory. This paves the way towards the systematic resummation of infrared effects in large scale structure formation. We also argue that the approach proposed here provides a natural framework to account for the influence of short-scale dynamics on larger scales along the lines of effective field theory.

  3. Calibration sets and the accuracy of vibrational scaling factors: A case study with the X3LYP hybrid functional

    NASA Astrophysics Data System (ADS)

    Teixeira, Filipe; Melo, André; Cordeiro, M. Natália D. S.

    2010-09-01

    A linear least-squares methodology was used to determine the vibrational scaling factors for the X3LYP density functional. Uncertainties for these scaling factors were calculated according to the method devised by Irikura et al. [J. Phys. Chem. A 109, 8430 (2005)]. The calibration set was systematically partitioned according to several of its descriptors and the scaling factors for X3LYP were recalculated for each subset. The results show that the scaling factors are only significant up to the second digit, irrespective of the calibration set used. Furthermore, multivariate statistical analysis allowed us to conclude that the scaling factors and the associated uncertainties are independent of the size of the calibration set and strongly suggest the practical impossibility of obtaining vibrational scaling factors with more than two significant digits.

  4. Calibration sets and the accuracy of vibrational scaling factors: a case study with the X3LYP hybrid functional.

    PubMed

    Teixeira, Filipe; Melo, André; Cordeiro, M Natália D S

    2010-09-21

    A linear least-squares methodology was used to determine the vibrational scaling factors for the X3LYP density functional. Uncertainties for these scaling factors were calculated according to the method devised by Irikura et al. [J. Phys. Chem. A 109, 8430 (2005)]. The calibration set was systematically partitioned according to several of its descriptors and the scaling factors for X3LYP were recalculated for each subset. The results show that the scaling factors are only significant up to the second digit, irrespective of the calibration set used. Furthermore, multivariate statistical analysis allowed us to conclude that the scaling factors and the associated uncertainties are independent of the size of the calibration set and strongly suggest the practical impossibility of obtaining vibrational scaling factors with more than two significant digits.
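
    A minimal version of the least-squares step described above (the uncertainty estimate shown is a simplified stand-in for the prescription of Irikura et al. used in the paper; the frequencies are hypothetical):

      import numpy as np

      def scaling_factor(omega_calc, nu_expt):
          # least-squares factor minimizing sum_i (c*omega_i - nu_i)**2:
          #   c = sum_i omega_i*nu_i / sum_i omega_i**2
          omega = np.asarray(omega_calc, dtype=float)
          nu = np.asarray(nu_expt, dtype=float)
          c = np.sum(omega * nu) / np.sum(omega ** 2)
          u = np.sqrt(np.sum((nu - c * omega) ** 2) / np.sum(omega ** 2))  # simplified uncertainty
          return c, u

      # hypothetical harmonic wavenumbers (cm^-1) and experimental fundamentals
      omega = [3150.0, 1650.0, 1200.0, 950.0]
      nu = [3010.0, 1595.0, 1170.0, 930.0]
      print(scaling_factor(omega, nu))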

  5. Numerical prediction of turbulent flame stability in premixed/prevaporized (HSCT) combustors

    NASA Technical Reports Server (NTRS)

    Winowich, Nicholas S.

    1990-01-01

    A numerical analysis of combustion instabilities that induce flashback in a lean, premixed, prevaporized dump combustor is performed. KIVA-II, a finite volume CFD code for the modeling of transient, multidimensional, chemically reactive flows, serves as the principal analytical tool. The experiment of Proctor and T'ien is used as a reference for developing the computational model. An experimentally derived combustion instability mechanism is presented on the basis of the observations of Proctor and T'ien and other investigators of instabilities in low speed (M less than 0.1) dump combustors. The analysis comprises two independent procedures that begin from a calculated stable flame: the first is a linear increase of the equivalence ratio and the second is a linear decrease of the inflow velocity. The objective is to observe changes in the aerothermochemical features of the flow field prior to flashback. It was found that only the linear increase of the equivalence ratio elicits a calculated flashback result. Though this result did not exhibit large scale coherent vortices in the turbulent shear layer coincident with a flame flickering mode, as was observed experimentally, there were interesting acoustic effects which were resolved quite well in the calculation. A discussion of the k-ε turbulence model used by KIVA-II is prompted by the absence of combustion instabilities in the model as the inflow velocity is linearly decreased. Finally, recommendations are made for further numerical analysis that may improve correlation with experimentally observed combustion instabilities.

  6. Electron correlations in L-subshell photoionization of intermediate-Z elements (47<=Z<=51)

    NASA Astrophysics Data System (ADS)

    Jitschin, W.; Stötzel, R.

    1998-08-01

    The x-ray mass attenuation of 48Cd, 49In, 50Sn, and 51Sb in the energy regime of the L-subshell edges has been measured. For a comparison of the data of neighboring elements, these were scaled to 47Ag. The scaled data were compared with theoretical calculations of photoionization cross sections by Scofield, which use the common single electron approach. The comparison reveals minor but significant deviations between measurement and calculation: The measured cross sections are smaller than the prediction in the regime between the L3 and L2 edges, they have a flatter slope in the regime between the L2 and L1 edges, and they exhibit a decrease just above the L3 and L2 edges. All observed deviations can be explained as electron correlation effects originating from a polarization of the whole electron cloud by the ionizing radiation, since they are qualitatively reproduced by comparative calculations of the ionization process either omitting (independent particle approach) or including (in the linear response approximation) the electron correlations. However, the comparative calculations quantitatively overestimate the electron correlation effects.

  7. Repopulation Kinetics and the Linear-Quadratic Model

    NASA Astrophysics Data System (ADS)

    O'Rourke, S. F. C.; McAneney, H.; Starrett, C.; O'Sullivan, J. M.

    2009-08-01

    The standard Linear-Quadratic (LQ) survival model for radiotherapy is used to investigate different schedules of radiation treatment planning for advanced head and neck cancer. We explore how these treatment protocols may be affected by different tumour repopulation kinetics between treatments. The laws for tumour cell repopulation include the logistic and Gompertz models, and this extends the work of Wheldon et al. [1], which was concerned with the case of exponential repopulation between treatments. Treatment schedules investigated include standardized and accelerated fractionation. Calculations based on the present work show that, even with growth laws scaled to ensure that the repopulation kinetics for advanced head and neck cancer are comparable, considerable variation in the survival fraction, up to orders of magnitude, emerged. Calculations show that application of the Gompertz model results in a significantly poorer prognosis for tumour eradication. Gaps in treatment also highlight the differences in the LQ model with the effect of repopulation kinetics included.
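
    A bare-bones sketch of the kind of calculation described above: LQ cell kill per fraction combined with a repopulation law applied in the gap between fractions. The parameter values are purely illustrative, not those used in the paper.

      import numpy as np

      def cells_after_treatment(n_frac, d, alpha, beta, t_gap, regrow, n0=1e9):
          # LQ surviving fraction per fraction of dose d: exp(-(alpha*d + beta*d**2)),
          # followed by repopulation regrow(N, t) during the inter-fraction gap t_gap
          sf = np.exp(-(alpha * d + beta * d ** 2))
          N = n0
          for _ in range(n_frac):
              N = regrow(N * sf, t_gap)
          return N

      # two illustrative repopulation laws between fractions
      exp_regrow = lambda N, t, lam=0.05: N * np.exp(lam * t)
      gompertz_regrow = lambda N, t, b=0.05, K=1e10: K * (N / K) ** np.exp(-b * t)

      # hypothetical parameters: alpha = 0.3 /Gy, alpha/beta = 10 Gy, 35 x 2 Gy fractions, 1-day gaps
      print("exponential:", cells_after_treatment(35, 2.0, 0.3, 0.03, 1.0, exp_regrow))
      print("Gompertz:   ", cells_after_treatment(35, 2.0, 0.3, 0.03, 1.0, gompertz_regrow))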

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, Liang; Abild-Pedersen, Frank

    On the basis of an extensive set of density functional theory calculations, it is shown that a simple scheme provides a fundamental understanding of variations in the transition state energies and structures of reaction intermediates on transition metal surfaces across the periodic table. The scheme is built on the bond order conservation principle and requires a limited set of input data, still achieving transition state energies as a function of simple descriptors with an error smaller than those of approaches based on linear fits to a set of calculated transition state energies. Here, we have applied this approach together with linear scaling of adsorption energies to obtain the energetics of the NH3 decomposition reaction on a series of stepped fcc(211) transition metal surfaces. Moreover, this information is used to establish a microkinetic model for the formation of N2 and H2, thus providing insight into the components of the reaction that determine the activity.
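
    For context, the conventional baseline that the scheme above is benchmarked against combines a Brønsted-Evans-Polanyi (BEP) type linear fit for transition-state energies with linear scaling of adsorption energies in a single descriptor. The slopes, intercepts, and descriptor value below are hypothetical.

      def adsorption_energy(dE_descriptor, gamma, xi):
          # linear scaling relation: E_ads(X*) ≈ gamma * E_ads(descriptor*) + xi
          return gamma * dE_descriptor + xi

      def transition_state_energy(dE_rxn, a, b):
          # BEP-type relation: E_TS ≈ a * dE_rxn + b
          return a * dE_rxn + b

      dE_N = -0.5                                         # hypothetical N* adsorption energy (eV)
      dE_NH = adsorption_energy(dE_N, gamma=0.4, xi=0.3)  # hypothetical NH* from linear scaling
      print(transition_state_energy(dE_NH - dE_N, a=0.9, b=1.1))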

  9. DL_MG: A Parallel Multigrid Poisson and Poisson-Boltzmann Solver for Electronic Structure Calculations in Vacuum and Solution.

    PubMed

    Womack, James C; Anton, Lucian; Dziedzic, Jacek; Hasnip, Phil J; Probert, Matt I J; Skylaris, Chris-Kriton

    2018-03-13

    The solution of the Poisson equation is a crucial step in electronic structure calculations, yielding the electrostatic potential, a key component of the quantum mechanical Hamiltonian. In recent decades, theoretical advances and increases in computer performance have made it possible to simulate the electronic structure of extended systems in complex environments. This requires the solution of more complicated variants of the Poisson equation, featuring nonhomogeneous dielectric permittivities, ionic concentrations with nonlinear dependencies, and diverse boundary conditions. The analytic solutions generally used to solve the Poisson equation in vacuum (or with homogeneous permittivity) are not applicable in these circumstances, and numerical methods must be used. In this work, we present DL_MG, a flexible, scalable, and accurate solver library, developed specifically to tackle the challenges of solving the Poisson equation in modern large-scale electronic structure calculations on parallel computers. Our solver is based on the multigrid approach and uses an iterative high-order defect correction method to improve the accuracy of solutions. Using two chemically relevant model systems, we tested the accuracy and computational performance of DL_MG when solving the generalized Poisson and Poisson-Boltzmann equations, demonstrating excellent agreement with analytic solutions and efficient scaling to ~10^9 unknowns and 100s of CPU cores. We also applied DL_MG in actual large-scale electronic structure calculations, using the ONETEP linear-scaling electronic structure package to study a 2615 atom protein-ligand complex with routinely available computational resources. In these calculations, the overall execution time with DL_MG was not significantly greater than the time required for calculations using a conventional FFT-based solver.

  10. Extending the length and time scales of Gram-Schmidt Lyapunov vector computations

    NASA Astrophysics Data System (ADS)

    Costa, Anthony B.; Green, Jason R.

    2013-08-01

    Lyapunov vectors have found growing interest recently due to their ability to characterize systems out of thermodynamic equilibrium. The computation of orthogonal Gram-Schmidt vectors requires multiplication and QR decomposition of large matrices, which grow as N^2 (with the particle count N). This expense has limited such calculations to relatively small systems and short time scales. Here, we detail two implementations of an algorithm for computing Gram-Schmidt vectors. The first is a distributed-memory message-passing method using Scalapack. The second uses the newly-released MAGMA library for GPUs. We compare the performance of both codes for Lennard-Jones fluids from N=100 to 1300 between Intel Nehalem/Infiniband DDR and NVIDIA C2050 architectures. To our best knowledge, these are the largest systems for which the Gram-Schmidt Lyapunov vectors have been computed, and the first time their calculation has been GPU-accelerated. We conclude that Lyapunov vector calculations can be significantly extended in length and time by leveraging the power of GPU-accelerated linear algebra.
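
    The core of such a calculation is short; below is a minimal (serial, NumPy-only) version of the repeated QR re-orthonormalization that the Scalapack and MAGMA implementations accelerate, applied to the two-dimensional Hénon map rather than a Lennard-Jones fluid.

      import numpy as np

      def lyapunov_spectrum_qr(jacobian, step, x0, n_steps):
          # evolve a frame of tangent vectors and re-orthonormalize it with QR;
          # the averaged logs of the diagonal of R give the Lyapunov exponents
          x = np.array(x0, dtype=float)
          Q = np.eye(len(x))
          sums = np.zeros(len(x))
          for _ in range(n_steps):
              J = jacobian(x)              # Jacobian at the current point
              x = step(x)                  # advance the trajectory
              Q, R = np.linalg.qr(J @ Q)   # Gram-Schmidt / QR re-orthonormalization
              sums += np.log(np.abs(np.diag(R)))
          return sums / n_steps

      # Hénon map; its exponents are approximately (+0.42, -1.62) per iteration
      a, b = 1.4, 0.3
      step = lambda x: np.array([1.0 - a * x[0] ** 2 + x[1], b * x[0]])
      jac = lambda x: np.array([[-2.0 * a * x[0], 1.0], [b, 0.0]])
      print(lyapunov_spectrum_qr(jac, step, [0.1, 0.1], n_steps=20000))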

  11. Parallel scalability of Hartree-Fock calculations

    NASA Astrophysics Data System (ADS)

    Chow, Edmond; Liu, Xing; Smelyanskiy, Mikhail; Hammond, Jeff R.

    2015-03-01

    Quantum chemistry is increasingly performed using large cluster computers consisting of multiple interconnected nodes. For a fixed molecular problem, the efficiency of a calculation usually decreases as more nodes are used, due to the cost of communication between the nodes. This paper empirically investigates the parallel scalability of Hartree-Fock calculations. The construction of the Fock matrix and the density matrix calculation are analyzed separately. For the former, we use a parallelization of Fock matrix construction based on a static partitioning of work followed by a work stealing phase. For the latter, we use density matrix purification from the linear scaling methods literature, but without using sparsity. When using large numbers of nodes for moderately sized problems, density matrix computations are network-bandwidth bound, making purification methods potentially faster than eigendecomposition methods.

  12. LSMS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eisenbach, Markus; Li, Ying Wai; Liu, Xianglin

    2017-12-01

    LSMS is a first principles, Density Functional theory based, electronic structure code targeted mainly at materials applications. LSMS calculates the local spin density approximation to the diagonal part of the electron Green's function. The electron/spin density and energy are easily determined once the Green's function is known. Linear scaling with system size is achieved in the LSMS by using several unique properties of the real space multiple scattering approach to the Green's function.

  13. Effect of lensing non-Gaussianity on the CMB power spectra

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lewis, Antony; Pratten, Geraint, E-mail: antony@cosmologist.info, E-mail: geraint.pratten@gmail.com

    2016-12-01

    Observed CMB anisotropies are lensed, and the lensed power spectra can be calculated accurately assuming the lensing deflections are Gaussian. However, the lensing deflections are actually slightly non-Gaussian due to both non-linear large-scale structure growth and post-Born corrections. We calculate the leading correction to the lensed CMB power spectra from the non-Gaussianity, which is determined by the lensing bispectrum. Assuming no primordial non-Gaussianity, the lowest-order result gives ~0.3% corrections to the BB and EE polarization spectra on small scales. However we show that the effect on EE is reduced by about a factor of two by higher-order Gaussian lensing smoothing, rendering the total effect safely negligible for the foreseeable future. We give a simple analytic model for the signal expected from skewness of the large-scale lensing field; the effect is similar to a net demagnification and hence a small change in acoustic scale (and therefore out of phase with the dominant lensing smoothing that predominantly affects the peaks and troughs of the power spectrum).

  14. Scaling properties of ballistic nano-transistors

    PubMed Central

    2011-01-01

    Recently, we have suggested a scale-invariant model for a nano-transistor. In agreement with experiments, a close-to-linear threshold trace was found in the calculated ID-VD traces separating the regimes of classically allowed transport and tunneling transport. In this conference contribution, the relevant physical quantities in our model and its range of applicability are discussed in more detail. Extending the temperature range of our studies, it is shown that a close-to-linear threshold trace results at room temperature as well. In qualitative agreement with the experiments, the ID-VG traces for small drain voltages show thermally activated transport below the threshold gate voltage. In contrast, at large drain voltages the gate-voltage dependence is weaker. As can be expected in our relatively simple model, the theoretical drain current is larger than the experimental one by a little less than a decade. PMID:21711899

  15. Beam-driven acceleration in ultra-dense plasma media

    DOE PAGES

    Shin, Young-Min

    2014-09-15

    Accelerating parameters of beam-driven wakefield acceleration in an extremely dense plasma column have been analyzed with the dynamic framed particle-in-cell plasma simulator, and compared with analytic calculations. In the model, a witness beam undergoes a TeV/m scale alternating potential gradient excited by a micro-bunched drive beam in a 10^25 m^-3 and 1.6 x 10^28 m^-3 plasma column. The acceleration gradient, energy gain, and transformer ratio have been extensively studied in the quasi-linear, linear, and blowout regimes. The simulation analysis indicated that in the beam-driven acceleration system a hollow plasma channel offers a 20% higher acceleration gradient by enlarging the channel radius (r) from 0.2 Ap to 0.6 Ap in a blowout regime. This paper suggests the feasibility of TeV/m scale acceleration with a hollow crystalline structure (e.g. nanotubes) of high electron plasma density.

  16. Average receiving scaling of the weighted polygon Koch networks with the weight-dependent walk

    NASA Astrophysics Data System (ADS)

    Ye, Dandan; Dai, Meifeng; Sun, Yanqiu; Shao, Shuxiang; Xie, Qi

    2016-09-01

    Based on the weighted Koch networks and the self-similarity of fractals, we present a family of weighted polygon Koch networks with a weight factor r (0 < r ≤ 1). We study the average receiving time (ART) for the weight-dependent walk (i.e., the walker moves to any of its neighbors with probability proportional to the weight of the edge linking them), whose key step is to calculate the sum of mean first-passage times (MFPTs) for all nodes absorbed at a hub node. We use a recursive division method to divide the weighted polygon Koch networks in order to calculate the ART scaling more conveniently. We show that the ART scaling exhibits a sublinear or linear dependence on network order. Thus, the weighted polygon Koch networks are more efficient than extended Koch networks in receiving information. Finally, compared with previous studies' results (i.e., Koch networks, weighted Koch networks), we find that our models are more general.

  17. Density scaling of phantom materials for a 3D dose verification system.

    PubMed

    Tani, Kensuke; Fujita, Yukio; Wakita, Akihisa; Miyasaka, Ryohei; Uehara, Ryuzo; Kodama, Takumi; Suzuki, Yuya; Aikawa, Ako; Mizuno, Norifumi; Kawamori, Jiro; Saitoh, Hidetoshi

    2018-05-21

    In this study, the optimum density scaling factors of phantom materials for a commercially available three-dimensional (3D) dose verification system (Delta4) were investigated in order to improve the accuracy of the calculated dose distributions in the phantom materials. At field sizes of 10 × 10 and 5 × 5 cm^2 with the same geometry, tissue-phantom ratios (TPRs) in water, polymethyl methacrylate (PMMA), and Plastic Water Diagnostic Therapy (PWDT) were measured, and TPRs in water with various density scaling factors were calculated by Monte Carlo simulation, Adaptive Convolve (AdC, Pinnacle3), Collapsed Cone Convolution (CCC, RayStation), and AcurosXB (AXB, Eclipse). Effective linear attenuation coefficients (μ_eff) were obtained from the TPRs. The ratios of μ_eff in phantom and water ((μ_eff)_pl,water) were compared between the measurements and calculations. For each phantom material, the density scaling factor proposed in this study (DSF) was set to be the value providing a match between the calculated and measured (μ_eff)_pl,water. The optimum density scaling factor was verified through comparison of the dose distributions measured by Delta4 and calculated with three different density scaling factors: the nominal physical density (PD), the nominal relative electron density (ED), and the DSF. Three plans were used for the verifications: a static field of 10 × 10 cm^2 and two intensity modulated radiation therapy (IMRT) treatment plans. The DSF was determined to be 1.13 for PMMA and 0.98 for PWDT. The DSF for PMMA showed good agreement for AdC and CCC with 6 MV x-rays, and for AdC with 10 MV x-rays. The DSF for PWDT showed good agreement regardless of the dose calculation algorithm and x-ray energy. The DSF can be considered one of the references for the density scaling factor of Delta4 phantom materials and may help improve the accuracy of IMRT dose verification using Delta4. © 2018 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
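
    The abstract does not spell out the numerical details, but the logic can be sketched as follows under a simple exponential-attenuation assumption: an effective attenuation coefficient is extracted from TPRs measured at two depths, and the density scaling factor is the value for which the calculated phantom-to-water ratio of μ_eff reproduces the measured one. All numbers below are made up for illustration.

      import numpy as np

      def mu_eff_from_tpr(tpr_d1, tpr_d2, d1, d2):
          # exponential-attenuation approximation: mu_eff = ln(TPR(d1)/TPR(d2)) / (d2 - d1)
          return np.log(tpr_d1 / tpr_d2) / (d2 - d1)

      def pick_dsf(measured_ratio, scaling_grid, calculated_ratios):
          # choose the water density scaling whose calculated mu_eff ratio best
          # matches the measured phantom-to-water ratio
          i = int(np.argmin(np.abs(np.asarray(calculated_ratios) - measured_ratio)))
          return scaling_grid[i]

      mu_pmma = mu_eff_from_tpr(0.80, 0.62, 10.0, 20.0)     # hypothetical TPRs, depths in cm
      mu_water = mu_eff_from_tpr(0.82, 0.66, 10.0, 20.0)
      scalings = np.linspace(0.90, 1.20, 31)
      calc_ratios = scalings / 1.0                          # toy stand-in for the calculated curve
      print(pick_dsf(mu_pmma / mu_water, scalings, calc_ratios))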

  18. The accuracy of the Gaussian-and-finite-element-Coulomb (GFC) method for the calculation of Coulomb integrals.

    PubMed

    Przybytek, Michal; Helgaker, Trygve

    2013-08-07

    We analyze the accuracy of the Coulomb energy calculated using the Gaussian-and-finite-element-Coulomb (GFC) method. In this approach, the electrostatic potential associated with the molecular electronic density is obtained by solving the Poisson equation and then used to calculate matrix elements of the Coulomb operator. The molecular electrostatic potential is expanded in a mixed Gaussian-finite-element (GF) basis set consisting of Gaussian functions of s symmetry centered on the nuclei (with exponents obtained from a full optimization of the atomic potentials generated by the atomic densities from symmetry-averaged restricted open-shell Hartree-Fock theory) and shape functions defined on uniform finite elements. The quality of the GF basis is controlled by means of a small set of parameters; for a given width of the finite elements d, the highest accuracy is achieved at the smallest computational cost when tricubic (n = 3) elements are used in combination with two (γ(H) = 2) and eight (γ(1st) = 8) Gaussians on hydrogen and first-row atoms, respectively, with exponents greater than a given threshold (αmin(G) = 0.5). The error in the calculated Coulomb energy divided by the number of atoms in the system depends on the system type but is independent of the system size or the orbital basis set, vanishing approximately like d^4 with decreasing d. If the boundary conditions for the Poisson equation are calculated in an approximate way, the GFC method may lose its variational character when the finite elements are too small; with larger elements, it is less sensitive to inaccuracies in the boundary values. As it is possible to obtain accurate boundary conditions in linear time, the overall scaling of the GFC method for large systems is governed by another computational step, namely, the generation of the three-center overlap integrals with three Gaussian orbitals. The most unfavorable (nearly quadratic) scaling is observed for compact, truly three-dimensional systems; however, this scaling can be reduced to linear by introducing more effective techniques for recognizing significant three-center overlap distributions.

  19. Linear static structural and vibration analysis on high-performance computers

    NASA Technical Reports Server (NTRS)

    Baddourah, M. A.; Storaasli, O. O.; Bostic, S. W.

    1993-01-01

    Parallel computers offer the opportunity to significantly reduce the computation time necessary to analyze large-scale aerospace structures. This paper presents algorithms developed for and implemented on massively parallel computers, hereafter referred to as Scalable High-Performance Computers (SHPC), for the most computationally intensive tasks involved in structural analysis, namely, generation and assembly of system matrices, solution of systems of equations, and calculation of the eigenvalues and eigenvectors. Results on SHPC are presented for large-scale structural problems (i.e., models for the High-Speed Civil Transport). The goal of this research is to develop a new, efficient technique which extends structural analysis to SHPC and makes large-scale structural analyses tractable.

  20. Scale-Invariant Forms of Conservation Equations in Reactive Fields and a Modified Hydro-Thermo-Diffusive Theory of Laminar Flames

    NASA Technical Reports Server (NTRS)

    Sohrab, Siavash H.; Piltch, Nancy (Technical Monitor)

    2000-01-01

    A scale-invariant model of statistical mechanics is applied to present invariant forms of mass, energy, linear, and angular momentum conservation equations in reactive fields. The resulting conservation equations at molecular-dynamic scale are solved by the method of large activation energy asymptotics to describe the hydro-thermo-diffusive structure of laminar premixed flames. The predicted temperature and velocity profiles are in agreement with the observations. Also, with realistic physico-chemical properties and chemical-kinetic parameters for a single-step overall combustion of stoichiometric methane-air premixed flame, the laminar flame propagation velocity of 42.1 cm/s is calculated in agreement with the experimental value.

  1. Macroweather Predictions and Climate Projections using Scaling and Historical Observations

    NASA Astrophysics Data System (ADS)

    Hébert, R.; Lovejoy, S.; Del Rio Amador, L.

    2017-12-01

    There are two fundamental time scales that are pertinent to decadal forecasts and multidecadal projections. The first is the lifetime of planetary scale structures, about 10 days (equal to the deterministic predictability limit), and the second is - in the anthropocene - the scale at which the forced anthropogenic variability exceeds the internal variability (around 16 - 18 years). These two time scales define three regimes of variability: weather, macroweather and climate, which are respectively characterized by increasing, decreasing and then increasing variability with scale. We discuss how macroweather temperature variability can be skilfully predicted to its theoretical stochastic predictability limits by exploiting its long-range memory with the Stochastic Seasonal and Interannual Prediction System (StocSIPS). At multi-decadal timescales, the temperature response to forcing is approximately linear and this can be exploited to make projections with a Green's function, or Climate Response Function (CRF). To make the problem tractable, we exploit the temporal scaling symmetry and restrict our attention to global mean forcing and temperature response using a scaling CRF characterized by the scaling exponent H and an inner scale of linearity τ. An aerosol linear scaling factor α and a non-linear volcanic damping exponent ν were introduced to account for the large uncertainty in these forcings. We estimate the model and forcing parameters by Bayesian inference using historical data, and these allow us to analytically calculate a median (and likely 66% range) for the transient climate response and for the equilibrium climate sensitivity: 1.6 K ([1.5, 1.8] K) and 2.4 K ([1.9, 3.4] K), respectively. Aerosol forcing typically has large uncertainty and we find a modern (2005) forcing very likely range (90%) of [-1.0, -0.3] W m⁻² with median at -0.7 W m⁻². Projecting to 2100, we find that to keep the warming below 1.5 K, future emissions must undergo cuts similar to Representative Concentration Pathway (RCP) 2.6, for which the probability of remaining under 1.5 K is 48%. RCP 4.5 and RCP 8.5-like futures overshoot with very high probability. This underscores that over the next century, the state of the environment will be strongly influenced by past, present and future economic policies.

  2. Concise calculation of the scaling function, exponents, and probability functional of the Edwards-Wilkinson equation with correlated noise

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, Y.; Pang, N.; Halpin-Healy, T.

    1994-12-01

    The linear Langevin equation proposed by Edwards and Wilkinson [Proc. R. Soc. London A 381, 17 (1982)] is solved in closed form for noise of arbitrary space and time correlation. Furthermore, the temporal development of the full probability functional describing the height fluctuations is derived exactly, exhibiting an interesting evolution between two distinct Gaussian forms. We determine explicitly the dynamic scaling function for the interfacial width for any given initial condition, isolate the early-time behavior, and discover an invariance that was unsuspected in this problem of arbitrary spatiotemporal noise.
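    For reference, the linear Langevin (Edwards-Wilkinson) equation solved here, written with a generic space-time correlated noise term (conventions and prefactors may differ from the paper), is

      \frac{\partial h(\mathbf{x},t)}{\partial t}
        \;=\; \nu \nabla^{2} h(\mathbf{x},t) + \eta(\mathbf{x},t),
      \qquad
      \langle \eta(\mathbf{x},t)\,\eta(\mathbf{x}',t') \rangle
        \;=\; 2D\, R\!\left(|\mathbf{x}-\mathbf{x}'|,\,|t-t'|\right),

    where R is the prescribed noise correlator; the delta-correlated case recovers the usual uncorrelated Edwards-Wilkinson equation.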

  3. Natural Covariant Planck Scale Cutoffs and the Cosmic Microwave Background Spectrum.

    PubMed

    Chatwin-Davies, Aidan; Kempf, Achim; Martin, Robert T W

    2017-07-21

    We calculate the impact of quantum gravity-motivated ultraviolet cutoffs on inflationary predictions for the cosmic microwave background spectrum. We model the ultraviolet cutoffs fully covariantly to avoid possible artifacts of covariance breaking. Imposing these covariant cutoffs results in the production of small, characteristically k-dependent oscillations in the spectrum. The size of the effect scales linearly with the ratio of the Planck to Hubble lengths during inflation. Consequently, the relative size of the effect could be as large as one part in 10⁵; i.e., eventual observability may not be ruled out.

  4. A program for calculating photonic band structures, Green's functions and transmission/reflection coefficients using a non-orthogonal FDTD method

    NASA Astrophysics Data System (ADS)

    Ward, A. J.; Pendry, J. B.

    2000-06-01

    In this paper we present an updated version of our ONYX program for calculating photonic band structures using a non-orthogonal finite difference time domain method. This new version employs the same transparent formalism as the first version with the same capabilities for calculating photonic band structures or causal Green's functions but also includes extra subroutines for the calculation of transmission and reflection coefficients. Both the electric and magnetic fields are placed onto a discrete lattice by approximating the spatial and temporal derivatives with finite differences. This results in discrete versions of Maxwell's equations which can be used to integrate the fields forwards in time. The time required for a calculation using this method scales linearly with the number of real space points used in the discretization so the technique is ideally suited to handling systems with large and complicated unit cells.
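    For orientation only, a minimal one-dimensional finite difference time domain update on an orthogonal grid (the textbook Yee-type leapfrog, in normalized units) shows how discretized Maxwell equations are integrated forwards in time; it is not the non-orthogonal formulation used in ONYX, and all parameters are illustrative.

      import numpy as np

      # Minimal 1D FDTD leapfrog sketch in normalized units (Courant number 1).
      # Schematic only; not the non-orthogonal ONYX formulation.
      nx, nsteps = 200, 300
      ez = np.zeros(nx)        # electric field on integer grid points
      hy = np.zeros(nx - 1)    # magnetic field, staggered half a cell

      for n in range(nsteps):
          hy += ez[1:] - ez[:-1]                                # update H from curl of E
          ez[1:-1] += hy[1:] - hy[:-1]                          # update E from curl of H
          ez[nx // 2] += np.exp(-0.5 * ((n - 40) / 10.0) ** 2)  # soft Gaussian source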

  5. Application of the Extended Completeness Relation to the Absorbing Boundary Condition

    NASA Astrophysics Data System (ADS)

    Iwasaki, Masataka; Otani, Reiji; Ito, Makoto

    The strength function of the linear response by the external field is calculated in the formalism of the absorbing boundary condition (ABC). The dipole excitation of a schematic two-body system is treated in the present study. The extended completeness relation, which is assumed on the analogy of the formulation in the complex scaling method (CSM), is applied to the calculation of the strength function. The calculation of the strength function is successful in the present formalism and hence, the extended completeness relation seems to work well in the ABC formalism. The contributions from the resonance and the non-resonant continuum are also analyzed according to the decomposition of the energy levels in the extended completeness relation.

  6. Elastic and viscoelastic mechanical properties of brain tissues on the implanting trajectory of sub-thalamic nucleus stimulation.

    PubMed

    Li, Yan; Deng, Jianxin; Zhou, Jun; Li, Xueen

    2016-11-01

    Corresponding to pre-puncture and post-puncture insertion, elastic and viscoelastic mechanical properties of brain tissues on the implanting trajectory of sub-thalamic nucleus stimulation are investigated, respectively. Elastic mechanical properties in pre-puncture are investigated through pre-puncture needle insertion experiments using whole porcine brains. A linear polynomial and a second order polynomial are fitted to the average insertion force in pre-puncture. The Young's modulus in pre-puncture is calculated from the slopes of the two fittings. Viscoelastic mechanical properties of brain tissues in post-puncture insertion are investigated through indentation stress relaxation tests for six regions of interest along a planned trajectory. A linear viscoelastic model with a Prony series approximation is fitted to the average load trace of each region using the Boltzmann hereditary integral. Shear relaxation moduli of each region are calculated using the parameters of the Prony series approximation. The results show that, in pre-puncture insertion, needle force increases almost linearly with needle displacement. Both fitted curves reproduce the average insertion force closely. The Young's moduli calculated from the slopes of the two fittings are reliable for modelling the linear and nonlinear instantaneous elastic responses of brain tissues, respectively. In post-puncture insertion, both region and time significantly affect the viscoelastic behaviors. The six tested regions can be classified into three categories of stiffness. Shear relaxation moduli decay dramatically on short time scales but equilibrium is never truly achieved. The regional and temporal viscoelastic mechanical properties in post-puncture insertion are valuable for guiding probe insertion into each region on the implanting trajectory.
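    A Prony-series shear relaxation modulus of the kind fitted here, together with the Boltzmann hereditary integral it enters, has the generic form below; the number of terms and the parameter values are schematic, not those reported in the study.

      G(t) \;=\; G_{\infty} \;+\; \sum_{i=1}^{N} G_{i}\, e^{-t/\tau_{i}},
      \qquad
      \sigma(t) \;=\; \int_{0}^{t} G(t-s)\,\dot{\gamma}(s)\,\mathrm{d}s ,

    where G_i and τ_i are the Prony coefficients and relaxation times and G_∞ is the equilibrium modulus.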

  7. Scale-free network provides an optimal pattern for knowledge transfer

    NASA Astrophysics Data System (ADS)

    Lin, Min; Li, Nan

    2010-02-01

    We study numerically the knowledge innovation and diffusion process on four representative network models: regular networks, small-world networks, random networks and scale-free networks. The average knowledge stock level as a function of time is measured and the corresponding growth diffusion time τ is defined and computed. On the four types of networks, the growth diffusion times all depend linearly on the network size N as τ∼N, while the slope for the scale-free network is minimal, indicating the fastest growth and diffusion of knowledge. The calculated variance and spatial distribution of knowledge stock illustrate that optimal knowledge transfer performance is obtained on scale-free networks. We also investigate the transient pattern of knowledge diffusion on the four networks, and a qualitative explanation of this finding is proposed.

  8. Higher-order finite-difference formulation of periodic Orbital-free Density Functional Theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghosh, Swarnava; Suryanarayana, Phanish, E-mail: phanish.suryanarayana@ce.gatech.edu

    2016-02-15

    We present a real-space formulation and higher-order finite-difference implementation of periodic Orbital-free Density Functional Theory (OF-DFT). Specifically, utilizing a local reformulation of the electrostatic and kernel terms, we develop a generalized framework for performing OF-DFT simulations with different variants of the electronic kinetic energy. In particular, we propose a self-consistent field (SCF) type fixed-point method for calculations involving linear-response kinetic energy functionals. In this framework, evaluation of both the electronic ground-state and forces on the nuclei are amenable to computations that scale linearly with the number of atoms. We develop a parallel implementation of this formulation using the finite-difference discretization. We demonstrate that higher-order finite-differences can achieve relatively large convergence rates with respect to mesh-size in both the energies and forces. Additionally, we establish that the fixed-point iteration converges rapidly, and that it can be further accelerated using extrapolation techniques like Anderson's mixing. We validate the accuracy of the results by comparing the energies and forces with plane-wave methods for selected examples, including the vacancy formation energy in Aluminum. Overall, the suitability of the proposed formulation for scalable high performance computing makes it an attractive choice for large-scale OF-DFT calculations consisting of thousands of atoms.
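    The SCF-type fixed-point iteration with Anderson mixing mentioned above can be sketched in generic form. The routine below is standard Anderson acceleration for an abstract fixed-point map x = g(x), not the OF-DFT residual or the parallel code of the paper; the history depth m and the toy usage are arbitrary choices.

      import numpy as np

      def anderson_fixed_point(g, x0, m=5, tol=1e-10, maxit=200):
          # Generic Anderson-accelerated fixed-point iteration for x = g(x).
          # Schematic sketch only; depth-m history with a least-squares update.
          x = np.asarray(x0, dtype=float)
          X, Gx = [], []                              # histories of iterates and g(iterates)
          for k in range(maxit):
              gx = g(x)
              X.append(x.copy()); Gx.append(gx.copy())
              if len(X) > m + 1:
                  X.pop(0); Gx.pop(0)
              F = np.array([gi - xi for xi, gi in zip(X, Gx)])   # residual history
              if np.linalg.norm(F[-1]) < tol:
                  return gx, k
              if len(X) == 1:
                  x = gx                              # plain Picard step to build history
                  continue
              dF = (F[1:] - F[:-1]).T                 # residual differences
              dG = (np.array(Gx[1:]) - np.array(Gx[:-1])).T
              gamma, *_ = np.linalg.lstsq(dF, F[-1], rcond=None)
              x = gx - dG @ gamma                     # Anderson update
          return x, maxit

      # toy usage: solve x = cos(x) componentwise
      x_star, iters = anderson_fixed_point(np.cos, np.ones(4))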

  9. Calculating Soil Wetness, Evapotranspiration and Carbon Cycle Processes Over Large Grid Areas Using a New Scaling Technique

    NASA Technical Reports Server (NTRS)

    Sellers, Piers

    2012-01-01

    Soil wetness typically shows great spatial variability over the length scales of general circulation model (GCM) grid areas (approx. 100 km), and the functions relating evapotranspiration and photosynthetic rate to local-scale (approx. 1 m) soil wetness are highly non-linear. Soil respiration is also highly dependent on very small-scale variations in soil wetness. We therefore expect significant inaccuracies whenever we insert a single grid area-average soil wetness value into a function to calculate any of these rates for the grid area. For the particular case of evapotranspiration, this method - use of a grid-averaged soil wetness value - can also provoke severe oscillations in the evapotranspiration rate and soil wetness under some conditions. A method is presented whereby the probability distribution function (pdf) for soil wetness within a grid area is represented by binning, and numerical integration of the binned pdf is performed to provide a spatially-integrated wetness stress term for the whole grid area, which then permits calculation of grid area fluxes in a single operation. The method is very accurate when 10 or more bins are used, can deal realistically with spatially variable precipitation, conserves moisture exactly and allows for precise modification of the soil wetness pdf after every time step. The method could also be applied to other ecological problems where small-scale processes must be area-integrated, or upscaled, to estimate fluxes over large areas, for example in treatments of the terrestrial carbon budget or trace gas generation.
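    The binning idea can be illustrated with a toy non-linear stress function: integrating the function over a binned wetness pdf gives a different (and more faithful) grid-average flux than applying the function to the grid-mean wetness. The stress function and the local wetness distribution below are placeholders, not the ones used in the study.

      import numpy as np

      def stress(w):
          # generic non-linear wetness stress function (placeholder form)
          return np.clip((w - 0.1) / (0.5 - 0.1), 0.0, 1.0) ** 2

      rng = np.random.default_rng(0)
      w_local = rng.beta(2.0, 4.0, size=100_000)       # local-scale wetness within one grid cell

      # (a) naive: apply the non-linear function to the grid-mean wetness
      flux_naive = stress(w_local.mean())

      # (b) binned pdf: integrate the function over (here) 10 wetness bins
      counts, edges = np.histogram(w_local, bins=10, range=(0.0, 1.0))
      centers = 0.5 * (edges[:-1] + edges[1:])
      pdf = counts / counts.sum()
      flux_binned = np.sum(pdf * stress(centers))

      print(flux_naive, flux_binned)                   # the two estimates generally differ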

  10. Local Fitting of the Kohn-Sham Density in a Gaussian and Plane Waves Scheme for Large-Scale Density Functional Theory Simulations.

    PubMed

    Golze, Dorothea; Iannuzzi, Marcella; Hutter, Jürg

    2017-05-09

    A local resolution-of-the-identity (LRI) approach is introduced in combination with the Gaussian and plane waves (GPW) scheme to enable large-scale Kohn-Sham density functional theory calculations. In GPW, the computational bottleneck is typically the description of the total charge density on real-space grids. Introducing the LRI approximation, the linear scaling of the GPW approach with respect to system size is retained, while the prefactor for the grid operations is reduced. The density fitting is an O(N) scaling process implemented by approximating the atomic pair densities by an expansion in one-center fit functions. The computational cost for the grid-based operations becomes negligible in LRIGPW. The self-consistent field iteration is up to 30 times faster for periodic systems, depending on the symmetry of the simulation cell and on the density of grid points. However, due to the overhead introduced by the local density fitting, single point calculations and complete molecular dynamics steps, including the calculation of the forces, are effectively accelerated by up to a factor of ∼10. The accuracy of LRIGPW is assessed for different systems and properties, showing that total energies, reaction energies, and intramolecular and intermolecular structure parameters are well reproduced. LRIGPW also yields high quality results for extended condensed phase systems such as liquid water, ice XV, and molecular crystals.

  11. Simulated quantum computation of molecular energies.

    PubMed

    Aspuru-Guzik, Alán; Dutoi, Anthony D; Love, Peter J; Head-Gordon, Martin

    2005-09-09

    The calculation time for the energy of atoms and molecules scales exponentially with system size on a classical computer but polynomially using quantum algorithms. We demonstrate that such algorithms can be applied to problems of chemical interest using modest numbers of quantum bits. Calculations of the water and lithium hydride molecular ground-state energies have been carried out on a quantum computer simulator using a recursive phase-estimation algorithm. The recursive algorithm reduces the number of quantum bits required for the readout register from about 20 to 4. Mappings of the molecular wave function to the quantum bits are described. An adiabatic method for the preparation of a good approximate ground-state wave function is described and demonstrated for a stretched hydrogen molecule. The number of quantum bits required scales linearly with the number of basis functions, and the number of gates required grows polynomially with the number of quantum bits.

  12. Self-Consistent Field Theories for the Role of Large Length-Scale Architecture in Polymers

    NASA Astrophysics Data System (ADS)

    Wu, David

    At large length-scales, the architecture of polymers can be described by a coarse-grained specification of the distribution of branch points and monomer types within a molecule. This includes molecular topology (e.g., cyclic or branched) as well as distances between branch points or chain ends. Design of large length-scale molecular architecture is appealing because it offers a universal strategy, independent of monomer chemistry, to tune properties. Non-linear analogs of linear chains differ in molecular-scale properties, such as mobility, entanglements, and surface segregation in blends that are well-known to impact rheological, dynamical, thermodynamic and surface properties including adhesion and wetting. We have used Self-Consistent Field (SCF) theories to describe a number of phenomena associated with large length-scale polymer architecture. We have predicted the surface composition profiles of non-linear chains in blends with linear chains. These predictions are in good agreement with experimental results, including from neutron scattering, on a range of well-controlled branched (star, pom-pom and end-branched) and cyclic polymer architectures. Moreover, the theory allows explanation of the segregation and conformations of branched polymers in terms of effective surface potentials acting on the end and branch groups. However, for cyclic chains, which have no end or junction points, a qualitatively different topological mechanism based on conformational entropy drives cyclic chains to a surface, consistent with recent neutron reflectivity experiments. We have also used SCF theory to calculate intramolecular and intermolecular correlations for polymer chains in the bulk, dilute solution, and trapped at a liquid-liquid interface. Predictions of chain swelling in dilute star polymer solutions compare favorably with existing PRISM theory and swelling at an interface helps explain recent measurements of chain mobility at an oil-water interface. In collaboration with: Renfeng Hu, Colorado School of Mines, and Mark Foster, University of Akron. This work was supported by NSF Grants No. CBET- 0730692 and No. CBET-0731319.

  13. Parallel computation of fluid-structural interactions using high resolution upwind schemes

    NASA Astrophysics Data System (ADS)

    Hu, Zongjun

    An efficient and accurate solver is developed to simulate the non-linear fluid-structural interactions in turbomachinery flutter flows. A new low-diffusion E-CUSP scheme, the Zha CUSP scheme, is developed to improve the efficiency and accuracy of the inviscid flux computation. The 3D unsteady Navier-Stokes equations with the Baldwin-Lomax turbulence model are solved using the finite volume method with the dual-time stepping scheme. The linearized equations are solved with Gauss-Seidel line iterations. The parallel computation is implemented using the MPI protocol. The solver is validated with 2D cases for its turbulence modeling, parallel computation and unsteady calculation. The Zha CUSP scheme is validated with 2D cases, including a supersonic flat plate boundary layer, a transonic converging-diverging nozzle and a transonic inlet diffuser. The Zha CUSP2 scheme is tested with 3D cases, including a circular-to-rectangular nozzle, a subsonic compressor cascade and a transonic channel. The Zha CUSP schemes are proved to be accurate, robust and efficient in these tests. The steady and unsteady separation flows in a 3D stationary cascade under high incidence and three inlet Mach numbers are calculated to study the steady state separation flow patterns and their unsteady oscillation characteristics. The leading edge vortex shedding is the mechanism behind the unsteady characteristics of the high incidence separated flows. The separation flow characteristics are affected by the inlet Mach number. The blade aeroelasticity of a linear cascade with forced oscillating blades is studied using parallel computation. A simplified two-passage cascade with periodic boundary conditions is first calculated under a medium frequency and a low incidence. The full scale cascade with 9 blades and two end walls is then studied more extensively under three oscillation frequencies and two incidence angles. The end wall influence and the blade stability are studied and compared under different frequencies and incidence angles. This is the first time the Zha CUSP schemes have been applied to moving grid systems in 2D and 3D calculations, the first time the implicit Gauss-Seidel iteration with dual time stepping has been used for moving grid systems, and the first time the NASA flutter cascade has been calculated in full scale.

  14. A Scaling Model for the Anthropocene Climate Variability with Projections to 2100

    NASA Astrophysics Data System (ADS)

    Hébert, Raphael; Lovejoy, Shaun

    2017-04-01

    The determination of the climate sensitivity to radiative forcing is a fundamental climate science problem with important policy implications. We use a scaling model, with a limited set of parameters, which can directly calculate the forced globally-averaged surface air temperature response to anthropogenic and natural forcings. At timescales larger than an inner scale τ, which we determine as the ocean-atmosphere coupling scale at around 2 years, the global system responds approximately linearly, so that the variability may be decomposed into additive forced and internal components. The Ruelle response theory extends the classical linear response theory for small perturbations to systems far from equilibrium. Our model thus relates radiative forcings to a forced temperature response by convolution with a suitable Green's function, or climate response function. Motivated by scaling symmetries which allow for long range dependence, we assume a general scaling form, a scaling climate response function (SCRF), which is able to produce a wide range of responses: a power law truncated at τ. This allows us to analytically calculate the climate sensitivity at different time scales, yielding a one-to-one relation from the transient climate response to the equilibrium climate sensitivity, which are estimated, respectively, as 1.6 (+0.3/-0.2) K and 2.4 (+1.3/-0.6) K at the 90% confidence level. The model parameters are estimated within a Bayesian framework, with a fractional Gaussian noise error model as the internal variability, from forcing series, instrumental surface temperature datasets and CMIP5 GCM Representative Concentration Pathways (RCP) scenario runs. This observation based model is robust, and projections for the coming century are made following the RCP scenarios 2.6, 4.5 and 8.5, yielding in the year 2100, respectively: 1.5 (+0.3/-0.2) K, 2.3 ± 0.4 K and 4.0 ± 0.6 K at the 90% confidence level. For comparison, the associated projections from a CMIP5 multi-model ensemble (MME) of 32 models are: 1.7 ± 0.8 K, 2.6 ± 0.8 K and 4.8 ± 1.3 K. Therefore, our projection uncertainty is less than half the structural uncertainty of this CMIP5 MME.
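    Schematically (and only as a reading aid, since the paper's exact normalization and truncation are not reproduced here), the forced response in such a scaling model is a convolution of the forcing F with a power-law climate response function of exponent H truncated at the inner scale τ:

      T_{\mathrm{forced}}(t) \;=\; \int_{-\infty}^{t} G(t-s)\, F(s)\,\mathrm{d}s,
      \qquad
      G(\Delta t) \;\propto\; \left( \frac{\Delta t}{\tau} \right)^{H-1}
      \quad \text{for } \Delta t \gtrsim \tau ,

    with G regularized (truncated) below the inner scale τ so that the response remains finite.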

  15. Large scale EMF in current sheets induced by tearing modes

    NASA Astrophysics Data System (ADS)

    Mizerski, Krzysztof A.

    2018-02-01

    An extension of the analysis of resistive instabilities of a sheet pinch from a famous work by Furth et al (1963 Phys. Fluids 6 459) is presented here, to study the mean electromotive force (EMF) generated by the developing instability. In a Cartesian configuration and in the presence of a current sheet, first the boundary layer technique is used to obtain global, matched asymptotic solutions for the velocity and magnetic field, and then the solutions are used to calculate the large-scale EMF in the system. It is reported that in the bulk the curl of the mean EMF is linear in j₀·B₀, a simple pseudo-scalar quantity constructed from the large-scale quantities.

  16. A non-linear theory of the parallel firehose and gyrothermal instabilities in a weakly collisional plasma

    NASA Astrophysics Data System (ADS)

    Rosin, M. S.; Schekochihin, A. A.; Rincon, F.; Cowley, S. C.

    2011-05-01

    Weakly collisional magnetized cosmic plasmas have a dynamical tendency to develop pressure anisotropies with respect to the local direction of the magnetic field. These anisotropies trigger plasma instabilities at scales just above the ion Larmor radius ρi and much below the mean free path λmfp. They have growth rates of a fraction of the ion cyclotron frequency, which is much faster than either the global dynamics or even local turbulence. Despite their microscopic nature, these instabilities dramatically modify the transport properties and, therefore, the macroscopic dynamics of the plasma. The non-linear evolution of these instabilities is expected to drive pressure anisotropies towards marginal stability values, controlled by the plasma beta βi. Here this non-linear evolution is worked out in an ab initio kinetic calculation for the simplest analytically tractable example - the parallel (k⊥ = 0) firehose instability in a high-beta plasma. An asymptotic theory is constructed, based on a particular physical ordering and leading to a closed non-linear equation for the firehose turbulence. In the non-linear regime, both the analytical theory and the numerical solution predict secular (∝ t) growth of magnetic fluctuations. The fluctuations develop a k∥^-3 spectrum, extending from scales somewhat larger than ρi to the maximum scale that grows secularly with time (∝ t^1/2); the relative pressure anisotropy (p⊥-p∥)/p∥ tends to the marginal value -2/βi. The marginal state is achieved via changes in the magnetic field, not particle scattering. When a parallel ion heat flux is present, the parallel firehose mutates into the new gyrothermal instability (GTI), which continues to exist up to firehose-stable values of pressure anisotropy, which can be positive and are limited by the magnitude of the ion heat flux. The non-linear evolution of the GTI also features secular growth of magnetic fluctuations, but the fluctuation spectrum is eventually dominated by modes around a maximal scale ∼ ρi lT/λmfp, where lT is the scale of the parallel temperature variation. Implications for momentum and heat transport are speculated about. This study is motivated by our interest in the dynamics of galaxy cluster plasmas (which are used as the main astrophysical example), but its relevance to solar wind and accretion flow plasmas is also briefly discussed.

  17. Iterative-method performance evaluation for multiple vectors associated with a large-scale sparse matrix

    NASA Astrophysics Data System (ADS)

    Imamura, Seigo; Ono, Kenji; Yokokawa, Mitsuo

    2016-07-01

    Ensemble computing, which is an instance of capacity computing, is an effective computing scenario for exascale parallel supercomputers. In ensemble computing, there are multiple linear systems associated with a common coefficient matrix. We improve the performance of iterative solvers for multiple vectors by solving them at the same time, that is, by solving for the product of the matrices. We implemented several iterative methods and compared their performance. The maximum performance on Sparc VIIIfx was 7.6 times higher than that of a naïve implementation. Finally, to deal with the different convergence processes of linear systems, we introduced a control method to eliminate the calculation of already converged vectors.
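    The bookkeeping behind solving many right-hand sides with a common coefficient matrix, and dropping vectors from the iteration once they converge, can be sketched as follows. Plain Jacobi iteration is used only to keep the sketch short; the iterative methods actually benchmarked in the paper are not reproduced, and all names are illustrative.

      import numpy as np

      def block_jacobi(A, B, tol=1e-8, maxit=500):
          # Solve A X = B for many right-hand sides (columns of B) at once,
          # removing already-converged columns from subsequent iterations.
          D = np.diag(A)
          X = np.zeros_like(B, dtype=float)
          active = np.arange(B.shape[1])                # indices of unconverged columns
          for _ in range(maxit):
              R = B[:, active] - A @ X[:, active]       # residuals of active columns only
              done = np.linalg.norm(R, axis=0) < tol * np.linalg.norm(B[:, active], axis=0)
              X[:, active] += R / D[:, None]            # Jacobi update
              active = active[~done]                    # drop converged columns
              if active.size == 0:
                  break
          return X

      # toy usage: diagonally dominant matrix, 4 right-hand sides
      rng = np.random.default_rng(0)
      A = rng.standard_normal((50, 50)) + 50 * np.eye(50)
      B = rng.standard_normal((50, 4))
      X = block_jacobi(A, B)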

  18. Bond Order Conservation Strategies in Catalysis Applied to the NH3 Decomposition Reaction

    DOE PAGES

    Yu, Liang; Abild-Pedersen, Frank

    2016-12-14

    On the basis of an extensive set of density functional theory calculations, it is shown that a simple scheme provides a fundamental understanding of variations in the transition state energies and structures of reaction intermediates on transition metal surfaces across the periodic table. The scheme is built on the bond order conservation principle and requires a limited set of input data, still achieving transition state energies as a function of simple descriptors with an error smaller than those of approaches based on linear fits to a set of calculated transition state energies. Here, we have applied this approach together with linear scaling of adsorption energies to obtain the energetics of the NH3 decomposition reaction on a series of stepped fcc(211) transition metal surfaces. Moreover, this information is used to establish a microkinetic model for the formation of N2 and H2, thus providing insight into the components of the reaction that determine the activity.

  19. [Computer-assisted phonetography as a diagnostic aid in functional dysphonia].

    PubMed

    Airainer, R; Klingholz, F

    1991-07-01

    A total of 160 voice-trained and untrained subjects with functional dysphonia were given a "clinical rating" according to their clinical findings. This was a certain value on a scale that recorded the degree of functional voice disorder ranging from a marked hypofunction to an extreme hyperfunction. The phonetograms of these patients were approximated by ellipses, whereby the definition and quantitative recording of several phonetogram parameters were rendered possible. By means of a linear combination of phonetogram parameters, a "calculated assessment" was obtained for each patient that was expected to tally with the "clinical rating". This paper demonstrates that a graduation of the dysphonic clinical picture with regard to the presence of hypofunctional or hyperfunctional components is possible via computerised phonetogram evaluation. In this case, the "calculated assessments" for both male and female singers and non-singers must be computed using different linear combinations. The method can be introduced as a supplementary diagnostic procedure in the diagnosis of functional dysphonia.

  20. Conductivity and transit time estimates of a soil liner

    USGS Publications Warehouse

    Krapac, I.G.; Cartwright, K.; Panno, S.V.; Hensel, B.R.; Rehfeldt, K.H.; Herzog, B.L.

    1990-01-01

    A field-scale soil liner was built to assess the feasibility of constructing a liner to meet the saturated hydraulic conductivity requirement of the U.S. EPA (i.e., less than 1 × 10⁻⁷ cm/s), and to determine the breakthrough and transit times of water and tracers through the liner. The liner, 8 × 15 × 0.9 m, was constructed in 15-cm compacted lifts using a 20,037-kg pad-foot compactor and standard engineering practices. Estimated saturated hydraulic conductivities were 2.4 × 10⁻⁹ cm/s, based on data from large-ring infiltrometers; 4.0 × 10⁻⁸ cm/s from small-ring infiltrometers; and 5.0 × 10⁻⁸ cm/s from a water-balance analysis. These estimates were derived from 1 year of monitoring water infiltration into the liner. Breakthrough of tracers at the base of the liner was estimated to be between 2 and 13 years, depending on the method of calculation and the assumptions used in the calculation.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Song, Jong-Won; Hirao, Kimihiko

    Long-range corrected density functional theory (LC-DFT) has attracted much attention from chemists as a quantum chemical method applicable to large molecular systems and their property calculations. However, the high computational cost of evaluating the long-range HF exchange is a major obstacle to applying it to large molecular systems and solid state materials. To address this problem, we propose a linear-scaling method for the HF exchange integration, in particular for the LC-DFT hybrid functional.

  2. Scale Up Considerations for Sediment Microbial Fuel Cells

    DTIC Science & Technology

    2013-01-01

    density calculations were made once WPs stabilized for each system. Linear sweep voltammetry was then used on these systems to generate polarization and...power density curves. The systems were allowed to equilibrate under open circuit conditions (about 12 h) before a potential sweep was performed with a...reference. The potential sweep was set to begin at the anode potential under open circuit conditions (20.4 V vs. Ag/AgCl) and was raised to the

  3. Extending the length and time scales of Gram–Schmidt Lyapunov vector computations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Costa, Anthony B., E-mail: acosta@northwestern.edu; Green, Jason R., E-mail: jason.green@umb.edu; Department of Chemistry, University of Massachusetts Boston, Boston, MA 02125

    Lyapunov vectors have found growing interest recently due to their ability to characterize systems out of thermodynamic equilibrium. The computation of orthogonal Gram–Schmidt vectors requires multiplication and QR decomposition of large matrices, which grow as N² (with the particle count). This expense has limited such calculations to relatively small systems and short time scales. Here, we detail two implementations of an algorithm for computing Gram–Schmidt vectors. The first is a distributed-memory message-passing method using Scalapack. The second uses the newly-released MAGMA library for GPUs. We compare the performance of both codes for Lennard–Jones fluids from N = 100 to 1300 between Intel Nehalem/Infiniband DDR and NVIDIA C2050 architectures. To our best knowledge, these are the largest systems for which the Gram–Schmidt Lyapunov vectors have been computed, and the first time their calculation has been GPU-accelerated. We conclude that Lyapunov vector calculations can be significantly extended in length and time by leveraging the power of GPU-accelerated linear algebra.
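    For orientation, the repeated tangent-space propagation and Gram–Schmidt (here QR) re-orthonormalization that such codes accelerate can be written down for a small toy map in a few lines; the sketch below is the textbook Benettin-type procedure applied to the Hénon map, not the Scalapack/MAGMA implementation discussed above.

      import numpy as np

      def lyapunov_spectrum_qr(jacobian, x0, step, nsteps, ndim):
          # Gram-Schmidt (QR) estimate of the Lyapunov spectrum of a discrete map.
          # `jacobian(x)` returns the ndim x ndim tangent map, `step(x)` advances the state.
          x = np.array(x0, dtype=float)
          Q = np.eye(ndim)
          sums = np.zeros(ndim)
          for _ in range(nsteps):
              Q, R = np.linalg.qr(jacobian(x) @ Q)    # propagate and re-orthonormalize
              sums += np.log(np.abs(np.diag(R)))      # accumulate local stretching rates
              x = step(x)
          return sums / nsteps

      # toy usage: Henon map (exponents approx +0.42 and -1.62)
      a, b = 1.4, 0.3
      step = lambda x: np.array([1 - a * x[0] ** 2 + x[1], b * x[0]])
      jac = lambda x: np.array([[-2 * a * x[0], 1.0], [b, 0.0]])
      print(lyapunov_spectrum_qr(jac, [0.1, 0.1], step, 10_000, 2))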

  4. Treecode-based generalized Born method

    NASA Astrophysics Data System (ADS)

    Xu, Zhenli; Cheng, Xiaolin; Yang, Haizhao

    2011-02-01

    We have developed a treecode-based O(N log N) algorithm for the generalized Born (GB) implicit solvation model. Our treecode-based GB (tGB) is based on the GBr6 [J. Phys. Chem. B 111, 3055 (2007)], an analytical GB method with a pairwise descreening approximation for the R6 volume integral expression. The algorithm is composed of a cutoff scheme for the effective Born radii calculation, and a treecode implementation of the GB charge-charge pair interactions. Test results demonstrate that the tGB algorithm can reproduce the vdW surface based Poisson solvation energy with an average relative error less than 0.6% while providing an almost linear-scaling calculation for a representative set of 25 proteins with different sizes (from 2815 atoms to 65456 atoms). For a typical system of 10k atoms, the tGB calculation is three times faster than the direct summation as implemented in the original GBr6 model. Thus, our tGB method provides an efficient way for performing implicit solvent GB simulations of larger biomolecular systems at longer time scales.

  5. Generating log-normal mock catalog of galaxies in redshift space

    NASA Astrophysics Data System (ADS)

    Agrawal, Aniket; Makiya, Ryu; Chiang, Chi-Ting; Jeong, Donghui; Saito, Shun; Komatsu, Eiichiro

    2017-10-01

    We present a public code to generate a mock galaxy catalog in redshift space assuming a log-normal probability density function (PDF) of galaxy and matter density fields. We draw galaxies by Poisson-sampling the log-normal field, and calculate the velocity field from the linearised continuity equation of matter fields, assuming zero vorticity. This procedure yields a PDF of the pairwise velocity fields that is qualitatively similar to that of N-body simulations. We check fidelity of the catalog, showing that the measured two-point correlation function and power spectrum in real space agree with the input precisely. We find that a linear bias relation in the power spectrum does not guarantee a linear bias relation in the density contrasts, leading to a cross-correlation coefficient of matter and galaxies deviating from unity on small scales. We also find that linearising the Jacobian of the real-to-redshift space mapping provides a poor model for the two-point statistics in redshift space. That is, non-linear redshift-space distortion is dominated by non-linearity in the Jacobian. The power spectrum in redshift space shows a damping on small scales that is qualitatively similar to that of the well-known Fingers-of-God (FoG) effect due to random velocities, except that the log-normal mock does not include random velocities. This damping is a consequence of non-linearity in the Jacobian, and thus attributing the damping of the power spectrum solely to FoG, as commonly done in the literature, is misleading.
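    A stripped-down, one-dimensional version of the log-normal Poisson-sampling step (without the velocity field and the real-to-redshift-space mapping) might look like the following; the grid size, the placeholder power spectrum and the mean galaxy density are arbitrary choices, not values from the paper.

      import numpy as np

      # 1D toy of the log-normal + Poisson galaxy sampling step (schematic only).
      rng = np.random.default_rng(42)
      ngrid, boxsize, nbar = 512, 1000.0, 0.05         # cells, box length, mean galaxies per cell

      # Gaussian field with a placeholder power spectrum P(k) ~ 1/k (arbitrary choice)
      k = 2 * np.pi * np.fft.rfftfreq(ngrid, d=boxsize / ngrid)
      pk = np.zeros_like(k); pk[1:] = 1.0 / k[1:]
      gk = (rng.standard_normal(k.size) + 1j * rng.standard_normal(k.size)) * np.sqrt(pk / 2)
      g = np.fft.irfft(gk, n=ngrid) * ngrid / np.sqrt(boxsize)   # rough normalization (placeholder)

      # log-normal density contrast: delta = exp(g - var/2) - 1 guarantees delta > -1
      delta = np.exp(g - g.var() / 2.0) - 1.0

      # Poisson-sample the number of galaxies cell by cell
      ngal = rng.poisson(nbar * (1.0 + delta))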

  6. Protein linear indices of the 'macromolecular pseudograph alpha-carbon atom adjacency matrix' in bioinformatics. Part 1: prediction of protein stability effects of a complete set of alanine substitutions in Arc repressor.

    PubMed

    Marrero-Ponce, Yovani; Medina-Marrero, Ricardo; Castillo-Garit, Juan A; Romero-Zaldivar, Vicente; Torrens, Francisco; Castro, Eduardo A

    2005-04-15

    A novel approach to bio-macromolecular design from a linear algebra point of view is introduced. A protein's total (whole protein) and local (one or more amino acid) linear indices are a new set of bio-macromolecular descriptors of relevance to protein QSAR/QSPR studies. These amino-acid level biochemical descriptors are based on the calculation of linear maps on R^n [f_k(x_mi): R^n → R^n] in the canonical basis. These bio-macromolecular indices are calculated from the kth power of the macromolecular pseudograph alpha-carbon atom adjacency matrix. Total linear indices are linear functionals on R^n. That is, the kth total linear indices are linear maps from R^n to the scalar R [f_k(x_m): R^n → R]. Thus, the kth total linear indices are calculated by summing the amino-acid linear indices of all amino acids in the protein molecule. A study of the protein stability effects for a complete set of alanine substitutions in the Arc repressor illustrates this approach. A quantitative model that discriminates near wild-type stability alanine mutants from the reduced-stability ones in a training series was obtained. This model permitted the correct classification of 97.56% (40/41) and 91.67% (11/12) of proteins in the training and test sets, respectively. It shows a high Matthews correlation coefficient (MCC = 0.952) for the training set and an MCC = 0.837 for the external prediction set. Additionally, canonical regression analysis corroborated the statistical quality of the classification model (Rcanc = 0.824). This analysis was also used to compute biological stability canonical scores for each Arc alanine mutant. On the other hand, the linear piecewise regression model compared favorably with respect to the linear regression one in predicting the melting temperature (tm) of the Arc alanine mutants. The linear model explains almost 81% of the variance of the experimental tm (R = 0.90 and s = 4.29) and the LOO press statistics evidenced its predictive ability (q2 = 0.72 and scv = 4.79). Moreover, the TOMOCOMD-CAMPS method produced a linear piecewise regression (R = 0.97) between protein backbone descriptors and tm values for alanine mutants of the Arc repressor. A break-point value of 51.87 degrees C characterized two mutant clusters and coincided perfectly with the experimental scale. For this reason, we can use the linear discriminant analysis and piecewise models in combination to classify and predict the stability of the mutant Arc homodimers. These models also permitted the interpretation of the driving forces of such a folding process, indicating that topologic/topographic protein backbone interactions control the stability profile of wild-type Arc and its alanine mutants.

  7. Run-up of Tsunamis in the Gulf of Mexico caused by the Chicxulub Impact Event

    NASA Astrophysics Data System (ADS)

    Weisz, R.; Wünnenmann, K.; Bahlburg, H.

    2003-04-01

    The Chicxulub impact event can be investigated on (1) local, (2) regional and (3) global scales. Our investigations focus on the regional scale, especially on the run-up of tsunami waves on the coast around the Gulf of Mexico caused by the impact. An impact produces two types of tsunami waves: (1) the rim wave and (2) the collapse wave. Both waves propagate over long distances and reach coastal areas. Depending on the tsunami wave characteristics, they have a potentially large influence on the coastal areas. Run-up distance and run-up height can be used as parameters for assessing this influence. To calculate these parameters, we are using a multi-material hydrocode (SALE) to simulate the generation of the tsunami wave and a non-linear shallow water approach for the propagation, and we implemented a special open boundary for considering the run-up of tsunami waves. With the help of the one-dimensional shallow water approach, we will give run-up heights and distances for the coastal area around the Gulf of Mexico. The calculations are done along several sections from the impact site towards the coast. These are a first approximation to run-up calculations for the entire coast of the Gulf of Mexico. The bathymetric data along the sections, used in the wave propagation and run-up, correspond to a linearized bathymetry of the recent Gulf of Mexico. Additionally, we will present preliminary results from our first two-dimensional experiments of propagation and run-up. These results will be compared with the one-dimensional approach.

  8. Linear actuation using milligram quantities of CL-20 and TAGDNAT.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Snedigar, Shane; Salton, Jonathan Robert; Tappan, Alexander Smith

    2009-07-01

    There are numerous applications for small-scale actuation utilizing pyrotechnics and explosives. In certain applications, especially when multiple actuation strokes are needed, or actuator reuse is required, it is desirable to have all gaseous combustion products with no condensed residue in the actuator cylinder. Toward this goal, we have performed experiments on utilizing milligram quantities of high explosives to drive a millimeter-diameter actuator with a stroke of 30 mm. Calculations were performed to select proper material quantities to provide 0.5 J of actuation energy. This was performed utilizing the thermochemical code Cheetah to calculate the impetus for numerous propellants and to select quantities based on estimated efficiencies of these propellants at small scales. Milligram quantities of propellants were loaded into a small-scale actuator and ignited with an ignition increment and hot wire ignition. Actuator combustion chamber pressure was monitored with a pressure transducer and actuator stroke was monitored using a laser displacement meter. Total actuation energy was determined by calculating the kinetic energy of reaction mass motion against gravity. Of the materials utilized, the best performance was obtained with a mixture of 2,4,6,8,10,12-hexanitro-2,4,6,8,10,12-hexaazaisowurtzitane (CL-20) and bis-triaminoguanidinium(3,3′-dinitroazotriazolate) (TAGDNAT).

  9. Molecular structure, vibrational spectra, NBO analysis, first hyperpolarizability, and HOMO-LUMO studies of 2-amino-4-hydroxypyrimidine by density functional method

    NASA Astrophysics Data System (ADS)

    Jeyavijayan, S.

    2015-04-01

    This study is a comparative analysis of FTIR and FT-Raman spectra of 2-amino-4-hydroxypyrimidine. The total energies of different conformations have been obtained from DFT (B3LYP) method with 6-31+G(d,p) and 6-311++G(d,p) basis sets. The barrier of planarity between the most stable and planar form is also predicted. The molecular structure, vibrational wavenumbers, infrared intensities, Raman scattering activities were calculated for the molecule using the B3LYP density functional theory (DFT) method. The computed values of frequencies are scaled using multiple scaling factors to yield good coherence with the observed values. Reliable vibrational assignments were made on the basis of total energy distribution (TED) along with scaled quantum mechanical (SQM) method. The stability of the molecule arising from hyperconjugative interactions, charge delocalization has been analyzed using natural bond orbital (NBO) analysis. Non-linear properties such as electric dipole moment (μ), polarizability (α), and hyperpolarizability (β) values of the investigated molecule have been computed using B3LYP quantum chemical calculation. The calculated HOMO and LUMO energies show that charge transfer occurs within the molecule. Besides, molecular electrostatic potential (MEP), Mulliken's charges analysis, and several thermodynamic properties were performed by the DFT method.

  10. Highly efficient implementation of pseudospectral time-dependent density-functional theory for the calculation of excitation energies of large molecules.

    PubMed

    Cao, Yixiang; Hughes, Thomas; Giesen, Dave; Halls, Mathew D; Goldberg, Alexander; Vadicherla, Tati Reddy; Sastry, Madhavi; Patel, Bhargav; Sherman, Woody; Weisman, Andrew L; Friesner, Richard A

    2016-06-15

    We have developed and implemented pseudospectral time-dependent density-functional theory (TDDFT) in the quantum mechanics package Jaguar to calculate restricted singlet and restricted triplet, as well as unrestricted excitation energies with either full linear response (FLR) or the Tamm-Dancoff approximation (TDA), with the pseudospectral length scales, pseudospectral atomic corrections, and pseudospectral multigrid strategy included in the implementations to improve the chemical accuracy and to speed up the pseudospectral calculations. The calculations based on pseudospectral time-dependent density-functional theory with full linear response (PS-FLR-TDDFT) and within the Tamm-Dancoff approximation (PS-TDA-TDDFT) for G2 set molecules using B3LYP/6-31G** show mean and maximum absolute deviations of 0.0015 eV and 0.0081 eV, 0.0007 eV and 0.0064 eV, 0.0004 eV and 0.0022 eV for restricted singlet excitation energies, restricted triplet excitation energies, and unrestricted excitation energies, respectively, compared with the results calculated from the conventional spectral method. The application of PS-FLR-TDDFT to OLED molecules and organic dyes, as well as comparisons of results calculated from PS-FLR-TDDFT with best estimates, demonstrates the accuracy of both PS-FLR-TDDFT and PS-TDA-TDDFT. Calculations for a set of medium-sized molecules, including Cn fullerenes and nanotubes, using the B3LYP functional and 6-31G(**) basis set show PS-TDA-TDDFT provides 19- to 34-fold speedups for Cn fullerenes with 450-1470 basis functions, 11- to 32-fold speedups for nanotubes with 660-3180 basis functions, and 9- to 16-fold speedups for organic molecules with 540-1340 basis functions compared to fully analytic calculations without sacrificing chemical accuracy. The calculations on a set of larger molecules, including the antibiotic drug Ramoplanin, the 46-residue crambin protein, fullerenes up to C540 and nanotubes up to 14×(6,6), using the B3LYP functional and 6-31G(**) basis set with up to 8100 basis functions show that PS-FLR-TDDFT CPU time scales as N^2.05 with the number of basis functions. © 2016 Wiley Periodicals, Inc.

  11. Iterative initial condition reconstruction

    NASA Astrophysics Data System (ADS)

    Schmittfull, Marcel; Baldauf, Tobias; Zaldarriaga, Matias

    2017-07-01

    Motivated by recent developments in perturbative calculations of the nonlinear evolution of large-scale structure, we present an iterative algorithm to reconstruct the initial conditions in a given volume starting from the dark matter distribution in real space. In our algorithm, objects are first moved back iteratively along estimated potential gradients, with a progressively reduced smoothing scale, until a nearly uniform catalog is obtained. The linear initial density is then estimated as the divergence of the cumulative displacement, with an optional second-order correction. This algorithm should undo nonlinear effects up to one-loop order, including the higher-order infrared resummation piece. We test the method using dark matter simulations in real space. At redshift z = 0, we find that after eight iterations the reconstructed density is more than 95% correlated with the initial density at k ≤ 0.35 h Mpc⁻¹. The reconstruction also reduces the power in the difference between reconstructed and initial fields by more than 2 orders of magnitude at k ≤ 0.2 h Mpc⁻¹, and it extends the range of scales where the full broadband shape of the power spectrum matches linear theory by a factor of 2-3. As a specific application, we consider measurements of the baryonic acoustic oscillation (BAO) scale that can be improved by reducing the degradation effects of large-scale flows. In our idealized dark matter simulations, the method improves the BAO signal-to-noise ratio by a factor of 2.7 at z = 0 and by a factor of 2.5 at z = 0.6, improving standard BAO reconstruction by 70% at z = 0 and 30% at z = 0.6, and matching the optimal BAO signal and signal-to-noise ratio of the linear density in the same volume. For BAO, the iterative nature of the reconstruction is the most important aspect.
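    A hypothetical one-dimensional toy of the loop described above (estimate a Zel'dovich-like displacement from the smoothed density, move the objects back, shrink the smoothing, repeat, and finally read the linear density off the divergence of the cumulative displacement) is sketched below. The grid sizes, smoothing schedule and the random "evolved" catalog are all placeholders, and the sign convention follows δ = -dΨ/dx.

      import numpy as np

      # Hypothetical 1D toy of iterative initial-condition reconstruction.
      ngrid, boxsize, niter = 256, 100.0, 8
      cell = boxsize / ngrid
      kgrid = 2.0 * np.pi * np.fft.fftfreq(ngrid, d=cell)
      grid_x = np.arange(ngrid) * cell

      def density_contrast(pos):
          counts, _ = np.histogram(pos % boxsize, bins=ngrid, range=(0.0, boxsize))
          return counts / counts.mean() - 1.0

      def displacement(delta, smooth):
          # Psi_k = i k delta_k / k^2 (1D), with Gaussian smoothing of scale `smooth`
          dk = np.fft.fft(delta) * np.exp(-0.5 * (kgrid * smooth) ** 2)
          psik = np.zeros_like(dk)
          psik[1:] = 1j * dk[1:] / kgrid[1:]
          return np.real(np.fft.ifft(psik))

      rng = np.random.default_rng(1)
      pos = rng.uniform(0.0, boxsize, 4 * ngrid)       # placeholder "evolved" positions
      cum_psi = np.zeros(ngrid)

      for it in range(niter):
          smooth = max(4.0 * cell * (1.0 - it / niter), cell)   # progressively reduced smoothing
          psi = displacement(density_contrast(pos), smooth)
          pos -= np.interp(pos % boxsize, grid_x, psi, period=boxsize)  # move objects back
          cum_psi += psi

      delta_lin = -np.gradient(cum_psi, cell)          # divergence of the cumulative displacement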

  12. A model for tides and currents in the English Channel and southern North Sea

    USGS Publications Warehouse

    Walters, Roy A.

    1987-01-01

    The amplitude and phase of 11 tidal constituents for the English Channel and southern North Sea are calculated using a frequency domain, finite element model. The governing equations - the shallow water equations - are modified such that sea level is calculated using an elliptic equation of the Helmholtz type, followed by a back-calculation of velocity using the primitive momentum equations. Triangular elements with linear basis functions are used. The modified form of the governing equations provides stable solutions with little numerical noise. In this field-scale test problem, the model was able to produce the details of the structure of 11 tidal constituents including O1, K1, M2, S2, N2, K2, M4, MS4, MN4, M6, and 2MS6.

  13. Influence of Turbulent Flow and Fractal Scaling on Effective Permeability of Fracture Network

    NASA Astrophysics Data System (ADS)

    Zhu, J.

    2017-12-01

    A new approach is developed to calculate the hydraulic-gradient-dependent effective permeability of a fractal fracture network in which both laminar and turbulent flow may occur in individual fractures. A critical fracture length is used to distinguish the flow characteristics in individual fractures. The developed solutions can be used for the case of a general scaling relationship, an extension of the linear scaling. We examine the impact of the fractal fracture network characteristics, which include the fractal scaling coefficient and exponent, the fractal dimension, and the ratio of minimum to maximum fracture length, on the effective permeability of the network. Results demonstrate that the developed solution can explain more of the variation of the effective permeability in relation to the fractal dimensions estimated from field observations. At high hydraulic gradient the effective permeability decreases with the fractal scaling exponent, but it increases with the fractal scaling exponent at low gradient. The effective permeability increases with the scaling coefficient, fractal dimension, fracture length ratio and maximum fracture length.

  14. Representation of the exact relativistic electronic Hamiltonian within the regular approximation

    NASA Astrophysics Data System (ADS)

    Filatov, Michael; Cremer, Dieter

    2003-12-01

    The exact relativistic Hamiltonian for electronic states is expanded in terms of energy-independent linear operators within the regular approximation. An effective relativistic Hamiltonian has been obtained, which yields in lowest order directly the infinite-order regular approximation (IORA) rather than the zeroth-order regular approximation method. Further perturbational expansion of the exact relativistic electronic energy utilizing the effective Hamiltonian leads to new methods based on ordinary (IORAn) or double [IORAn(2)] perturbation theory (n: order of expansion), which provide improved energies in atomic calculations. Energies calculated with IORA4 and IORA3(2) are accurate up to c⁻²⁰. Furthermore, IORA is improved by using the IORA wave function to calculate the Rayleigh quotient, which, if minimized, leads to the exact relativistic energy. The outstanding performance of this new IORA method, coined scaled IORA, is documented in atomic and molecular calculations.

  15. An efficient linear-scaling CCSD(T) method based on local natural orbitals.

    PubMed

    Rolik, Zoltán; Szegedy, Lóránt; Ladjánszki, István; Ladóczki, Bence; Kállay, Mihály

    2013-09-07

    An improved version of our general-order local coupled-cluster (CC) approach [Z. Rolik and M. Kállay, J. Chem. Phys. 135, 104111 (2011)] and its efficient implementation at the CC singles and doubles with perturbative triples [CCSD(T)] level is presented. The method combines the cluster-in-molecule approach of Li and co-workers [J. Chem. Phys. 131, 114109 (2009)] with frozen natural orbital (NO) techniques. To break down the unfavorable fifth-power scaling of our original approach a two-level domain construction algorithm has been developed. First, an extended domain of localized molecular orbitals (LMOs) is assembled based on the spatial distance of the orbitals. The necessary integrals are evaluated and transformed in these domains invoking the density fitting approximation. In the second step, for each occupied LMO of the extended domain a local subspace of occupied and virtual orbitals is constructed including approximate second-order Møller-Plesset NOs. The CC equations are solved and the perturbative corrections are calculated in the local subspace for each occupied LMO using a highly-efficient CCSD(T) code, which was optimized for the typical sizes of the local subspaces. The total correlation energy is evaluated as the sum of the individual contributions. The computation time of our approach scales linearly with the system size, while its memory and disk space requirements are independent thereof. Test calculations demonstrate that currently our method is one of the most efficient local CCSD(T) approaches and can be routinely applied to molecules of up to 100 atoms with reasonable basis sets.

  16. Development of WRF-CO2 4DVAR Data Assimilation System

    NASA Astrophysics Data System (ADS)

    Zheng, T.; French, N. H. F.

    2016-12-01

    Four-dimensional variational (4DVar) assimilation systems have been widely used for CO2 inverse modeling at the global scale. At the regional scale, however, 4DVar assimilation systems have been lacking. At present, most regional CO2 inverse models use Lagrangian particle backward trajectory tools to compute influence functions in an analytical/synthesis framework. To provide a 4DVar based alternative, we developed WRF-CO2 4DVAR based on the Weather Research and Forecasting (WRF) model, its chemistry extension (WRF-Chem), and its data assimilation system (WRFDA/WRFPLUS). Different from WRFDA, WRF-CO2 4DVAR does not optimize the meteorology initial condition; instead it solves for the optimized CO2 surface fluxes (sources/sinks) constrained by atmospheric CO2 observations. Based on WRFPLUS, we developed tangent linear and adjoint code for CO2 emission, advection, vertical mixing in the boundary layer, and convective transport. Furthermore, we implemented an incremental algorithm to solve for optimized CO2 emission scaling factors by iteratively minimizing the cost function in a Bayesian framework. The model sensitivity (of atmospheric CO2 with respect to the emission scaling factors) calculated by the tangent linear and adjoint model agrees well with that calculated by finite differences, indicating the validity of the newly developed code. The effectiveness of WRF-CO2 4DVar for inverse modeling is tested using forward-model generated pseudo-observation data in two experiments: the first-guess CO2 fluxes have a 50% overestimation in the first case and a 50% underestimation in the second. In both cases, WRF-CO2 4DVar reduces the cost function to less than 10⁻⁴ of its initial value in less than 20 iterations and successfully recovers the true values of the emission scaling factors. We expect future applications of WRF-CO2 4DVar with satellite observations will provide insights for CO2 regional inverse modeling, including the impacts of model transport error in vertical mixing.
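    The tangent-linear/adjoint consistency check mentioned above (comparing an adjoint-computed sensitivity with a finite-difference estimate) can be sketched generically as below; the "transport" operator is just a random placeholder matrix, not WRF-CO2, and all names are illustrative.

      import numpy as np

      # Generic adjoint-vs-finite-difference sensitivity check for a toy cost function
      # J(lam) = 0.5 * (H lam - y)^T R^-1 (H lam - y), with H a placeholder linear map.
      rng = np.random.default_rng(0)
      nobs, nlam = 30, 5
      H = rng.standard_normal((nobs, nlam))            # placeholder "transport + sampling" operator
      y_obs = rng.standard_normal(nobs)
      R_inv = np.eye(nobs)                             # observation-error weighting

      def cost(lam):
          d = H @ lam - y_obs
          return 0.5 * d @ R_inv @ d

      def adjoint_gradient(lam):
          # adjoint of the (linear) forward map applied to the weighted residual
          return H.T @ (R_inv @ (H @ lam - y_obs))

      lam0 = np.ones(nlam)
      g_adj = adjoint_gradient(lam0)

      eps = 1e-6                                       # central finite differences, one component at a time
      g_fd = np.array([(cost(lam0 + eps * e) - cost(lam0 - eps * e)) / (2 * eps)
                       for e in np.eye(nlam)])

      print(np.max(np.abs(g_adj - g_fd)))              # should be round-off small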

  17. Sparse maps—A systematic infrastructure for reduced-scaling electronic structure methods. I. An efficient and simple linear scaling local MP2 method that uses an intermediate basis of pair natural orbitals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pinski, Peter; Riplinger, Christoph; Neese, Frank, E-mail: evaleev@vt.edu, E-mail: frank.neese@cec.mpg.de

    2015-07-21

    In this work, a systematic infrastructure is described that formalizes concepts implicit in previous work and greatly simplifies computer implementation of reduced-scaling electronic structure methods. The key concept is sparse representation of tensors using chains of sparse maps between two index sets. Sparse map representation can be viewed as a generalization of compressed sparse row, a common representation of a sparse matrix, to tensor data. By combining a few elementary operations on sparse maps (inversion, chaining, intersection, etc.), complex algorithms can be developed, illustrated here by a linear-scaling transformation of three-center Coulomb integrals based on our compact code library that implements sparse maps and operations on them. The sparsity of the three-center integrals arises from the spatial locality of the basis functions and the domain density fitting approximation. A novel feature of our approach is the use of differential overlap integrals computed in linear-scaling fashion for screening products of basis functions. Finally, a robust linear-scaling domain-based local pair natural orbital second-order Møller-Plesset (DLPNO-MP2) method is described based on the sparse map infrastructure that only depends on a minimal number of cutoff parameters that can be systematically tightened to approach 100% of the canonical MP2 correlation energy. With default truncation thresholds, DLPNO-MP2 recovers more than 99.9% of the canonical resolution of the identity MP2 (RI-MP2) energy while still showing a very early crossover with respect to the computational effort. Based on extensive benchmark calculations, relative energies are reproduced with an error of typically <0.2 kcal/mol. The efficiency of the local MP2 (LMP2) method can be drastically improved by carrying out the LMP2 iterations in a basis of pair natural orbitals. While the present work focuses on local electron correlation, it is of much broader applicability to computation with sparse tensors in quantum chemistry and beyond.
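
    The elementary sparse-map operations named above (inversion, chaining, intersection) can be mimicked with ordinary dictionaries of index lists. This is an illustrative sketch of the data structure with invented orbital/shell/auxiliary index data, not the code library described in the paper.

```python
# Sparse maps as dictionaries from one index set to sparse lists of another,
# loosely analogous to compressed sparse row; data are invented.
from collections import defaultdict

def chain(a_to_b, b_to_c):
    """Compose two sparse maps: a -> union of c reachable via some b."""
    out = defaultdict(set)
    for a, bs in a_to_b.items():
        for b in bs:
            out[a] |= set(b_to_c.get(b, ()))
    return {a: sorted(cs) for a, cs in out.items()}

def invert(a_to_b):
    """Swap domain and range of a sparse map."""
    out = defaultdict(set)
    for a, bs in a_to_b.items():
        for b in bs:
            out[b].add(a)
    return {b: sorted(xs) for b, xs in out.items()}

def intersect(m1, m2):
    """Keep only targets present in both maps, key by key."""
    return {k: sorted(set(m1[k]) & set(m2[k])) for k in m1.keys() & m2.keys()}

orbital_to_shell = {0: [0, 1], 1: [1, 2]}
shell_to_auxiliary = {0: [10], 1: [10, 11], 2: [12]}
print(chain(orbital_to_shell, shell_to_auxiliary))   # {0: [10, 11], 1: [10, 11, 12]}
print(invert(orbital_to_shell))                      # {0: [0], 1: [0, 1], 2: [1]}
```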

  18. Rapid sampling of stochastic displacements in Brownian dynamics simulations

    NASA Astrophysics Data System (ADS)

    Fiore, Andrew M.; Balboa Usabiaga, Florencio; Donev, Aleksandar; Swan, James W.

    2017-03-01

    We present a new method for sampling stochastic displacements in Brownian Dynamics (BD) simulations of colloidal scale particles. The method relies on a new formulation for Ewald summation of the Rotne-Prager-Yamakawa (RPY) tensor, which guarantees that the real-space and wave-space contributions to the tensor are independently symmetric and positive-definite for all possible particle configurations. Brownian displacements are drawn from a superposition of two independent samples: a wave-space (far-field or long-ranged) contribution, computed using techniques from fluctuating hydrodynamics and non-uniform fast Fourier transforms; and a real-space (near-field or short-ranged) correction, computed using a Krylov subspace method. The combined computational complexity of drawing these two independent samples scales linearly with the number of particles. The proposed method circumvents the super-linear scaling exhibited by all known iterative sampling methods applied directly to the RPY tensor, which results from the power-law growth of the condition number of the tensor with the number of particles. For geometrically dense microstructures (fractal dimension equal to three), the performance is independent of volume fraction, while for tenuous microstructures (fractal dimension less than three), such as gels and polymer solutions, the performance improves with decreasing volume fraction. This is in stark contrast with other related linear-scaling methods such as the force coupling method and the fluctuating immersed boundary method, for which performance degrades with decreasing volume fraction. Calculations for hard sphere dispersions and colloidal gels are illustrated and used to explore the role of microstructure on the performance of the algorithm. In practice, the logarithmic part of the predicted scaling is not observed and the algorithm scales linearly for up to 4 × 10^6 particles, obtaining speed-ups of over an order of magnitude over existing iterative methods, and making the cost of computing Brownian displacements comparable to the cost of computing deterministic displacements in BD simulations. A high-performance implementation employing non-uniform fast Fourier transforms implemented on graphics processing units and integrated with the software package HOOMD-blue is used for benchmarking.
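
    The sampling principle exploited above is that a sum of two independent Gaussian samples, drawn with the wave-space and real-space parts of the mobility as covariances, has the full mobility as its covariance. The toy check below uses small dense symmetric positive-definite matrices and Cholesky factors in place of the Ewald-split RPY tensor and the fluctuating-hydrodynamics/Krylov samplers.

```python
# Check that x = L_w @ xi1 + L_r @ xi2 has covariance M_wave + M_real when
# xi1, xi2 are independent standard normals; matrices are toy SPD stand-ins.
import numpy as np

rng = np.random.default_rng(1)
n = 6
A = rng.normal(size=(n, n)); M_wave = A @ A.T + n * np.eye(n)
B = rng.normal(size=(n, n)); M_real = B @ B.T + n * np.eye(n)

L_w, L_r = np.linalg.cholesky(M_wave), np.linalg.cholesky(M_real)
samples = np.array([L_w @ rng.standard_normal(n) + L_r @ rng.standard_normal(n)
                    for _ in range(200_000)])
print(np.allclose(np.cov(samples.T), M_wave + M_real, atol=0.5))   # True
```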

  19. Sparse maps—A systematic infrastructure for reduced-scaling electronic structure methods. I. An efficient and simple linear scaling local MP2 method that uses an intermediate basis of pair natural orbitals.

    PubMed

    Pinski, Peter; Riplinger, Christoph; Valeev, Edward F; Neese, Frank

    2015-07-21

    In this work, a systematic infrastructure is described that formalizes concepts implicit in previous work and greatly simplifies computer implementation of reduced-scaling electronic structure methods. The key concept is sparse representation of tensors using chains of sparse maps between two index sets. Sparse map representation can be viewed as a generalization of compressed sparse row, a common representation of a sparse matrix, to tensor data. By combining a few elementary operations on sparse maps (inversion, chaining, intersection, etc.), complex algorithms can be developed, illustrated here by a linear-scaling transformation of three-center Coulomb integrals based on our compact code library that implements sparse maps and operations on them. The sparsity of the three-center integrals arises from the spatial locality of the basis functions and the domain density fitting approximation. A novel feature of our approach is the use of differential overlap integrals computed in linear-scaling fashion for screening products of basis functions. Finally, a robust linear-scaling domain-based local pair natural orbital second-order Møller-Plesset (DLPNO-MP2) method is described based on the sparse map infrastructure that only depends on a minimal number of cutoff parameters that can be systematically tightened to approach 100% of the canonical MP2 correlation energy. With default truncation thresholds, DLPNO-MP2 recovers more than 99.9% of the canonical resolution of the identity MP2 (RI-MP2) energy while still showing a very early crossover with respect to the computational effort. Based on extensive benchmark calculations, relative energies are reproduced with an error of typically <0.2 kcal/mol. The efficiency of the local MP2 (LMP2) method can be drastically improved by carrying out the LMP2 iterations in a basis of pair natural orbitals. While the present work focuses on local electron correlation, it is of much broader applicability to computation with sparse tensors in quantum chemistry and beyond.

  20. Simulations of nanocrystals under pressure: combining electronic enthalpy and linear-scaling density-functional theory.

    PubMed

    Corsini, Niccolò R C; Greco, Andrea; Hine, Nicholas D M; Molteni, Carla; Haynes, Peter D

    2013-08-28

    We present an implementation in a linear-scaling density-functional theory code of an electronic enthalpy method, which has been found to be natural and efficient for the ab initio calculation of finite systems under hydrostatic pressure. Based on a definition of the system volume as that enclosed within an electronic density isosurface [M. Cococcioni, F. Mauri, G. Ceder, and N. Marzari, Phys. Rev. Lett. 94, 145501 (2005)], it supports both geometry optimizations and molecular dynamics simulations. We introduce an approach for calibrating the parameters defining the volume in the context of geometry optimizations and discuss their significance. Results in good agreement with simulations using explicit solvents are obtained, validating our approach. Size-dependent pressure-induced structural transformations and variations in the energy gap of hydrogenated silicon nanocrystals are investigated, including one comparable in size to recent experiments. A detailed analysis of the polyamorphic transformations reveals three types of amorphous structures and their persistence on depressurization is assessed.
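
    A hedged sketch of the central quantity in an electronic-enthalpy approach, H = E + PV, with the volume V defined by an electronic-density isosurface as in the definition cited above. The Gaussian toy density, grid, pressure, and isovalue are illustrative choices only.

```python
# Electronic enthalpy H = E + P*V with V taken as the volume enclosed by a
# density isosurface, approximated by counting voxels above the isovalue.
import numpy as np

def isosurface_volume(density, voxel_volume, isovalue):
    """Total volume of voxels where the density exceeds the isovalue."""
    return np.count_nonzero(density > isovalue) * voxel_volume

# toy density on a cubic grid: a single Gaussian blob
L, n = 10.0, 64
x = np.linspace(-L / 2, L / 2, n)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
rho = np.exp(-(X**2 + Y**2 + Z**2))
dV = (L / (n - 1)) ** 3

E, P = -1.234, 0.05                     # toy energy and pressure (arbitrary units)
V = isosurface_volume(rho, dV, isovalue=1e-2)
print("volume =", round(V, 2), " enthalpy H = E + P*V =", round(E + P * V, 3))
```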

  1. Simulations of nanocrystals under pressure: Combining electronic enthalpy and linear-scaling density-functional theory

    NASA Astrophysics Data System (ADS)

    Corsini, Niccolò R. C.; Greco, Andrea; Hine, Nicholas D. M.; Molteni, Carla; Haynes, Peter D.

    2013-08-01

    We present an implementation in a linear-scaling density-functional theory code of an electronic enthalpy method, which has been found to be natural and efficient for the ab initio calculation of finite systems under hydrostatic pressure. Based on a definition of the system volume as that enclosed within an electronic density isosurface [M. Cococcioni, F. Mauri, G. Ceder, and N. Marzari, Phys. Rev. Lett. 94, 145501 (2005); 10.1103/PhysRevLett.94.145501], it supports both geometry optimizations and molecular dynamics simulations. We introduce an approach for calibrating the parameters defining the volume in the context of geometry optimizations and discuss their significance. Results in good agreement with simulations using explicit solvents are obtained, validating our approach. Size-dependent pressure-induced structural transformations and variations in the energy gap of hydrogenated silicon nanocrystals are investigated, including one comparable in size to recent experiments. A detailed analysis of the polyamorphic transformations reveals three types of amorphous structures and their persistence on depressurization is assessed.

  2. Next Generation Extended Lagrangian Quantum-based Molecular Dynamics

    NASA Astrophysics Data System (ADS)

    Negre, Christian

    2017-06-01

    A new framework for extended Lagrangian first-principles molecular dynamics simulations is presented, which overcomes shortcomings of regular, direct Born-Oppenheimer molecular dynamics while maintaining important advantages of the unified extended Lagrangian formulation of density functional theory pioneered by Car and Parrinello three decades ago. The new framework allows, for the first time, energy-conserving, linear-scaling Born-Oppenheimer molecular dynamics simulations, which is necessary to study larger and more realistic systems over longer simulation times than previously possible. Expensive self-consistent-field optimizations are avoided and normal integration time steps of regular, direct Born-Oppenheimer molecular dynamics can be used. Linear-scaling electronic structure theory is presented using a graph-based approach that is ideal for parallel calculations on hybrid computer platforms. For the first time, quantum-based Born-Oppenheimer molecular dynamics simulation is becoming a practically feasible approach for simulations of 100,000+ atoms, representing a competitive alternative to classical polarizable force field methods. In collaboration with: Anders Niklasson, Los Alamos National Laboratory.
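
    The extended-Lagrangian idea can be caricatured with a single auxiliary variable propagated by a Verlet-like integrator so that it shadows the instantaneous ground state without a self-consistent optimization at each step. Everything below (the model ground state, nuclear trajectory, time step, and coupling) is an illustrative assumption, not the scheme presented in the talk.

```python
# Caricature of extended-Lagrangian propagation: the auxiliary variable n is
# integrated with a Verlet-like update and stays close to the ground state q(R)
# without being re-optimized at every step.
import numpy as np

def ground_state(R):                   # toy "exact" electronic ground state
    return np.tanh(R)

def R(t):                              # prescribed, slowly varying nuclear coordinate
    return np.sin(0.3 * t)

dt, omega2 = 0.05, 100.0               # dt**2 * omega2 = 0.25, inside the stability limit
n_prev = n_curr = ground_state(R(0.0))

for k in range(1, 400):
    t = k * dt
    q = ground_state(R(t))             # in a real scheme: a cheap update built around n_curr
    n_next = 2 * n_curr - n_prev + dt**2 * omega2 * (q - n_curr)
    n_prev, n_curr = n_curr, n_next

print("auxiliary variable:", round(float(n_curr), 4),
      " target ground state:", round(float(ground_state(R(t))), 4))
```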

  3. Scattering theory of nonlinear thermoelectricity in quantum coherent conductors.

    PubMed

    Meair, Jonathan; Jacquod, Philippe

    2013-02-27

    We construct a scattering theory of weakly nonlinear thermoelectric transport through sub-micron scale conductors. The theory incorporates the leading nonlinear contributions in temperature and voltage biases to the charge and heat currents. Because of the finite capacitances of sub-micron scale conducting circuits, fundamental conservation laws such as gauge invariance and current conservation require special care to be preserved. We do this by extending the approach of Christen and Büttiker (1996 Europhys. Lett. 35 523) to coupled charge and heat transport. In this way we write relations connecting nonlinear transport coefficients in a manner similar to Mott's relation between the linear thermopower and the linear conductance. We derive sum rules that nonlinear transport coefficients must satisfy to preserve gauge invariance and current conservation. We illustrate our theory by calculating the efficiency of heat engines and the coefficient of performance of thermoelectric refrigerators based on quantum point contacts and resonant tunneling barriers. We identify, in particular, rectification effects that increase device performance.

  4. Linear-scaling generation of potential energy surfaces using a double incremental expansion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    König, Carolin, E-mail: carolink@kth.se; Christiansen, Ove, E-mail: ove@chem.au.dk

    We present a combination of the incremental expansion of potential energy surfaces (PESs), known as n-mode expansion, with the incremental evaluation of the electronic energy in a many-body approach. The application of semi-local coordinates in this context allows the generation of PESs in a very cost-efficient way. For this, we employ the recently introduced flexible adaptation of local coordinates of nuclei (FALCON) coordinates. By introducing an additional transformation step, concerning only a fraction of the vibrational degrees of freedom, we can achieve linear scaling of the accumulated cost of the single point calculations required in the PES generation. Numerical examples of these double incremental approaches for oligo-phenyl systems show fast convergence with respect to the maximum number of simultaneously treated fragments and only a modest error introduced by the additional transformation step. The approach presented here represents a major step towards the applicability of vibrational wave function methods to sizable, covalently bound systems.
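
    The incremental (n-mode) expansion underlying the approach can be written compactly for a toy potential: one-mode cuts are summed first, then pairwise corrections are added. The potential V_full below is a stand-in for an electronic-structure single-point calculation; because it contains only pairwise couplings, the two-mode expansion reproduces it exactly.

```python
# Two-mode incremental (n-mode) expansion of a toy potential:
# V(q) ~ V0 + sum_i [V_i - V0] + sum_{i<j} [V_ij - V_i - V_j + V0].
import numpy as np
from itertools import combinations

def V_full(q):                          # toy coupled potential (pairwise couplings only)
    q = np.asarray(q, dtype=float)
    return np.sum(0.5 * q**2) + 0.1 * np.sum(q[:-1] * q[1:] ** 2)

def n_mode_estimate(q, order=2):
    q = np.asarray(q, dtype=float)
    n = len(q)
    def cut(modes):                     # evaluate V with only `modes` displaced
        qc = np.zeros(n)
        qc[list(modes)] = q[list(modes)]
        return V_full(qc)
    V0 = cut(())
    one = {i: cut((i,)) - V0 for i in range(n)}
    est = V0 + sum(one.values())
    if order >= 2:
        for i, j in combinations(range(n), 2):
            est += cut((i, j)) - V0 - one[i] - one[j]
    return est

q = np.array([0.3, -0.2, 0.5, 0.1])
print(V_full(q), n_mode_estimate(q))    # identical here: couplings are pairwise
```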

  5. The Determination of the Large-Scale Circulation of the Pacific Ocean from Satellite Altimetry using Model Green's Functions

    NASA Technical Reports Server (NTRS)

    Stammer, Detlef; Wunsch, Carl

    1996-01-01

    A Green's function method for obtaining an estimate of the ocean circulation using both a general circulation model and altimetric data is demonstrated. The fundamental assumption is that the model is so accurate that the differences between the observations and the model-estimated fields obey linear dynamics. In the present case, the calculations are demonstrated for model/data differences occurring on a very large scale, where the linearization hypothesis appears to be a good one. A semi-automatic linearization of the Bryan/Cox general circulation model is effected by calculating the model response to a series of isolated (in both space and time) geostrophically balanced vortices. These resulting impulse responses or 'Green's functions' then provide the kernels for a linear inverse problem. The method is first demonstrated with a set of 'twin experiments' and then with real data spanning the entire model domain and a year of TOPEX/POSEIDON observations. Our present focus is on the estimate of the time-mean and annual cycle of the model. Residuals of the inversion/assimilation are largest in the western tropical Pacific, and are believed to reflect primarily geoid error. Vertical resolution diminishes with depth with 1 year of data. The model mean is modified such that the subtropical gyre is weakened by about 1 cm/s and the center of the gyre shifted southward by about 10 deg. Corrections to the flow field at the annual cycle suggest that the dynamical response is weak except in the tropics, where the estimated seasonal cycle of the low-latitude current system is of the order of 2 cm/s. The underestimation of observed fluctuations can be related to the inversion on the coarse spatial grid, which does not permit full resolution of the tropical physics. The methodology is easily extended to higher resolution, to the use of spatially correlated errors, and to other data types.
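
    The inverse problem described above has a simple skeleton: the model responses to isolated impulses form the columns of a kernel matrix, and the observed model/data differences are fit by linear least squares. The sketch below uses a toy kernel of smooth bumps rather than general-circulation-model Green's functions.

```python
# Green's-function inversion skeleton: columns of G are impulse responses,
# amplitudes are recovered by linear least squares; all data are synthetic.
import numpy as np

rng = np.random.default_rng(2)
n_data, n_impulses = 40, 8

x = np.linspace(0.0, 1.0, n_data)
centers = np.linspace(0.1, 0.9, n_impulses)
G = np.exp(-((x[:, None] - centers[None, :]) / 0.08) ** 2)   # toy impulse responses

true_amplitudes = rng.normal(size=n_impulses)
data = G @ true_amplitudes + 0.01 * rng.normal(size=n_data)  # "observed" misfit

estimate, *_ = np.linalg.lstsq(G, data, rcond=None)
print(np.round(true_amplitudes, 2))
print(np.round(estimate, 2))
```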

  6. TU-EF-304-06: A Comparison of CT Number to Relative Linear Stopping Power Conversion Curves Used by Proton Therapy Centers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taylor, P; Lowenstein, J; Kry, S

    Purpose: To compare the CT Number (CTN) to Relative Linear Stopping Power (RLSP) conversion curves used by 14 proton institutions in their dose calculations. Methods: The proton institutions' CTN to RLSP conversion curves were collected by the Imaging and Radiation Oncology Core (IROC) Houston QA Center during its on-site dosimetry review audits. The CTN values were converted to scaled CT Numbers. The scaling assigns a CTN of 0 to air and 1000 to water to allow intercomparison. The conversion curves were compared and the mean curve was calculated based on institutions' predicted RLSP values for air (CTN 0), lung (CTN 250), fat (CTN 950), water (CTN 1000), liver (CTN 1050), and bone (CTN 2000) points. Results: One institution's curve was found to have a unique shape between the scaled CTN of 1025 and 1225. This institution modified its curve based on the findings. Another institution had higher RLSP values than expected for both low and high CTNs. This institution recalibrated its two CT scanners and the new data placed its curve closer to the mean of all institutions. After corrections were made to several conversion curves, four institutions still fall outside two standard deviations at very low CTNs (100-200), and two institutions fall outside between CTN 850-900. The largest percent difference in RLSP values between institutions for the specific tissues reviewed was 22% for the lung point. Conclusion: The review and comparison of CTN to RLSP conversion curves allows IROC Houston to identify any outliers and make recommendations for improvement. Several institutions improved their clinical dose calculation accuracy as a result of this review. There is still room for improvement, particularly in the lung region of the curve. The IROC Houston QA Center is supported by NCI grant CA180803.
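
    The intercomparison step can be sketched as follows: each institution's conversion curve is interpolated at a few scaled CT numbers (air = 0, water = 1000) and the spread across institutions is reported at each tissue point. The two curves below are invented for illustration.

```python
# Interpolate each (invented) institutional curve at fixed tissue points and
# report the mean and spread of the predicted RLSP values.
import numpy as np

tissue_points = {"air": 0, "lung": 250, "fat": 950, "water": 1000,
                 "liver": 1050, "bone": 2000}

curves = {   # (scaled CT numbers, relative linear stopping powers)
    "institution_A": (np.array([0, 250, 950, 1000, 1050, 2000]),
                      np.array([0.001, 0.25, 0.95, 1.00, 1.05, 1.60])),
    "institution_B": (np.array([0, 300, 900, 1000, 1100, 2000]),
                      np.array([0.001, 0.29, 0.92, 1.00, 1.07, 1.55])),
}

rlsp = {name: {t: float(np.interp(ctn, x, y)) for t, ctn in tissue_points.items()}
        for name, (x, y) in curves.items()}
for tissue in tissue_points:
    vals = np.array([rlsp[name][tissue] for name in curves])
    print(f"{tissue:6s} mean RLSP = {vals.mean():.3f}  spread = {vals.max() - vals.min():.3f}")
```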

  7. Investigation of scale effects in the TRF determined by VLBI

    NASA Astrophysics Data System (ADS)

    Wahl, Daniel; Heinkelmann, Robert; Schuh, Harald

    2017-04-01

    The improvement of the International Terrestrial Reference Frame (ITRF) is of great significance for Earth sciences and one of the major tasks in geodesy. The translation, rotation, and scale-factor, as well as their linear rates, are solved in a 14-parameter transformation between the individual frames of each space geodetic technique and the combined frame. In ITRF2008, as well as in the current release ITRF2014, the scale-factor is provided by Very Long Baseline Interferometry (VLBI) and Satellite Laser Ranging (SLR) in equal shares. Since VLBI measures extremely precise group delays that are converted to baseline lengths via the speed of light, a natural constant, VLBI is the most suitable technique for providing the scale. The aim of the current work is to identify possible shortcomings in the VLBI scale contribution to ITRF2008. To develop recommendations for an enhanced estimation, scale effects in the Terrestrial Reference Frame (TRF) determined with VLBI are considered in detail and compared to ITRF2008. In contrast to station coordinates, where the scale is defined by a geocentric position vector pointing from the origin of the reference frame to the station, baselines are not related to the origin: they describe the absolute scale independently of the datum. The more accurately a baseline length, and consequently the scale, is estimated by VLBI, the better the scale contribution to the ITRF. In time series of baseline lengths between different stations, a non-linear periodic signal can clearly be recognized, caused by seasonal effects at the observation sites. Modeling these seasonal effects and subtracting them from the original data significantly improves the repeatability of single baselines. Other effects that strongly influence the scale are jumps in the baseline-length time series, mainly caused by major earthquakes. Co- and post-seismic effects, which are likewise non-linear, can be identified in the data. Modeling this non-linear motion or completely excluding affected stations is another important step towards an improved scale determination. In addition to the investigation of single-baseline repeatabilities, the spatial transformation performed to determine the ITRF2008 parameters is also considered. Since the reliability of the resulting transformation parameters increases with the number of identical points used in the transformation, an approach in which all possible stations are used as control points is understandable. Experiments examining the scale-factor and its spatial behavior between control points in ITRF2008 and VLBI-only coordinates showed that the network geometry also has a large influence on the outcome. If an unevenly distributed network is introduced for the datum configuration, the correlations between the translation parameters and the scale-factor can become remarkably high. Only a homogeneous spatial distribution of participating stations yields a maximally uncorrelated scale-factor that can be interpreted independently of other parameters. In the current release of the ITRF, ITRF2014, non-linear effects in the station coordinate time series are taken into account for the first time. The present work shows that this modification of the ITRF calculation is important and a step in the right direction, and it identifies further improvements that lead to an enhanced scale determination.
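
    The seasonal-signal removal discussed above amounts to fitting an offset, a linear rate, and an annual sine/cosine pair to each baseline-length series and comparing the scatter of the residuals with that of the merely detrended series. The synthetic series below carries a 4 mm annual signal on top of 2 mm noise, purely to illustrate the improvement in repeatability.

```python
# Fit offset + rate + annual terms to a synthetic baseline-length series and
# compare the RMS scatter before and after removing the seasonal signal.
import numpy as np

rng = np.random.default_rng(3)
t = np.sort(rng.uniform(0.0, 8.0, 300))                     # epochs in years
baseline = (6000e3 + 0.004 * np.sin(2 * np.pi * t + 0.7)    # metres
            + 0.002 * rng.normal(size=t.size))

A = np.column_stack([np.ones_like(t), t,                    # offset + linear rate
                     np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
coeffs, *_ = np.linalg.lstsq(A, baseline, rcond=None)
residuals = baseline - A @ coeffs

detrended = baseline - np.polyval(np.polyfit(t, baseline, 1), t)
print("RMS scatter, trend only removed  : %.1f mm" % (1e3 * detrended.std()))
print("RMS scatter, seasonal fit removed: %.1f mm" % (1e3 * residuals.std()))
```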

  8. Multilevel summation with B-spline interpolation for pairwise interactions in molecular dynamics simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hardy, David J., E-mail: dhardy@illinois.edu; Schulten, Klaus; Wolff, Matthew A.

    2016-03-21

    The multilevel summation method for calculating electrostatic interactions in molecular dynamics simulations constructs an approximation to a pairwise interaction kernel and its gradient, which can be evaluated at a cost that scales linearly with the number of atoms. The method smoothly splits the kernel into a sum of partial kernels of increasing range and decreasing variability with the longer-range parts interpolated from grids of increasing coarseness. Multilevel summation is especially appropriate in the context of dynamics and minimization, because it can produce continuous gradients. This article explores the use of B-splines to increase the accuracy of the multilevel summation method (for nonperiodic boundaries) without incurring additional computation other than a preprocessing step (whose cost also scales linearly). To obtain accurate results efficiently involves technical difficulties, which are overcome by a novel preprocessing algorithm. Numerical experiments demonstrate that the resulting method offers substantial improvements in accuracy and that its performance is competitive with an implementation of the fast multipole method in general and markedly better for Hamiltonian formulations of molecular dynamics. The improvement is great enough to establish multilevel summation as a serious contender for calculating pairwise interactions in molecular dynamics simulations. In particular, the method appears to be uniquely capable for molecular dynamics in two situations, nonperiodic boundary conditions and massively parallel computation, where the fast Fourier transform employed in the particle–mesh Ewald method falls short.
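
    The kernel splitting at the heart of multilevel summation can be illustrated for the 1/r kernel: a smooth long-range part suitable for interpolation from coarse grids, plus a short-range remainder that vanishes beyond a cutoff. The even-polynomial softening below is one common choice and is not necessarily the B-spline-based construction of the paper.

```python
# Split 1/r into a smooth long-range part and a compactly supported
# short-range remainder; the two parts sum back to 1/r everywhere.
import numpy as np

def long_range(r, a):
    """Smooth part: equals 1/r for r >= a, finite and slowly varying for r < a."""
    r = np.asarray(r, dtype=float)
    s = r / a
    inside = (1.0 / a) * (1.875 - 1.25 * s**2 + 0.375 * s**4)   # matches 1/r and its slope at r = a
    return np.where(r < a, inside, 1.0 / r)

def short_range(r, a):
    """Remainder 1/r - long_range(r); identically zero for r >= a."""
    r = np.asarray(r, dtype=float)
    return np.where(r < a, 1.0 / r - long_range(r, a), 0.0)

a = 4.0
r = np.array([0.5, 2.0, 3.9, 4.0, 8.0])
print(long_range(r, a) + short_range(r, a))   # recovers 1/r at every r
print(short_range(r, a))                      # zero beyond the cutoff a
```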

  9. Multilevel summation with B-spline interpolation for pairwise interactions in molecular dynamics simulations.

    PubMed

    Hardy, David J; Wolff, Matthew A; Xia, Jianlin; Schulten, Klaus; Skeel, Robert D

    2016-03-21

    The multilevel summation method for calculating electrostatic interactions in molecular dynamics simulations constructs an approximation to a pairwise interaction kernel and its gradient, which can be evaluated at a cost that scales linearly with the number of atoms. The method smoothly splits the kernel into a sum of partial kernels of increasing range and decreasing variability with the longer-range parts interpolated from grids of increasing coarseness. Multilevel summation is especially appropriate in the context of dynamics and minimization, because it can produce continuous gradients. This article explores the use of B-splines to increase the accuracy of the multilevel summation method (for nonperiodic boundaries) without incurring additional computation other than a preprocessing step (whose cost also scales linearly). To obtain accurate results efficiently involves technical difficulties, which are overcome by a novel preprocessing algorithm. Numerical experiments demonstrate that the resulting method offers substantial improvements in accuracy and that its performance is competitive with an implementation of the fast multipole method in general and markedly better for Hamiltonian formulations of molecular dynamics. The improvement is great enough to establish multilevel summation as a serious contender for calculating pairwise interactions in molecular dynamics simulations. In particular, the method appears to be uniquely capable for molecular dynamics in two situations, nonperiodic boundary conditions and massively parallel computation, where the fast Fourier transform employed in the particle-mesh Ewald method falls short.

  10. Multilevel summation with B-spline interpolation for pairwise interactions in molecular dynamics simulations

    NASA Astrophysics Data System (ADS)

    Hardy, David J.; Wolff, Matthew A.; Xia, Jianlin; Schulten, Klaus; Skeel, Robert D.

    2016-03-01

    The multilevel summation method for calculating electrostatic interactions in molecular dynamics simulations constructs an approximation to a pairwise interaction kernel and its gradient, which can be evaluated at a cost that scales linearly with the number of atoms. The method smoothly splits the kernel into a sum of partial kernels of increasing range and decreasing variability with the longer-range parts interpolated from grids of increasing coarseness. Multilevel summation is especially appropriate in the context of dynamics and minimization, because it can produce continuous gradients. This article explores the use of B-splines to increase the accuracy of the multilevel summation method (for nonperiodic boundaries) without incurring additional computation other than a preprocessing step (whose cost also scales linearly). To obtain accurate results efficiently involves technical difficulties, which are overcome by a novel preprocessing algorithm. Numerical experiments demonstrate that the resulting method offers substantial improvements in accuracy and that its performance is competitive with an implementation of the fast multipole method in general and markedly better for Hamiltonian formulations of molecular dynamics. The improvement is great enough to establish multilevel summation as a serious contender for calculating pairwise interactions in molecular dynamics simulations. In particular, the method appears to be uniquely capable for molecular dynamics in two situations, nonperiodic boundary conditions and massively parallel computation, where the fast Fourier transform employed in the particle-mesh Ewald method falls short.

  11. Large-Scale Cubic-Scaling Random Phase Approximation Correlation Energy Calculations Using a Gaussian Basis.

    PubMed

    Wilhelm, Jan; Seewald, Patrick; Del Ben, Mauro; Hutter, Jürg

    2016-12-13

    We present an algorithm for computing the correlation energy in the random phase approximation (RPA) in a Gaussian basis requiring [Formula: see text] operations and [Formula: see text] memory. The method is based on the resolution of the identity (RI) with the overlap metric, a reformulation of RI-RPA in the Gaussian basis, imaginary time, and imaginary frequency integration techniques, and the use of sparse linear algebra. Additional memory reduction without extra computations can be achieved by an iterative scheme that overcomes the memory bottleneck of canonical RPA implementations. We report a massively parallel implementation that is the key for the application to large systems. Finally, cubic-scaling RPA is applied to a thousand water molecules using a correlation-consistent triple-ζ quality basis.

  12. Synthesis, spectroscopic characterization and quantum chemical computational studies of (S)-N-benzyl-1-phenyl-5-(pyridin-2-yl)-pent-4-yn-2-amine

    NASA Astrophysics Data System (ADS)

    Kose, Etem; Atac, Ahmet; Karabacak, Mehmet; Karaca, Caglar; Eskici, Mustafa; Karanfil, Abdullah

    2012-11-01

    The synthesis and characterization of a novel compound, (S)-N-benzyl-1-phenyl-5-(pyridin-2-yl)-pent-4-yn-2-amine (abbreviated as BPPPYA), is presented in this study. The spectroscopic properties of the compound were investigated by FT-IR, NMR and UV spectroscopy, both experimentally and theoretically. The molecular geometry and vibrational frequencies of BPPPYA in the ground state were calculated using the density functional theory (DFT) B3LYP method with the 6-311++G(d,p) basis set. The geometry of BPPPYA was fully optimized, vibrational spectra were calculated, and fundamental vibrations were assigned on the basis of the total energy distribution (TED) of the vibrational modes, calculated with the scaled quantum mechanics (SQM) method and the PQS program. The energies and oscillator strengths calculated by time-dependent density functional theory (TD-DFT) and the CIS approach are consistent with the experimental findings. Total and partial density of states (TDOS and PDOS) and overlap population density of states (COOP or OPDOS) diagrams were also presented and analyzed. The theoretical NMR chemical shifts (1H and 13C) agree with the experimentally measured values. The dipole moment, linear polarizability and first hyperpolarizability values were also computed. The linear polarizabilities and first hyperpolarizabilities of the studied molecule indicate that the compound is a good candidate for nonlinear optical materials. The calculated vibrational wavenumbers, absorption wavelengths and chemical shifts show good agreement with the experimental results.

  13. First principles electron-correlated calculations of optical absorption in magnesium clusters★

    NASA Astrophysics Data System (ADS)

    Shinde, Ravindra; Shukla, Alok

    2017-11-01

    In this paper, we report large-scale configuration interaction (CI) calculations of the linear optical absorption spectra of various isomers of magnesium clusters Mgn (n = 2-5), corresponding to valence transitions. Geometry optimization of several low-lying isomers of each cluster was carried out using the coupled-cluster singles doubles (CCSD) approach, and these geometries were subsequently employed to perform ground and excited state calculations using either the full-CI (FCI) or the multi-reference singles-doubles configuration interaction (MRSDCI) approach, within the frozen-core approximation. Our calculated photoabsorption spectrum of the magnesium dimer (Mg2) is in excellent agreement with experiment, both for peak positions and intensities. Owing to the thorough inclusion of electron-correlation effects, these results can serve as benchmarks against which future experiments, as well as calculations performed using other theoretical approaches, can be tested. Supplementary material in the form of one PDF file is available from the journal web page at https://doi.org/10.1140/epjd/e2017-80356-6.

  14. The contour-buildup algorithm to calculate the analytical molecular surface.

    PubMed

    Totrov, M; Abagyan, R

    1996-01-01

    A new algorithm is presented to calculate the analytical molecular surface defined as a smooth envelope traced out by the surface of a probe sphere rolled over the molecule. The core of the algorithm is the sequential build-up of multi-arc contours on the van der Waals spheres. This algorithm yields a substantial reduction in both the memory and time requirements of surface calculations. Further, the contour-buildup principle is intrinsically "local", which makes calculations of partial molecular surfaces even more efficient. Additionally, the algorithm is equally applicable not only to convex patches but also to concave triangular patches, which may have complex multiple intersections. The algorithm permits the rigorous calculation of the full analytical molecular surface for a 100-residue protein in about 2 seconds on an SGI Indigo with an R4400 processor at 150 MHz, with the performance scaling almost linearly with the protein size. The contour-buildup algorithm is faster than the original Connolly algorithm by an order of magnitude.

  15. Local elasticity map and plasticity in a model Lennard-Jones glass.

    PubMed

    Tsamados, Michel; Tanguy, Anne; Goldenberg, Chay; Barrat, Jean-Louis

    2009-08-01

    In this work we calculate the local elastic moduli in a weakly polydispersed two-dimensional Lennard-Jones glass undergoing a quasistatic shear deformation at zero temperature. The numerical method uses coarse-grained microscopic expressions for the strain, displacement, and stress fields. This method allows us to calculate the local elasticity tensor and to quantify the deviation from linear elasticity (local Hooke's law) at different coarse-graining scales. From the results a clear picture emerges of an amorphous material with strongly spatially heterogeneous elastic moduli that simultaneously satisfies Hooke's law at scales larger than a characteristic length scale of the order of five interatomic distances. At this scale, the glass appears as a composite material composed of a rigid scaffolding and of soft zones. Only recently calculated in nonhomogeneous materials, the local elastic structure plays a crucial role in the elastoplastic response of the amorphous material. For a small macroscopic shear strain, the structures associated with the nonaffine displacement field appear directly related to the spatial structure of the elastic moduli. Moreover, for a larger macroscopic shear strain we show that zones of low shear modulus concentrate most of the strain in the form of plastic rearrangements. The spatiotemporal evolution of this local elasticity map and its connection with long term dynamical heterogeneity as well as with the plasticity in the material is quantified. The possibility to use this local parameter as a predictor of subsequent local plastic activity is also discussed.

  16. Rapid solution of large-scale systems of equations

    NASA Technical Reports Server (NTRS)

    Storaasli, Olaf O.

    1994-01-01

    The analysis and design of complex aerospace structures requires the rapid solution of large systems of linear and nonlinear equations, eigenvalue extraction for buckling, vibration and flutter modes, structural optimization, and design sensitivity calculation. Computers with multiple processors and vector capabilities can offer substantial computational advantages over traditional scalar computers for these analyses. These computers fall into two categories: shared memory computers and distributed memory computers. This presentation covers general-purpose, highly efficient algorithms for the generation/assembly of element matrices, solution of systems of linear and nonlinear equations, eigenvalue and design sensitivity analysis, and optimization. All algorithms are coded in FORTRAN for shared memory computers and many are adapted to distributed memory computers. The capability and numerical performance of these algorithms will be addressed.

  17. Linear-scaling time-dependent density-functional theory beyond the Tamm-Dancoff approximation: Obtaining efficiency and accuracy with in situ optimised local orbitals.

    PubMed

    Zuehlsdorff, T J; Hine, N D M; Payne, M C; Haynes, P D

    2015-11-28

    We present a solution of the full time-dependent density-functional theory (TDDFT) eigenvalue equation in the linear response formalism exhibiting a linear-scaling computational complexity with system size, without relying on the simplifying Tamm-Dancoff approximation (TDA). The implementation relies on representing the occupied and unoccupied subspaces with two different sets of in situ optimised localised functions, yielding a very compact and efficient representation of the transition density matrix of the excitation with the accuracy associated with a systematic basis set. The TDDFT eigenvalue equation is solved using a preconditioned conjugate gradient algorithm that is very memory-efficient. The algorithm is validated on a small test molecule and a good agreement with results obtained from standard quantum chemistry packages is found, with the preconditioner yielding a significant improvement in convergence rates. The method developed in this work is then used to reproduce experimental results of the absorption spectrum of bacteriochlorophyll in an organic solvent, where it is demonstrated that the TDA fails to reproduce the main features of the low energy spectrum, while the full TDDFT equation yields results in good qualitative agreement with experimental data. Furthermore, the need for explicitly including parts of the solvent into the TDDFT calculations is highlighted, making the treatment of large system sizes necessary that are well within reach of the capabilities of the algorithm introduced here. Finally, the linear-scaling properties of the algorithm are demonstrated by computing the lowest excitation energy of bacteriochlorophyll in solution. The largest systems considered in this work are of the same order of magnitude as a variety of widely studied pigment-protein complexes, opening up the possibility of studying their properties without having to resort to any semiclassical approximations to parts of the protein environment.

  18. Linear and nonlinear spectroscopy from quantum master equations.

    PubMed

    Fetherolf, Jonathan H; Berkelbach, Timothy C

    2017-12-28

    We investigate the accuracy of the second-order time-convolutionless (TCL2) quantum master equation for the calculation of linear and nonlinear spectroscopies of multichromophore systems. We show that even for systems with non-adiabatic coupling, the TCL2 master equation predicts linear absorption spectra that are accurate over an extremely broad range of parameters and well beyond what would be expected based on the perturbative nature of the approach; non-equilibrium population dynamics calculated with TCL2 for identical parameters are significantly less accurate. For third-order (two-dimensional) spectroscopy, the importance of population dynamics and the violation of the so-called quantum regression theorem degrade the accuracy of TCL2 dynamics. To correct these failures, we combine the TCL2 approach with a classical ensemble sampling of slow microscopic bath degrees of freedom, leading to an efficient hybrid quantum-classical scheme that displays excellent accuracy over a wide range of parameters. In the spectroscopic setting, the success of such a hybrid scheme can be understood through its separate treatment of homogeneous and inhomogeneous broadening. Importantly, the presented approach has the computational scaling of TCL2, with the modest addition of an embarrassingly parallel prefactor associated with ensemble sampling. The presented approach can be understood as a generalized inhomogeneous cumulant expansion technique, capable of treating multilevel systems with non-adiabatic dynamics.

  19. Linear and nonlinear spectroscopy from quantum master equations

    NASA Astrophysics Data System (ADS)

    Fetherolf, Jonathan H.; Berkelbach, Timothy C.

    2017-12-01

    We investigate the accuracy of the second-order time-convolutionless (TCL2) quantum master equation for the calculation of linear and nonlinear spectroscopies of multichromophore systems. We show that even for systems with non-adiabatic coupling, the TCL2 master equation predicts linear absorption spectra that are accurate over an extremely broad range of parameters and well beyond what would be expected based on the perturbative nature of the approach; non-equilibrium population dynamics calculated with TCL2 for identical parameters are significantly less accurate. For third-order (two-dimensional) spectroscopy, the importance of population dynamics and the violation of the so-called quantum regression theorem degrade the accuracy of TCL2 dynamics. To correct these failures, we combine the TCL2 approach with a classical ensemble sampling of slow microscopic bath degrees of freedom, leading to an efficient hybrid quantum-classical scheme that displays excellent accuracy over a wide range of parameters. In the spectroscopic setting, the success of such a hybrid scheme can be understood through its separate treatment of homogeneous and inhomogeneous broadening. Importantly, the presented approach has the computational scaling of TCL2, with the modest addition of an embarrassingly parallel prefactor associated with ensemble sampling. The presented approach can be understood as a generalized inhomogeneous cumulant expansion technique, capable of treating multilevel systems with non-adiabatic dynamics.

  20. Accurate potential energy surface for the 1(2)A' state of NH(2): scaling of external correlation versus extrapolation to the complete basis set limit.

    PubMed

    Li, Y Q; Varandas, A J C

    2010-09-16

    An accurate single-sheeted double many-body expansion potential energy surface is reported for the title system, which is suitable for dynamics and kinetics studies of the reactions N(2D) + H2(X1Σg+) → NH(a1Δ) + H(2S) and their isotopomeric variants. It is obtained by fitting ab initio energies calculated at the multireference configuration interaction level with the aug-cc-pVQZ basis set, after slightly correcting semiempirically the dynamical correlation using the double many-body expansion-scaled external correlation method. The function so obtained is compared in detail with a potential energy surface of the same family obtained by extrapolating the calculated raw energies to the complete basis set limit. The topographical features of the novel global potential energy surface are examined in detail and found to be in general good agreement with those calculated directly from the raw ab initio energies, as well as previous calculations available in the literature. The novel function has been built so as to become degenerate at linear geometries with the ground-state potential energy surface of A'' symmetry reported by our group, where both form a Renner-Teller pair.

  1. Power Laws, Scale Invariance and the Generalized Frobenius Series:

    NASA Astrophysics Data System (ADS)

    Visser, Matt; Yunes, Nicolas

    We present a self-contained formalism for calculating the background solution, the linearized solutions and a class of generalized Frobenius-like solutions to a system of scale-invariant differential equations. We first cast the scale-invariant model into its equidimensional and autonomous forms, find its fixed points, and then obtain power-law background solutions. After linearizing about these fixed points, we find a second linearized solution, which provides a distinct collection of power laws characterizing the deviations from the fixed point. We prove that generically there will be a region surrounding the fixed point in which the complete general solution can be represented as a generalized Frobenius-like power series with exponents that are integer multiples of the exponents arising in the linearized problem. While discussions of the linearized system are common, and one can often find a discussion of power-series with integer exponents, power series with irrational (indeed complex) exponents are much rarer in the extant literature. The Frobenius-like series we encounter can be viewed as a variant of the rarely-discussed Liapunov expansion theorem (not to be confused with the more commonly encountered Liapunov functions and Liapunov exponents). As specific examples we apply these ideas to Newtonian and relativistic isothermal stars and construct two separate power series with overlapping radii of convergence. The second of these power series solutions represents an expansion around "spatial infinity," and in realistic models it is this second power series that gives information about the stellar core, and the damped oscillations in core mass and core radius as the central pressure goes to infinity. The power-series solutions we obtain extend classical results; as exemplified for instance by the work of Lane, Emden, and Chandrasekhar in the Newtonian case, and that of Harrison, Thorne, Wakano, and Wheeler in the relativistic case. We also indicate how to extend these ideas to situations where fixed points may not exist — either due to "monotone" flow or due to the presence of limit cycles. Monotone flow generically leads to logarithmic deviations from scaling, while limit cycles generally lead to discrete self-similar solutions.

  2. A model for tides and currents in the English Channel and southern North Sea

    NASA Astrophysics Data System (ADS)

    Walters, Roy. A.

    The amplitude and phase of 11 tidal constituents for the English Channel and southern North Sea are calculated using a frequency domain, finite element model. The governing equations — the shallow water equations — are modified such that sea level is calculated using an elliptic equation of the Helmholtz type, followed by a back-calculation of velocity using the primitive momentum equations. Triangular elements with linear basis functions are used. The modified form of the governing equations provides stable solutions with little numerical noise. In this field-scale test problem, the model was able to produce the details of the structure of 11 tidal constituents including O1, K1, M2, S2, N2, K2, M4, MS4, MN4, M6, and 2MS6.

  3. Development of social capital scale from a national longitudinal survey and examination of its validity and reliability.

    PubMed

    Aiba, Miyuki; Tachikawa, Hirokazu; Nakamine, Shin; Takahashi, Sho; Noguchi, Haruko; Takahashi, Hideto; Tamiya, Nanako

    2017-01-01

    Objectives: Social capital consists of two subordinate concepts: the first is whether it is structural formal, structural informal, or cognitive; the second is whether it is bonding or bridging. This study was designed to develop a social capital scale using samples from a national longitudinal survey and to evaluate the validity and test-retest reliability of the scale. Methods: Data were collected from a nationwide panel survey, the "Longitudinal Survey of Middle-aged and Elderly Persons." Individuals aged 50-59 years living in Japan were selected by stratified random sampling in the first wave conducted in 2005. The first (n=34,240) and second (n=32,285) sets of data were used for Phase 1, and the sixth (n=26,220) and seventh (n=25,321) sets of data were used for Phase 2. With regard to the first subordinate concept, the occurrence of six selected social activities with "neighborhood association" and "NPOs or Public Interest Corporations" was calculated as the structural formal index, and the occurrence of six selected social activities with "families or friends" and "colleagues" was calculated as the structural informal index. Moreover, satisfaction with social activities (community activities, support for the elderly, and others) was used as the cognitive index. With regard to the second subordinate concept, the bonding index was calculated using "families or friends," "colleagues," and "neighborhood association," and the bridging index was calculated using "NPOs or Public Interest Corporations." The diagnoses of heart disease, stroke, and cancer (yes=1, no=0) and self-rated health (1 item, 6-point scale) were used as variables for determining validity. Results: We categorized the social capital indices into subordinate concepts based on the construct of social capital defined by professional agreement to assess content validity. The results showed that this survey questionnaire was constructed using items that assessed all the subordinate concepts. To examine convergent validity, hierarchical linear modeling was used to assess the relationship between social capital and health (diagnoses of physical disease and self-rated health); all social capital indices had significant positive effects on self-rated health at the individual or group level. However, the diagnosis of a stroke was negatively influenced by the cognitive and formal social capital indices at the group level, whereas heart disease and cancer were not significantly affected. Multilevel correlation analyses of Phase 1 (the first and second waves) and Phase 2 (the sixth and seventh waves) were conducted to assess test-retest reliability, which indicated correlation coefficients of 0.392 to 0.999. Conclusion: The findings of this study indicated the content validity of the scale that was developed from the national longitudinal survey. Moreover, the results of hierarchical linear modeling confirmed the partial convergent validity of the scale. Furthermore, multilevel correlation analyses demonstrated adequate test-retest reliability of the scale at the group level.

  4. Generating log-normal mock catalog of galaxies in redshift space

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Agrawal, Aniket; Makiya, Ryu; Saito, Shun

    We present a public code to generate a mock galaxy catalog in redshift space assuming a log-normal probability density function (PDF) of the galaxy and matter density fields. We draw galaxies by Poisson-sampling the log-normal field, and calculate the velocity field from the linearised continuity equation of matter fields, assuming zero vorticity. This procedure yields a PDF of the pairwise velocity fields that is qualitatively similar to that of N-body simulations. We check the fidelity of the catalog, showing that the measured two-point correlation function and power spectrum in real space agree precisely with the input. We find that a linear bias relation in the power spectrum does not guarantee a linear bias relation in the density contrasts, leading to a cross-correlation coefficient of matter and galaxies deviating from unity on small scales. We also find that linearising the Jacobian of the real-to-redshift space mapping provides a poor model for the two-point statistics in redshift space. That is, non-linear redshift-space distortion is dominated by non-linearity in the Jacobian. The power spectrum in redshift space shows a damping on small scales that is qualitatively similar to that of the well-known Fingers-of-God (FoG) effect due to random velocities, except that the log-normal mock does not include random velocities. This damping is a consequence of non-linearity in the Jacobian, and thus attributing the damping of the power spectrum solely to FoG, as commonly done in the literature, is misleading.
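
    A bare-bones, real-space-only sketch of the log-normal mock construction: a Gaussian random field is exponentiated into a log-normal overdensity and galaxies are drawn by Poisson-sampling each cell. Grid size, smoothing scale, field variance, and mean galaxy density are arbitrary illustrative choices; the velocity field and redshift-space mapping of the actual code are omitted.

```python
# Generate a smoothed Gaussian field, map it to a log-normal overdensity with
# (approximately) zero mean, and Poisson-sample galaxy counts per cell.
import numpy as np

rng = np.random.default_rng(4)
n, nbar_per_cell = 64, 2.0

white = rng.normal(size=(n, n, n))
k = np.fft.fftfreq(n)
kx, ky, kz = np.meshgrid(k, k, k, indexing="ij", sparse=True)
filt = np.exp(-(kx**2 + ky**2 + kz**2) / (2 * 0.05**2))      # smoothed, red spectrum
g = np.fft.ifftn(np.fft.fftn(white) * filt).real
g *= 0.5 / g.std()                                           # set the field's std. dev.

delta = np.exp(g - 0.5 * g.var()) - 1.0                      # log-normal overdensity
counts = rng.poisson(nbar_per_cell * (1.0 + delta))          # galaxies per cell

print("mean overdensity:", round(float(delta.mean()), 4))    # close to zero
print("total galaxies:  ", int(counts.sum()))
```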

  5. Transport Coefficients in weakly compressible turbulence

    NASA Technical Reports Server (NTRS)

    Rubinstein, Robert; Erlebacher, Gordon

    1996-01-01

    A theory of transport coefficients in weakly compressible turbulence is derived by applying Yoshizawa's two-scale direct interaction approximation to the compressible equations of motion linearized about a state of incompressible turbulence. The result is a generalization of the eddy viscosity representation of incompressible turbulence. In addition to the usual incompressible eddy viscosity, the calculation generates eddy diffusivities for entropy and pressure, and an effective bulk viscosity acting on the mean flow. The compressible fluctuations also generate an effective turbulent mean pressure and corrections to the speed of sound. Finally, a prediction unique to Yoshizawa's two-scale approximation is that terms containing gradients of incompressible turbulence quantities also appear in the mean flow equations. The form these terms take is described.

  6. Approximating natural connectivity of scale-free networks based on largest eigenvalue

    NASA Astrophysics Data System (ADS)

    Tan, S.-Y.; Wu, J.; Li, M.-J.; Lu, X.

    2016-06-01

    It has recently been proposed that natural connectivity can be used to efficiently characterize the robustness of complex networks. The natural connectivity has an intuitive physical meaning and a simple mathematical formulation: it corresponds to an average eigenvalue calculated from the graph spectrum. However, for scale-free networks, a model class close to many widely occurring real-world systems, the spectrum is difficult to obtain analytically. In this article, we investigate the approximation of natural connectivity based on the largest eigenvalue in both random and correlated scale-free networks. It is demonstrated that the natural connectivity of scale-free networks can be dominated by the largest eigenvalue, which can be expressed asymptotically and analytically to approximate natural connectivity with small errors. We then show that the natural connectivity of random scale-free networks increases linearly with the average degree for a given scaling exponent and decreases monotonically with the scaling exponent for a given average degree. Moreover, it is found that, given the degree distribution, the more assortative a scale-free network is, the more robust it is. Experiments on real networks validate our methods and results.
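
    The approximation studied above is easy to check numerically: the natural connectivity ln((1/N) Σ_i exp(λ_i)) is compared with the largest-eigenvalue estimate λ_1 − ln N. The sketch uses networkx's Barabási-Albert generator as a convenient stand-in for a scale-free network.

```python
# Compare exact natural connectivity with its largest-eigenvalue approximation
# on a preferential-attachment (Barabasi-Albert) graph.
import numpy as np
import networkx as nx

G = nx.barabasi_albert_graph(n=500, m=4, seed=0)
eigvals = np.linalg.eigvalsh(nx.to_numpy_array(G))

N = G.number_of_nodes()
exact = np.log(np.mean(np.exp(eigvals)))
approx = eigvals.max() - np.log(N)
print(f"exact natural connectivity  : {exact:.4f}")
print(f"largest-eigenvalue estimate : {approx:.4f}")
```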

  7. Reliability measures in item response theory: manifest versus latent correlation functions.

    PubMed

    Milanzi, Elasma; Molenberghs, Geert; Alonso, Ariel; Verbeke, Geert; De Boeck, Paul

    2015-02-01

    For item response theory (IRT) models, which belong to the class of generalized linear or non-linear mixed models, reliability at the scale of observed scores (i.e., manifest correlation) is more difficult to calculate than latent correlation based reliability, but usually of greater scientific interest. This is not least because it cannot be calculated explicitly when the logit link is used in conjunction with normal random effects. As such, approximations such as Fisher's information coefficient, Cronbach's α, or the latent correlation are calculated, allegedly because it is easy to do so. Cronbach's α has well-known and serious drawbacks, Fisher's information is not meaningful under certain circumstances, and there is an important but often overlooked difference between latent and manifest correlations. Here, manifest correlation refers to correlation between observed scores, while latent correlation refers to correlation between scores at the latent (e.g., logit or probit) scale. Thus, using one in place of the other can lead to erroneous conclusions. Taylor series based reliability measures, which are based on manifest correlation functions, are derived and a careful comparison of reliability measures based on latent correlations, Fisher's information, and exact reliability is carried out. The latent correlations are virtually always considerably higher than their manifest counterparts, Fisher's information measure shows no coherent behaviour (it is even negative in some cases), while the newly introduced Taylor series based approximations reflect the exact reliability very closely. Comparisons among the various types of correlations, for various IRT models, are made using algebraic expressions, Monte Carlo simulations, and data analysis. Given the light computational burden and the performance of Taylor series based reliability measures, their use is recommended. © 2014 The British Psychological Society.

  8. Effect of ploidy on scale-cover pattern in linear ornamental (koi) common carp Cyprinus carpio.

    PubMed

    Gomelsky, B; Schneider, K J; Glennon, R P; Plouffe, D A

    2012-09-01

    The effect of ploidy on scale-cover pattern in linear ornamental (koi) common carp Cyprinus carpio was investigated. To obtain diploid and triploid linear fish, eggs taken from a leather C. carpio female (genotype ssNn) and sperm taken from a scaled C. carpio male (genotype SSnn) were used for the production of control (no shock) and heat-shocked progeny. In heat-shocked progeny, the 2 min heat shock (40° C) was applied 6 min after insemination. Diploid linear fish (genotype SsNn) demonstrated a scale-cover pattern typical for this category with one even row of scales along lateral line and few scales located near operculum and at bases of fins. The majority (97%) of triploid linear fish (genotype SssNnn) exhibited non-typical scale patterns which were characterized by the appearance of additional scales on the body. The extent of additional scales in triploid linear fish was variable; some fish had large scales, which covered almost the entire body. Apparently, the observed difference in scale-cover pattern between triploid and diploid linear fish was caused by different phenotypic expression of gene N/n. Due to incomplete dominance of allele N, triploids Nnn demonstrate less profound reduction of scale cover compared with diploids Nn. © 2012 The Authors. Journal of Fish Biology © 2012 The Fisheries Society of the British Isles.

  9. Fast Combinatorial Algorithm for the Solution of Linearly Constrained Least Squares Problems

    DOEpatents

    Van Benthem, Mark H.; Keenan, Michael R.

    2008-11-11

    A fast combinatorial algorithm can significantly reduce the computational burden when solving general equality and inequality constrained least squares problems with large numbers of observation vectors. The combinatorial algorithm provides a mathematically rigorous solution and operates at great speed by reorganizing the calculations to take advantage of the combinatorial nature of the problems to be solved. The combinatorial algorithm exploits the structure that exists in large-scale problems in order to minimize the number of arithmetic operations required to obtain a solution.
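
    The record above describes a patented combinatorial algorithm; the sketch below only shows the baseline problem it accelerates (column-wise non-negative least squares over many observation vectors, here via SciPy) and notes in a comment where the combinatorial reorganization saves work. Matrix sizes are arbitrary.

        # Baseline sketch (not the patented algorithm): min ||A X - B||_F subject to X >= 0,
        # solved one observation vector at a time.
        import numpy as np
        from scipy.optimize import nnls

        rng = np.random.default_rng(1)
        A = rng.random((50, 5))       # model matrix
        B = rng.random((50, 1000))    # many observation vectors

        X = np.column_stack([nnls(A, B[:, j])[0] for j in range(B.shape[1])])

        # The combinatorial speed-up: observation vectors whose solutions share the same
        # passive (unconstrained) variable set can reuse one factorization of the reduced
        # normal equations, so it is computed once per unique set rather than once per vector.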

  10. A random distribution reacting mixing layer model

    NASA Technical Reports Server (NTRS)

    Jones, Richard A.; Marek, C. John; Myrabo, Leik N.; Nagamatsu, Henry T.

    1994-01-01

A methodology for the simulation of molecular mixing and the resulting velocity and temperature fields has been developed. The ideas are applied to the flow conditions present in the NASA Lewis Research Center Planar Reacting Shear Layer (PRSL) facility, and the results are compared to experimental data. A Gaussian transverse turbulent velocity distribution is used in conjunction with a linearly increasing time scale to describe the mixing of different regions of the flow. Equilibrium reaction calculations are then performed on the mix to arrive at a new species composition and temperature. Velocities are determined through summation of momentum contributions. The analysis indicates a combustion efficiency of the order of 80 percent for the reacting mixing layer, and a turbulent Schmidt number of 2/3. The success of the model is attributed to the simulation of large-scale transport of fluid. The favorable comparison shows that a relatively quick and simple PC calculation is capable of simulating the basic flow structure in the reacting and nonreacting shear layers present in the facility, given basic assumptions about turbulence properties.

  11. Structural response calculations for a reverse ballistics test of an earth penetrator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alves, D.F.; Goudreau, G.L.

    1976-08-01

A dynamic response calculation has been performed on a half-scale earth penetrator to be tested in a reverse ballistics test in Aug. 1976. In this test a 14 in. dia sandstone target is fired at the EP at 1800 ft/sec at normal impact. Basically two types of calculations were made. The first utilized an axisymmetric finite element code, DTVIS2, in the dynamic mode and with materials having linear elastic properties. CRT's radial and axial force histories were smoothed to eliminate the grid encounter frequency and applied to the nodal points along the nose of the penetrator. Given these inputs, DTVIS2 then calculated the internal dynamic response. Secondly, SAP4, a structural analysis code, is utilized to calculate axial frequencies and mode shapes of the structure. A special one-dimensional display facilitates interpretation of the mode shapes. DTVIS2 and SAP4 use a common mesh description. Special considerations in the calculation are the assessment of the effect of gaps and preload and the internal axial sliding of components.

  12. SU-F-T-428: An Optimization-Based Commissioning Tool for Finite Size Pencil Beam Dose Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Y; Tian, Z; Song, T

Purpose: Finite size pencil beam (FSPB) algorithms are commonly used to pre-calculate the beamlet dose distribution for IMRT treatment planning. FSPB commissioning, which usually requires fine tuning of the FSPB kernel parameters, is crucial to the dose calculation accuracy and hence the plan quality. Yet due to the large number of beamlets, FSPB commissioning can be very tedious. This abstract reports an optimization-based FSPB commissioning tool we have developed in MATLAB to facilitate the commissioning. Methods: A FSPB dose kernel generally contains two types of parameters: the profile parameters determining the dose kernel shape, and 2D scaling factors accounting for the longitudinal and off-axis corrections. The former were fitted to the penumbra of a reference broad beam's dose profile with the Levenberg-Marquardt algorithm. Since the dose distribution of a broad beam is simply a linear superposition of the dose kernels of each beamlet, calculated with the fitted profile parameters and scaled using the scaling factors, these factors could be determined by solving an optimization problem which minimizes the discrepancies between the calculated dose of broad beams and the reference dose. Results: We have commissioned a FSPB algorithm for three linac photon beams (6MV, 15MV and 6MV FFF). Doses for four field sizes (6×6 cm2, 10×10 cm2, 15×15 cm2 and 20×20 cm2) were calculated and compared with the reference dose exported from the Eclipse TPS. For depth dose curves, the differences are less than 1% of maximum dose beyond the depth of maximum dose for most cases. For lateral dose profiles, the differences are less than 2% of central dose in inner-beam regions. The differences in the output factors are within 1% for all three beams. Conclusion: We have developed an optimization-based commissioning tool for FSPB algorithms to facilitate the commissioning, providing sufficient accuracy of beamlet dose calculation for IMRT optimization.
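
    A much-simplified 1D sketch of the two-stage idea described in this record: fit the kernel shape parameter to a broad-beam profile with a Levenberg-Marquardt-type solver, then recover the scaling factors from a linear least-squares solve. The Gaussian kernel, beamlet spacing, and reference profile below are stand-ins, not the actual FSPB model or clinical data.

        import numpy as np
        from scipy.optimize import least_squares

        x = np.linspace(-10.0, 10.0, 201)   # off-axis position (cm)
        reference_profile = 0.5 * (np.tanh((x + 5) / 0.4) - np.tanh((x - 5) / 0.4))  # stand-in measured broad beam

        def kernel(pos, center, sigma):
            """Single-beamlet lateral dose kernel (Gaussian stand-in for the FSPB shape)."""
            return np.exp(-0.5 * ((pos - center) / sigma) ** 2)

        beamlet_centers = np.arange(-4.75, 5.0, 0.5)   # beamlets forming a 10 cm field

        # Stage 1: fit the shape (profile) parameter against the broad-beam profile.
        def residual(params):
            sigma = params[0]
            broad = kernel(x[:, None], beamlet_centers, sigma).sum(axis=1)
            return broad / broad.max() - reference_profile

        sigma_fit = least_squares(residual, x0=[0.3]).x[0]

        # Stage 2: with the shape fixed, the scaling factors enter linearly, so they follow
        # from an ordinary linear least-squares solve against the reference dose.
        D = kernel(x[:, None], beamlet_centers, sigma_fit)   # beamlet dose matrix
        scaling, *_ = np.linalg.lstsq(D, reference_profile, rcond=None)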

  13. A "Stepping Stone" Approach for Obtaining Quantum Free Energies of Hydration.

    PubMed

    Sampson, Chris; Fox, Thomas; Tautermann, Christofer S; Woods, Christopher; Skylaris, Chris-Kriton

    2015-06-11

    We present a method which uses DFT (quantum, QM) calculations to improve free energies of binding computed with classical force fields (classical, MM). To overcome the incomplete overlap of configurational spaces between MM and QM, we use a hybrid Monte Carlo approach to generate quickly correct ensembles of structures of intermediate states between a MM and a QM/MM description, hence taking into account a great fraction of the electronic polarization of the quantum system, while being able to use thermodynamic integration to compute the free energy of transition between the MM and QM/MM. Then, we perform a final transition from QM/MM to full QM using a one-step free energy perturbation approach. By using QM/MM as a stepping stone toward the full QM description, we find very small convergence errors (<1 kJ/mol) in the transition to full QM. We apply this method to compute hydration free energies, and we obtain consistent improvements over the MM values for all molecules we used in this study. This approach requires large-scale DFT calculations as the full QM systems involved the ligands and all waters in their simulation cells, so the linear-scaling DFT code ONETEP was used for these calculations.

  14. Local unitary transformation method for large-scale two-component relativistic calculations. II. Extension to two-electron Coulomb interaction.

    PubMed

    Seino, Junji; Nakai, Hiromi

    2012-10-14

The local unitary transformation (LUT) scheme at the spin-free infinite-order Douglas-Kroll-Hess (IODKH) level [J. Seino and H. Nakai, J. Chem. Phys. 136, 244102 (2012)], which is based on the locality of relativistic effects, has been extended to a four-component Dirac-Coulomb Hamiltonian. In the previous study, the LUT scheme was applied only to a one-particle IODKH Hamiltonian with non-relativistic two-electron Coulomb interaction, termed IODKH/C. The current study extends the LUT scheme to the two-particle IODKH Hamiltonian as well as the one-particle one, termed IODKH/IODKH, which has been a real bottleneck in numerical calculation. The LUT scheme with the IODKH/IODKH Hamiltonian was numerically assessed in the diatomic molecules HX and X(2) and in the hydrogen halide molecules (HX)(n) (X = F, Cl, Br, and I). The total Hartree-Fock energies calculated by the LUT method agree well with conventional IODKH/IODKH results. The computational cost of the LUT method is reduced drastically compared with that of the conventional method. In addition, the LUT method achieves linear scaling with respect to system size with a small prefactor.

  15. Polarizabilities and van der Waals C{sub 6} coefficients of fullerenes from an atomistic electrodynamics model: Anomalous scaling with number of carbon atoms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saidi, Wissam A., E-mail: alsaidi@pitt.edu; Norman, Patrick

    2016-07-14

The van der Waals C{sub 6} coefficients of fullerenes are shown to exhibit an anomalous dependence on the number of carbon atoms N such that C{sub 6} ∝ N{sup 2.2} as predicted using state-of-the-art quantum mechanical calculations based on fullerenes with small sizes, and N{sup 2.75} as predicted using a classical-metallic spherical-shell approximation of the fullerenes. We use an atomistic electrodynamics model where each carbon atom is described by a polarizable object to extend the quantum mechanical calculations to larger fullerenes. The parameters of this model are optimized to describe accurately the static and complex polarizabilities of the fullerenes by fitting against accurate ab initio calculations. This model shows that C{sub 6} ∝ N{sup 2.8}, which is supportive of the classical-metallic spherical-shell approximation. Additionally, we show that the anomalous dependence of the polarizability on N is attributed to the electric charge term, while the dipole–dipole term scales almost linearly with the number of carbon atoms.
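
    The reported exponents follow from a simple power-law relation, so the anomalous scaling can be read off a log-log fit; the sketch below shows such a fit on synthetic data constructed to follow the reported C6 ∝ N^2.8 trend (the fullerene sizes and prefactor are illustrative, not the paper's data).

        import numpy as np

        N = np.array([60, 70, 84, 180, 240, 540])    # fullerene sizes (illustrative)
        C6 = 1.0e2 * N ** 2.8                        # stand-in data following the reported trend

        slope, intercept = np.polyfit(np.log(N), np.log(C6), 1)
        print(f"fitted exponent ~ {slope:.2f}")      # recovers ~2.8 for C6 proportional to N^2.8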

  16. Swimming of a linear chain with a cargo in an incompressible viscous fluid with inertia

    NASA Astrophysics Data System (ADS)

    Felderhof, B. U.

    2017-01-01

    An approximation to the added mass matrix of an assembly of spheres is constructed on the basis of potential flow theory for situations where one sphere is much larger than the others. In the approximation, the flow potential near a small sphere is assumed to be dipolar, but near the large sphere it involves all higher order multipoles. The analysis is based on an exact result for the potential of a magnetic dipole in the presence of a superconducting sphere. Subsequently, the approximate added mass hydrodynamic interactions are used in a calculation of the swimming velocity and rate of dissipation of linear chain structures consisting of a number of small spheres and a single large one, with account also of frictional hydrodynamic interactions. The results derived for periodic swimming on the basis of a kinematic approach are compared with the bilinear theory, valid for small amplitude of stroke, and with the numerical solution of the approximate equations of motion. The calculations interpolate over the whole range of scale number between the friction-dominated Stokes limit and the inertia-dominated regime.

  17. Percolation Thresholds in Angular Grain media: Drude Directed Infiltration

    NASA Astrophysics Data System (ADS)

    Priour, Donald

Pores in many realistic systems are not well-delineated channels, but void spaces among grains which are impermeable to charge or fluid flow and which comprise the medium. Sparse grain concentrations lead to permeable systems, while concentrations in excess of a critical density block bulk fluid flow. We calculate percolation thresholds in porous materials made up of randomly placed (and oriented) disks, tetrahedrons, and cubes. To determine if randomly generated finite system samples are permeable, we deploy virtual tracer particles which are scattered (e.g. specularly) by collisions with the impenetrable angular grains. We hasten the rate of exploration (which would otherwise scale as n_coll^(1/2), where n_coll is the number of collisions with grains, if the tracers followed linear trajectories) by considering the tracer particles to be charged in conjunction with a randomly directed uniform electric field. As in the Drude treatment, where a succession of many scattering events leads to a constant drift velocity, tracer displacements on average grow linearly in n_coll. By averaging over many disorder realizations for a variety of system sizes, we calculate the percolation threshold and the critical exponent which characterize the phase transition.

  18. Multidecadal Variability in Surface Albedo Feedback Across CMIP5 Models

    NASA Astrophysics Data System (ADS)

    Schneider, Adam; Flanner, Mark; Perket, Justin

    2018-02-01

    Previous studies quantify surface albedo feedback (SAF) in climate change, but few assess its variability on decadal time scales. Using the Coupled Model Intercomparison Project Version 5 (CMIP5) multimodel ensemble data set, we calculate time evolving SAF in multiple decades from surface albedo and temperature linear regressions. Results are meaningful when temperature change exceeds 0.5 K. Decadal-scale SAF is strongly correlated with century-scale SAF during the 21st century. Throughout the 21st century, multimodel ensemble mean SAF increases from 0.37 to 0.42 W m-2 K-1. These results suggest that models' mean decadal-scale SAFs are good estimates of their century-scale SAFs if there is at least 0.5 K temperature change. Persistent SAF into the late 21st century indicates ongoing capacity for Arctic albedo decline despite there being less sea ice. If the CMIP5 multimodel ensemble results are representative of the Earth, we cannot expect decreasing Arctic sea ice extent to suppress SAF in the 21st century.
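
    The SAF estimate described above amounts to a linear regression of surface albedo on surface temperature over a chosen window, converted to W m-2 K-1 with a radiative kernel. The sketch below illustrates that workflow on made-up decadal data; the trends and the kernel value are assumptions, not CMIP5 output.

        import numpy as np

        rng = np.random.default_rng(2)
        years = np.arange(2040, 2050)
        # Synthetic decadal series; the temperature change exceeds the 0.5 K threshold noted above.
        temperature = 1.0 + 0.08 * (years - years[0]) + 0.02 * rng.standard_normal(years.size)  # K anomaly
        albedo = 0.30 - 0.004 * temperature + 0.001 * rng.standard_normal(years.size)           # surface albedo

        dalbedo_dT = np.polyfit(temperature, albedo, 1)[0]   # albedo sensitivity (per K) from linear regression
        radiative_kernel = -100.0                            # W m-2 per unit albedo change (assumed kernel value)
        saf = dalbedo_dT * radiative_kernel                  # surface albedo feedback, W m-2 K-1
        print(f"decadal-scale SAF ~ {saf:.2f} W m-2 K-1")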

  19. Poster — Thur Eve — 28: Enabling trajectory-based radiotherapy on a TrueBeam accelerator with the Eclipse treatment planning system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mullins, J; Asiev, K; DeBlois, F

    2014-08-15

The TrueBeam linear accelerator platform has a developer's mode which permits the user dynamic control over many of the machine's mechanical and radiation systems. Using this research tool, synchronous couch and gantry motion can be programmed to simulate isocentric treatment with a shortened SAD, with benefits such as smaller projected MLC leaf widths and an increased dose rate. In this work, water tank measurements were used to commission a virtual linear accelerator with an 85 cm SAD in Eclipse, from which several arc-based radiotherapy treatments were generated, including an inverse optimized VMAT delivery. For each plan, the pertinent treatment delivery information was extracted from control points specified in the Eclipse-exported DICOM files using the pydicom package in Python, allowing construction of an XML control file. The dimensions of the jaws and MLC positions, defined for an 85 cm SAD in Eclipse, were scaled for delivery on a conventional-SAD linear accelerator, and translational couch motion was added as a function of gantry angle to simulate delivery at 85 cm SAD. Ionization chamber and Gafchromic film measurements were used to compare the radiation delivery to dose calculations in Eclipse. With the exception of the VMAT delivery, ionization chamber measurements agreed within 3.3% of the Eclipse calculations. For the VMAT delivery, the ionization chamber was located in an inhomogeneous region, but gamma evaluation of the Gafchromic film plane resulted in a 94.5% passing rate using criteria of 3 mm/3%. The results indicate that the Eclipse calculation infrastructure can be used to support trajectory-based radiotherapy on the TrueBeam platform.
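
    A hedged sketch of the extraction-and-scaling step described above, assuming a standard Eclipse-exported DICOM RT Plan: pydicom is used to walk the control points and rescale jaw/MLC positions for a conventional 100 cm SAD machine. The file name, the 100/85 scaling direction, and the omission of the couch-translation and XML-writing steps are assumptions for illustration, not the authors' implementation.

        import pydicom

        plan = pydicom.dcmread("rtplan_85cm_sad.dcm")   # hypothetical Eclipse-exported RT Plan
        scale = 100.0 / 85.0                            # project 85 cm SAD apertures onto a 100 cm SAD machine

        for beam in plan.BeamSequence:
            for cp in beam.ControlPointSequence:
                gantry = getattr(cp, "GantryAngle", None)   # later used to derive couch translations
                for device in getattr(cp, "BeamLimitingDevicePositionSequence", []):
                    # Jaw/MLC positions are defined at the planning isocenter; rescale them so the
                    # aperture projected at 85 cm from the source matches the planned field.
                    device.LeafJawPositions = [float(p) * scale for p in device.LeafJawPositions]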

  20. On the estimation and detection of the Rees-Sciama effect

    NASA Astrophysics Data System (ADS)

    Fullana, M. J.; Arnau, J. V.; Thacker, R. J.; Couchman, H. M. P.; Sáez, D.

    2017-02-01

Maps of the Rees-Sciama (RS) effect are simulated using the parallel N-body code HYDRA and a run-time ray-tracing procedure. A method designed for the analysis of small, square cosmic microwave background (CMB) maps is applied to our RS maps. Each of these techniques has been tested and successfully applied in previous papers. Within a range of angular scales, our estimate of the RS angular power spectrum due to variations in the peculiar gravitational potential on scales smaller than 42/h megaparsecs is shown to be robust. An exhaustive study of the redshifts and spatial scales relevant for the production of RS anisotropy is developed for the first time. Results from this study demonstrate that (i) to estimate the full integrated RS effect, the initial redshift for the calculations (integration) must be greater than 25, (ii) the effect produced by strongly non-linear structures is very small and peaks at angular scales close to 4.3 arcmin, and (iii) the RS anisotropy cannot be detected either directly, in temperature CMB maps, or by looking for cross-correlations between these maps and tracers of the dark matter distribution. To estimate the RS effect produced by scales larger than 42/h megaparsecs, where the density contrast is not strongly non-linear, high accuracy N-body simulations appear unnecessary. Simulations based on approximations such as the Zel'dovich approximation and adhesion prescriptions, for example, may be adequate. These results can be used to guide the design of future RS simulations.

  1. Assessing variance components in multilevel linear models using approximate Bayes factors: A case study of ethnic disparities in birthweight

    PubMed Central

    Saville, Benjamin R.; Herring, Amy H.; Kaufman, Jay S.

    2013-01-01

    Racial/ethnic disparities in birthweight are a large source of differential morbidity and mortality worldwide and have remained largely unexplained in epidemiologic models. We assess the impact of maternal ancestry and census tract residence on infant birth weights in New York City and the modifying effects of race and nativity by incorporating random effects in a multilevel linear model. Evaluating the significance of these predictors involves the test of whether the variances of the random effects are equal to zero. This is problematic because the null hypothesis lies on the boundary of the parameter space. We generalize an approach for assessing random effects in the two-level linear model to a broader class of multilevel linear models by scaling the random effects to the residual variance and introducing parameters that control the relative contribution of the random effects. After integrating over the random effects and variance components, the resulting integrals needed to calculate the Bayes factor can be efficiently approximated with Laplace’s method. PMID:24082430

  2. Non-linearities in Holocene floodplain sediment storage

    NASA Astrophysics Data System (ADS)

    Notebaert, Bastiaan; Nils, Broothaerts; Jean-François, Berger; Gert, Verstraeten

    2013-04-01

Floodplain sediment storage is an important part of the sediment cascade model, buffering sediment delivery between hillslopes and oceans, and it is hitherto not fully quantified in contrast to other global sediment budget components. Quantification and dating of floodplain sediment storage are data- and financially demanding, limiting contemporary estimates for larger spatial units to simple linear extrapolations from a number of smaller catchments. In this paper we present non-linearities in both space and time for floodplain sediment budgets in three different catchments. Holocene floodplain sediments of the Dijle catchment in the Belgian loess region show a clear distinction between morphological stages: early Holocene peat accumulation, followed by mineral floodplain aggradation from the start of the agricultural period onwards. Contrary to previous assumptions, detailed dating of this morphological change at different cross sections shows an important non-linearity in geomorphologic changes of the floodplain, both between and within cross sections. A second example comes from the pre-Alpine French Valdaine region, where non-linearities and complex system behaviour exist between (temporal) patterns of soil erosion and floodplain sediment deposition. In this region Holocene floodplain deposition is characterized by different cut-and-fill phases. The quantification of these different phases shows a complicated picture of increasing and decreasing floodplain sediment storage, which challenges the notion of steadily increasing sediment accumulation over time. Although fill stages may correspond with large quantities of deposited sediment, and traditionally calculated sedimentation rates for such stages are high, they do not necessarily correspond with a long-term net increase in floodplain deposition. A third example is based on the floodplain sediment storage in the Amblève catchment, located in the Belgian Ardennes uplands. Detailed floodplain sediment quantification for this catchment shows that a strong multifractality is present in the scaling relationship between sediment storage and catchment area, depending on geomorphic landscape properties. Extrapolation of data from one spatial scale to another inevitably leads to large errors: when only the data of the upper floodplains are considered, a regression analysis results in an overestimation of total floodplain deposition for the entire catchment of circa 115%. This example demonstrates multifractality and related non-linearity in scaling relationships, which influences extrapolations beyond the initial range of measurements. These different examples indicate how traditional extrapolation techniques and assumptions in sediment budget studies can be challenged by field data, further complicating our understanding of these systems. Although simplifications are often necessary when working at large spatial scales, such non-linearities pose challenges for a better understanding of system behaviour.

  3. Mechanistic insights into heterogeneous methane activation

    DOE PAGES

    Latimer, Allegra A.; Aljama, Hassan; Kakekhani, Arvin; ...

    2017-01-11

While natural gas is an abundant chemical fuel, its low volumetric energy density has prompted a search for catalysts able to transform methane into more useful chemicals. This search has often been aided through the use of transition state (TS) scaling relationships, which estimate methane activation TS energies as a linear function of a more easily calculated descriptor, such as the final state energy, thus avoiding tedious TS energy calculations. It has been shown that methane can be activated via a radical or surface-stabilized pathway, both of which possess a unique TS scaling relationship. Herein, we present a simple model to aid in the prediction of methane activation barriers on heterogeneous catalysts. Analogous to the universal radical TS scaling relationship introduced in a previous publication, we show that a universal TS scaling relationship that transcends catalyst classes also seems to exist for surface-stabilized methane activation if the relevant final state energy is used. We demonstrate that this scaling relationship holds for several reducible and irreducible oxides, promoted metals, and sulfides. By combining the universal scaling relationships for both radical and surface-stabilized methane activation pathways, we show that catalyst reactivity must be considered in addition to catalyst geometry to obtain an accurate estimation for the TS energy. This model can yield fast and accurate predictions of methane activation barriers on a wide range of catalysts, thus accelerating the discovery of more active catalysts for methane conversion.
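
    A TS scaling relationship of the kind described here is just a linear map from the descriptor (final-state energy) to the TS energy; the sketch below fits and applies one on invented data points, not the published relationship or its parameters.

        import numpy as np

        E_final = np.array([-1.2, -0.8, -0.3, 0.1, 0.6])   # descriptor (final-state) energies, eV (illustrative)
        E_ts    = np.array([ 0.4,  0.7,  1.1, 1.4, 1.8])   # explicitly calculated TS energies, eV (illustrative)

        slope, intercept = np.polyfit(E_final, E_ts, 1)     # linear TS scaling relationship

        def predict_barrier(e_final):
            """Cheap estimate of a methane activation TS energy from the descriptor."""
            return slope * e_final + intercept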

  4. Mechanistic insights into heterogeneous methane activation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Latimer, Allegra A.; Aljama, Hassan; Kakekhani, Arvin

While natural gas is an abundant chemical fuel, its low volumetric energy density has prompted a search for catalysts able to transform methane into more useful chemicals. This search has often been aided through the use of transition state (TS) scaling relationships, which estimate methane activation TS energies as a linear function of a more easily calculated descriptor, such as the final state energy, thus avoiding tedious TS energy calculations. It has been shown that methane can be activated via a radical or surface-stabilized pathway, both of which possess a unique TS scaling relationship. Herein, we present a simple model to aid in the prediction of methane activation barriers on heterogeneous catalysts. Analogous to the universal radical TS scaling relationship introduced in a previous publication, we show that a universal TS scaling relationship that transcends catalyst classes also seems to exist for surface-stabilized methane activation if the relevant final state energy is used. We demonstrate that this scaling relationship holds for several reducible and irreducible oxides, promoted metals, and sulfides. By combining the universal scaling relationships for both radical and surface-stabilized methane activation pathways, we show that catalyst reactivity must be considered in addition to catalyst geometry to obtain an accurate estimation for the TS energy. This model can yield fast and accurate predictions of methane activation barriers on a wide range of catalysts, thus accelerating the discovery of more active catalysts for methane conversion.

  5. Predicting survival time in noncurative patients with advanced cancer: a prospective study in China.

    PubMed

    Cui, Jing; Zhou, Lingjun; Wee, B; Shen, Fengping; Ma, Xiuqiang; Zhao, Jijun

    2014-05-01

    Accurate prediction of prognosis for cancer patients is important for good clinical decision making in therapeutic and care strategies. The application of prognostic tools and indicators could improve prediction accuracy. This study aimed to develop a new prognostic scale to predict survival time of advanced cancer patients in China. We prospectively collected items that we anticipated might influence survival time of advanced cancer patients. Participants were recruited from 12 hospitals in Shanghai, China. We collected data including demographic information, clinical symptoms and signs, and biochemical test results. Log-rank tests, Cox regression, and linear regression were performed to develop a prognostic scale. Three hundred twenty patients with advanced cancer were recruited. Fourteen prognostic factors were included in the prognostic scale: Karnofsky Performance Scale (KPS) score, pain, ascites, hydrothorax, edema, delirium, cachexia, white blood cell (WBC) count, hemoglobin, sodium, total bilirubin, direct bilirubin, aspartate aminotransferase (AST), and alkaline phosphatase (ALP) values. The score was calculated by summing the partial scores, ranging from 0 to 30. When using the cutoff points of 7-day, 30-day, 90-day, and 180-day survival time, the scores were calculated as 12, 10, 8, and 6, respectively. We propose a new prognostic scale including KPS, pain, ascites, hydrothorax, edema, delirium, cachexia, WBC count, hemoglobin, sodium, total bilirubin, direct bilirubin, AST, and ALP values, which may help guide physicians in predicting the likely survival time of cancer patients more accurately. More studies are needed to validate this scale in the future.
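
    The scale itself is additive, with survival-time cutoffs at total scores of 12, 10, 8, and 6; a minimal sketch of how such a score would be applied is given below, assuming higher scores indicate shorter survival. The partial scores in the example call are invented, and the band labels only mirror the cutoff structure reported.

        def prognostic_score(partial_scores):
            """Sum the item-level partial scores; the total ranges from 0 to 30."""
            return sum(partial_scores)

        def predicted_survival_band(score):
            """Map a total score to a survival band (higher score assumed to mean shorter survival)."""
            cutoffs = [(12, "<= 7 days"), (10, "<= 30 days"), (8, "<= 90 days"), (6, "<= 180 days")]
            for cutoff, band in cutoffs:
                if score >= cutoff:
                    return band
            return "> 180 days"

        print(predicted_survival_band(prognostic_score([3, 2, 2, 1, 1, 0, 1])))   # score 10 -> "<= 30 days"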

  6. EvArnoldi: A New Algorithm for Large-Scale Eigenvalue Problems.

    PubMed

    Tal-Ezer, Hillel

    2016-05-19

Eigenvalues and eigenvectors are an essential theme in numerical linear algebra. Their study is mainly motivated by their high importance in a wide range of applications. Knowledge of eigenvalues is essential in quantum molecular science. Solutions of the Schrödinger equation for the electrons composing the molecule are the basis of electronic structure theory. Electronic eigenvalues compose the potential energy surfaces for nuclear motion. The eigenvectors allow calculation of dipole transition matrix elements, the core of spectroscopy. The vibrational dynamics of the molecule also requires knowledge of the eigenvalues of the vibrational Hamiltonian. Typically in these problems, the dimension of the Hilbert space is huge, yet practically only a small subset of eigenvalues is required. In this paper, we present a highly efficient algorithm, named EvArnoldi, for solving the large-scale eigenvalue problem. The algorithm, in its basic formulation, is mathematically equivalent to ARPACK (Sorensen, D. C. Implicitly Restarted Arnoldi/Lanczos Methods for Large Scale Eigenvalue Calculations; Springer, 1997; Lehoucq, R. B.; Sorensen, D. C. SIAM Journal on Matrix Analysis and Applications 1996, 17, 789; Calvetti, D.; Reichel, L.; Sorensen, D. C. Electronic Transactions on Numerical Analysis 1994, 2, 21) (or eigs of MATLAB) but significantly simpler.
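
    For context, the standard workflow that EvArnoldi targets, extracting a handful of eigenpairs of a very large sparse Hermitian matrix, looks like the SciPy/ARPACK sketch below. The tridiagonal "Hamiltonian" is a stand-in, and this uses ARPACK, not EvArnoldi itself.

        import numpy as np
        import scipy.sparse as sp
        from scipy.sparse.linalg import eigsh

        n = 10_000
        # Stand-in sparse Hamiltonian: a tridiagonal matrix with a ramped diagonal.
        H = sp.diags([np.full(n - 1, -1.0), np.linspace(0.0, 10.0, n), np.full(n - 1, -1.0)],
                     offsets=[-1, 0, 1], format="csr")

        eigenvalues, eigenvectors = eigsh(H, k=5, which="SA")   # five smallest algebraic eigenvalues
        print(eigenvalues)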

  7. An area-preserving mapping in natural canonical coordinates for magnetic field line trajectories in the DIII-D tokamak

    NASA Astrophysics Data System (ADS)

    Punjabi, Alkesh

    2009-11-01

The new approach of integrating magnetic field line trajectories in natural canonical coordinates (Punjabi and Ali 2008 Phys. Plasmas 15 122502) in divertor tokamaks is used for the DIII-D tokamak (Luxon and Davis 1985 Fusion Technol. 8 441). The equilibrium EFIT data (Evans et al 2004 Phys. Rev. Lett. 92 235003, Lao et al 2005 Fusion Sci. Technol. 48 968) for the DIII-D tokamak shot 115467 at 3000 ms is used to construct the equilibrium generating function (EGF) for the DIII-D in natural canonical coordinates. The EGF gives quite an accurate representation of the closed and open equilibrium magnetic surfaces near the separatrix, the separatrix itself, the position of the X-point and the poloidal magnetic flux inside the ideal separatrix in the DIII-D. The equilibrium safety factor q from the EGF is somewhat smaller than the DIII-D EFIT q profile. The equilibrium safety factor is calculated from the EGF as described in the previous paper (Punjabi and Ali 2008 Phys. Plasmas 15 122502). Here the safety factor for the open surfaces in the DIII-D is also calculated. A canonical transformation is used to construct a symplectic mapping for magnetic field line trajectories in the DIII-D in natural canonical coordinates. The map is explored in more detail in this work, and is used to calculate field line trajectories in the DIII-D tokamak. The continuous analogue of the map does not distort the DIII-D magnetic surfaces in different toroidal planes between successive iterations of the map. The map parameter k can represent effects of magnetic asymmetries in the DIII-D. These effects in the DIII-D are illustrated. The DIII-D map is then used to calculate stochastic broadening of the ideal separatrix from topological noise and field errors, and from the low mn, high mn and peeling-ballooning magnetic perturbations in the DIII-D. The width of the stochastic layer scales as the 1/2 power of the amplitude, with a maximum deviation of 6% from the Boozer-Rechester scaling (Boozer and Rechester 1978 Phys. Fluids 21 682). The loss of poloidal flux scales linearly with the amplitude of the perturbation, with a maximum deviation of 10% from linearity. Perturbations with higher mode numbers result in higher stochasticity. The higher the complexity and coupling in the equilibrium magnetic geometry, the closer is the scaling to the Boozer-Rechester scaling of the width. The comparison of the EGF for the simple map (Punjabi et al 1992 Phys. Rev. Lett. 69 3322) with that of the DIII-D shows that the more complex the magnetic geometry and the more coupling of modes in the equilibrium, the more robust or resilient is the system against chaos-inducing, symmetry-breaking perturbations.

  8. SU-F-T-111: Investigation of the Attila Deterministic Solver as a Supplement to Monte Carlo for Calculating Out-Of-Field Radiotherapy Dose

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mille, M; Lee, C; Failla, G

Purpose: To use the Attila deterministic solver as a supplement to Monte Carlo for calculating out-of-field organ dose in support of epidemiological studies looking at the risks of second cancers. Supplemental dosimetry tools are needed to speed up dose calculations for studies involving large-scale patient cohorts. Methods: Attila is a multi-group discrete ordinates code which can solve the 3D photon-electron coupled linear Boltzmann radiation transport equation on a finite-element mesh. Dose is computed by multiplying the calculated particle flux in each mesh element by a medium-specific energy deposition cross-section. The out-of-field dosimetry capability of Attila is investigated by comparing average organ dose to that calculated by Monte Carlo simulation. The test scenario consists of a 6 MV external beam treatment of a female patient with a tumor in the left breast. The patient is simulated by a whole-body adult reference female computational phantom. Monte Carlo simulations were performed using MCNP6 and XVMC. Attila can export a tetrahedral mesh for MCNP6, allowing for a direct comparison between the two codes. The Attila and Monte Carlo methods were also compared in terms of calculation speed and complexity of simulation setup. A key prerequisite for this work was the modeling of a Varian Clinac 2100 linear accelerator. Results: The solid mesh of the torso part of the adult female phantom for the Attila calculation was prepared using the CAD software SpaceClaim. Preliminary calculations suggest that Attila is a user-friendly software package which shows great promise for our intended application. Computational performance is related to the number of tetrahedral elements included in the Attila calculation. Conclusion: Attila is being explored as a supplement to the conventional Monte Carlo radiation transport approach for performing retrospective patient dosimetry. The goal is for the dosimetry to be sufficiently accurate for use in retrospective epidemiological investigations.

  9. Phase transitions in coupled map lattices and in associated probabilistic cellular automata.

    PubMed

    Just, Wolfram

    2006-10-01

    Analytical tools are applied to investigate piecewise linear coupled map lattices in terms of probabilistic cellular automata. The so-called disorder condition of probabilistic cellular automata is closely related with attracting sets in coupled map lattices. The importance of this condition for the suppression of phase transitions is illustrated by spatially one-dimensional systems. Invariant densities and temporal correlations are calculated explicitly. Ising type phase transitions are found for one-dimensional coupled map lattices acting on repelling sets and for a spatially two-dimensional Miller-Huse-like system with stable long time dynamics. Critical exponents are calculated within a finite size scaling approach. The relevance of detailed balance of the resulting probabilistic cellular automaton for the critical behavior is pointed out.

  10. Linear Inverse Modeling and Scaling Analysis of Drainage Inventories.

    NASA Astrophysics Data System (ADS)

    O'Malley, C.; White, N. J.

    2016-12-01

    It is widely accepted that the stream power law can be used to describe the evolution of longitudinal river profiles. Over the last 5 years, this phenomenological law has been used to develop non-linear and linear inversion algorithms that enable uplift rate histories to be calculated by minimizing the misfit between observed and calculated river profiles. Substantial, continent-wide inventories of river profiles have been successfully inverted to yield uplift as a function of time and space. Erosional parameters can be determined by independent geological calibration. Our results help to illuminate empirical scaling laws that are well known to the geomorphological community. Here we present an analysis of river profiles from Asia. The timing and magnitude of uplift events across Asia, including the Himalayas and Tibet, have long been debated. River profile analyses have played an important role in clarifying the timing of uplift events. However, no attempt has yet been made to invert a comprehensive database of river profiles from the entire region. Asian rivers contain information which allows us to investigate putative uplift events quantitatively and to determine a cumulative uplift history for Asia. Long wavelength shapes of river profiles are governed by regional uplift and moderated by erosional processes. These processes are parameterised using the stream power law in the form of an advective-diffusive equation. Our non-negative, least-squares inversion scheme was applied to an inventory of 3722 Asian river profiles. We calibrate the key erosional parameters by predicting solid sedimentary flux for a set of Asian rivers and by comparing the flux predictions against published depositional histories for major river deltas. The resultant cumulative uplift history is compared with a range of published geological constraints for uplift and palaeoelevation. We have found good agreement for many regions across Asia. Surprisingly, single values of erosional constants can be shown to produce reliable uplift histories. However, these erosional constants appear to vary from continent to continent. Future work will investigate the global relationship between our inversion results, scaling laws, climate models, lithological variation and sedimentary flux.

  11. Scaling up stomatal conductance from leaf to canopy using a dual-leaf model for estimating crop evapotranspiration.

    PubMed

    Ding, Risheng; Kang, Shaozhong; Du, Taisheng; Hao, Xinmei; Zhang, Yanqun

    2014-01-01

    The dual-source Shuttleworth-Wallace model has been widely used to estimate and partition crop evapotranspiration (λET). Canopy stomatal conductance (Gsc), an essential parameter of the model, is often calculated by scaling up leaf stomatal conductance, considering the canopy as one single leaf in a so-called "big-leaf" model. However, Gsc can be overestimated or underestimated depending on leaf area index level in the big-leaf model, due to a non-linear stomatal response to light. A dual-leaf model, scaling up Gsc from leaf to canopy, was developed in this study. The non-linear stomata-light relationship was incorporated by dividing the canopy into sunlit and shaded fractions and calculating each fraction separately according to absorbed irradiances. The model includes: (1) the absorbed irradiance, determined by separately integrating the sunlit and shaded leaves with consideration of both beam and diffuse radiation; (2) leaf area for the sunlit and shaded fractions; and (3) a leaf conductance model that accounts for the response of stomata to PAR, vapor pressure deficit and available soil water. In contrast to the significant errors of Gsc in the big-leaf model, the predicted Gsc using the dual-leaf model had a high degree of data-model agreement; the slope of the linear regression between daytime predictions and measurements was 1.01 (R2 = 0.98), with RMSE of 0.6120 mm s-1 for four clear-sky days in different growth stages. The estimates of half-hourly λET using the dual-source dual-leaf model (DSDL) agreed well with measurements and the error was within 5% during two growing seasons of maize with differing hydrometeorological and management strategies. Moreover, the estimates of soil evaporation using the DSDL model closely matched actual measurements. Our results indicate that the DSDL model can produce more accurate estimation of Gsc and λET, compared to the big-leaf model, and thus is an effective alternative approach for estimating and partitioning λET.
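
    The core of the dual-leaf upscaling is the sunlit/shaded partition of leaf area combined with a non-linear leaf light response; the sketch below illustrates only that structure. The Beer's-law partition, the saturating conductance form, and every parameter value are assumptions for illustration, not the calibrated model of the paper.

        import numpy as np

        def leaf_gs(par, gs_max=0.012, k_light=200.0):
            """Leaf stomatal conductance (m s-1) with a saturating (non-linear) light response."""
            return gs_max * par / (par + k_light)

        def canopy_gsc(lai, par_beam, par_diffuse, k_beam=0.5):
            """Scale leaf conductance to the canopy by treating sunlit and shaded leaves separately."""
            sunlit_lai = (1.0 - np.exp(-k_beam * lai)) / k_beam   # Beer's-law sunlit leaf area
            shaded_lai = lai - sunlit_lai
            gs_sun = leaf_gs(par_beam + par_diffuse)              # sunlit leaves absorb beam + diffuse light
            gs_shade = leaf_gs(par_diffuse)                       # shaded leaves absorb diffuse light only
            return gs_sun * sunlit_lai + gs_shade * shaded_lai

        print(canopy_gsc(lai=4.0, par_beam=1200.0, par_diffuse=300.0))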

  12. Scaling Up Stomatal Conductance from Leaf to Canopy Using a Dual-Leaf Model for Estimating Crop Evapotranspiration

    PubMed Central

    Ding, Risheng; Kang, Shaozhong; Du, Taisheng; Hao, Xinmei; Zhang, Yanqun

    2014-01-01

    The dual-source Shuttleworth-Wallace model has been widely used to estimate and partition crop evapotranspiration (λET). Canopy stomatal conductance (Gsc), an essential parameter of the model, is often calculated by scaling up leaf stomatal conductance, considering the canopy as one single leaf in a so-called “big-leaf” model. However, Gsc can be overestimated or underestimated depending on leaf area index level in the big-leaf model, due to a non-linear stomatal response to light. A dual-leaf model, scaling up Gsc from leaf to canopy, was developed in this study. The non-linear stomata-light relationship was incorporated by dividing the canopy into sunlit and shaded fractions and calculating each fraction separately according to absorbed irradiances. The model includes: (1) the absorbed irradiance, determined by separately integrating the sunlit and shaded leaves with consideration of both beam and diffuse radiation; (2) leaf area for the sunlit and shaded fractions; and (3) a leaf conductance model that accounts for the response of stomata to PAR, vapor pressure deficit and available soil water. In contrast to the significant errors of Gsc in the big-leaf model, the predicted Gsc using the dual-leaf model had a high degree of data-model agreement; the slope of the linear regression between daytime predictions and measurements was 1.01 (R2 = 0.98), with RMSE of 0.6120 mm s−1 for four clear-sky days in different growth stages. The estimates of half-hourly λET using the dual-source dual-leaf model (DSDL) agreed well with measurements and the error was within 5% during two growing seasons of maize with differing hydrometeorological and management strategies. Moreover, the estimates of soil evaporation using the DSDL model closely matched actual measurements. Our results indicate that the DSDL model can produce more accurate estimation of Gsc and λET, compared to the big-leaf model, and thus is an effective alternative approach for estimating and partitioning λET. PMID:24752329

  13. Mapping the Dark Matter with 6dFGS

    NASA Astrophysics Data System (ADS)

    Mould, Jeremy R.; Magoulas, C.; Springob, C.; Colless, M.; Jones, H.; Lucey, J.; Erdogdu, P.; Campbell, L.

    2012-05-01

    Fundamental plane distances from the 6dF Galaxy Redshift Survey are fitted to a model of the density field within 200/h Mpc. Likelihood is maximized for a single value of the local galaxy density, as expected in linear theory for the relation between overdensity and peculiar velocity. The dipole of the inferred southern hemisphere early type galaxy peculiar velocities is calculated within 150/h Mpc, before and after correction for the individual galaxy velocities predicted by the model. The former agrees with that obtained by other peculiar velocity studies (e.g. SFI++). The latter is only of order 150 km/sec and consistent with the expectations of the standard cosmological model and recent forecasts of the cosmic mach number, which show linearly declining bulk flow with increasing scale.

  14. Efficient generation of sum-of-products representations of high-dimensional potential energy surfaces based on multimode expansions

    NASA Astrophysics Data System (ADS)

    Ziegler, Benjamin; Rauhut, Guntram

    2016-03-01

    The transformation of multi-dimensional potential energy surfaces (PESs) from a grid-based multimode representation to an analytical one is a standard procedure in quantum chemical programs. Within the framework of linear least squares fitting, a simple and highly efficient algorithm is presented, which relies on a direct product representation of the PES and a repeated use of Kronecker products. It shows the same scalings in computational cost and memory requirements as the potfit approach. In comparison to customary linear least squares fitting algorithms, this corresponds to a speed-up and memory saving by several orders of magnitude. Different fitting bases are tested, namely, polynomials, B-splines, and distributed Gaussians. Benchmark calculations are provided for the PESs of a set of small molecules.
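
    A toy version of the direct-product least-squares fit: on a small 2D grid the Kronecker design matrix can be formed explicitly, while a comment records the vec identity that the efficient algorithm exploits to avoid ever building it. The basis sizes, grid, and stand-in "PES" are arbitrary choices for illustration.

        # The efficient scheme relies on (B2 kron B1) vec(C) = vec(B1 C B2^T), so the full
        # Kronecker product never has to be assembled; it is built here only because the
        # example is tiny.
        import numpy as np

        x1 = np.linspace(-1, 1, 20)
        x2 = np.linspace(-1, 1, 25)
        V = np.exp(-(x1[:, None] ** 2 + x2[None, :] ** 2))   # grid-based "PES" stand-in

        B1 = np.vander(x1, 6, increasing=True)               # polynomial basis in mode 1
        B2 = np.vander(x2, 6, increasing=True)               # polynomial basis in mode 2

        design = np.kron(B2, B1)                              # (20*25) x (6*6) design matrix
        coeffs, *_ = np.linalg.lstsq(design, V.flatten(order="F"), rcond=None)
        C = coeffs.reshape((6, 6), order="F")

        V_fit = B1 @ C @ B2.T                                 # analytic sum-of-products form on the grid
        print(np.max(np.abs(V_fit - V)))                      # fitting error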

  15. Efficient generation of sum-of-products representations of high-dimensional potential energy surfaces based on multimode expansions.

    PubMed

    Ziegler, Benjamin; Rauhut, Guntram

    2016-03-21

    The transformation of multi-dimensional potential energy surfaces (PESs) from a grid-based multimode representation to an analytical one is a standard procedure in quantum chemical programs. Within the framework of linear least squares fitting, a simple and highly efficient algorithm is presented, which relies on a direct product representation of the PES and a repeated use of Kronecker products. It shows the same scalings in computational cost and memory requirements as the potfit approach. In comparison to customary linear least squares fitting algorithms, this corresponds to a speed-up and memory saving by several orders of magnitude. Different fitting bases are tested, namely, polynomials, B-splines, and distributed Gaussians. Benchmark calculations are provided for the PESs of a set of small molecules.

  16. Multiresolution quantum chemistry in multiwavelet bases: excited states from time-dependent Hartree–Fock and density functional theory via linear response

    DOE PAGES

    Yanai, Takeshi; Fann, George I.; Beylkin, Gregory; ...

    2015-02-25

We present a fully numerical method for time-dependent Hartree–Fock and density functional theory (TD-HF/DFT) with the Tamm–Dancoff (TD) approximation, based on a multiresolution analysis (MRA) approach. From a reformulation with effective use of the density matrix operator, we obtain a general form of the HF/DFT linear response equation in the first quantization formalism. It can be readily rewritten as an integral equation with the bound-state Helmholtz (BSH) kernel for the Green's function. The MRA implementation of the resultant equation permits excited state calculations without virtual orbitals. Moreover, the integral equation is efficiently and adaptively solved using a numerical multiresolution solver with multiwavelet bases. Our implementation of the TD-HF/DFT methods is applied to calculating the excitation energies of H2, Be, N2, H2O, and C2H4 molecules. The numerical errors of the calculated excitation energies converge in proportion to the residuals of the equation in the molecular orbitals and response functions. The energies of the excited states at a variety of length scales, ranging from short-range valence excitations to long-range Rydberg-type ones, are consistently accurate. It is shown that the multiresolution calculations yield the correct exponential asymptotic tails for the response functions, whereas those computed with Gaussian basis functions are too diffuse or decay too rapidly. Finally, we introduce a simple asymptotic correction to the local spin-density approximation (LSDA) so that in the TDDFT calculations the excited states are correctly bound.

  17. Calculating hyperfine couplings in large ionic crystals containing hundreds of QM atoms: subsystem DFT is the key.

    PubMed

    Kevorkyants, Ruslan; Wang, Xiqiao; Close, David M; Pavanello, Michele

    2013-11-14

    We present an application of the linear scaling frozen density embedding (FDE) formulation of subsystem DFT to the calculation of isotropic hyperfine coupling constants (hfcc's) of atoms belonging to a guanine radical cation embedded in a guanine hydrochloride monohydrate crystal. The model systems range from an isolated guanine to a 15,000 atom QM/MM cluster where the QM region is comprised of 36 protonated guanine cations, 36 chlorine anions, and 42 water molecules. Our calculations show that the embedding effects of the surrounding crystal cannot be reproduced by small model systems nor by a pure QM/MM procedure. Instead, a large QM region is needed to fully capture the complicated nature of the embedding effects in this system. The unprecedented system size for a relativistic all-electron isotropic hfcc calculation can be approached in this work because the local nature of the electronic structure of the organic crystals considered is fully captured by the FDE approach.

  18. Massively parallel sparse matrix function calculations with NTPoly

    NASA Astrophysics Data System (ADS)

    Dawson, William; Nakajima, Takahito

    2018-04-01

We present NTPoly, a massively parallel library for computing the functions of sparse, symmetric matrices. The theory of matrix functions is a well-developed framework with a wide range of applications including differential equations, graph theory, and electronic structure calculations. One particularly important application area is diagonalization-free methods in quantum chemistry. When the input and output of the matrix function are sparse, methods based on polynomial expansions can be used to compute matrix functions in linear time. We present a library based on these methods that can compute a variety of matrix functions. Distributed memory parallelization is based on a communication-avoiding sparse matrix multiplication algorithm. OpenMP task parallelization is utilized to implement hybrid parallelization. We describe NTPoly's interface and show how it can be integrated with programs written in many different programming languages. We demonstrate the merits of NTPoly by performing large scale calculations on the K computer.
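
    As a small illustration of the diagonalization-free, polynomial-expansion idea that NTPoly implements at scale, the sketch below builds a density matrix from a sparse stand-in Hamiltonian by McWeeny purification with thresholding. This is serial SciPy, not NTPoly's distributed algorithm; the spectral bounds, chemical potential, and thresholds are chosen by hand.

        import numpy as np
        import scipy.sparse as sp

        def purify_density_matrix(H, mu, e_min, e_max, steps=50, threshold=1e-8):
            """Grand-canonical McWeeny purification: a polynomial iteration that drives
            occupied eigenvalues to 1 and virtual ones to 0, with sparsity thresholding."""
            n = H.shape[0]
            lam = 0.5 / max(e_max - mu, mu - e_min)   # keeps the initial spectrum inside [0, 1]
            P = 0.5 * sp.identity(n, format="csr") + lam * (mu * sp.identity(n, format="csr") - H)
            for _ in range(steps):
                P2 = P @ P
                P = 3.0 * P2 - 2.0 * (P2 @ P)         # McWeeny polynomial 3x^2 - 2x^3
                P.data[np.abs(P.data) < threshold] = 0.0   # drop tiny elements to stay sparse
                P.eliminate_zeros()
            return P

        # Stand-in Hamiltonian: a 1000-site tight-binding chain at half filling (mu = 0).
        H = sp.diags([np.full(999, -1.0), np.zeros(1000), np.full(999, -1.0)], [-1, 0, 1], format="csr")
        P = purify_density_matrix(H, mu=0.0, e_min=-2.0, e_max=2.0)
        print(P.diagonal().sum())   # approximately the number of occupied states (~500)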

  19. Numerical study on wave-induced beam ion prompt losses in DIII-D tokamak

    DOE PAGES

    Feng, Zhichen; Zhu, Jia; Fu, Guo -Yong; ...

    2017-08-30

A numerical study is performed on the coherent beam ion prompt losses driven by Alfven eigenmodes (AEs) in DIII-D plasmas using realistic parameters and beam ion deposition profiles. The synthetic signal of a fast-ion loss detector (FILD) is calculated for a single AE mode. The first harmonic of the calculated FILD signal is linearly proportional to the AE amplitude with the same AE frequency, in agreement with the experimental measurement. The calculated second harmonic is proportional to the square of the first harmonic for typical AE amplitudes. The coefficient of the quadratic scaling is found to be sensitive to the AE mode width. The second part of this work considers the AE drive due to coherent prompt loss. It is shown that the loss-induced mode drive is much smaller than the previous estimate and can be ignored for mode stability.

  20. Stability properties and fast ion confinement of hybrid tokamak plasma configurations

    NASA Astrophysics Data System (ADS)

    Graves, J. P.; Brunetti, D.; Pfefferle, D.; Faustin, J. M. P.; Cooper, W. A.; Kleiner, A.; Lanthaler, S.; Patten, H. W.; Raghunathan, M.

    2015-11-01

    In hybrid scenarios with flat q just above unity, extremely fast growing tearing modes are born from toroidal sidebands of the near resonant ideal internal kink mode. New scalings of the growth rate with the magnetic Reynolds number arise from two fluid effects and sheared toroidal flow. Non-linear saturated 1/1 dominant modes obtained from initial value stability calculation agree with the amplitude of the 1/1 component of a 3D VMEC equilibrium calculation. Viable and realistic equilibrium representation of such internal kink modes allow fast ion studies to be accurately established. Calculations of MAST neutral beam ion distributions using the VENUS-LEVIS code show very good agreement of observed impaired core fast ion confinement when long lived modes occur. The 3D ICRH code SCENIC also enables the establishment of minority RF distributions in hybrid plasmas susceptible to saturated near resonant internal kink modes.

  1. Is there a stable B2Π state for the CNO molecule?

    NASA Astrophysics Data System (ADS)

    Marian, Christel; Hess, Bernd A.; Schöttke, Sigrid; Buenker, Robert J.

    1987-07-01

We report MRD-CI calculations on the ground state X2Π and the excited states A2Σ+ and B2Π of the CNO molecule in linear geometry. The surfaces for oxygen and carbon extraction are calculated using a limited CI expansion of 47 configuration state functions; in the vicinity of the minima obtained with this procedure, large-scale CI calculations are carried out, including determination of the spin-orbit splitting of the 2Π states at the minima. We find that the B2Π state will be difficult to detect spectroscopically due to an avoided crossing just at the equilibrium geometry of the ground state at RCN = 2.25 a.u., RNO = 2.30 a.u. Accordingly we find two shallow minima for B2Π, at RCN = 2.33 a.u., RNO = 2.91 a.u. and at RCN = 2.78 a.u., RNO = 2.28 a.u., respectively.

  2. The Zeldovich approximation and wide-angle redshift-space distortions

    NASA Astrophysics Data System (ADS)

    Castorina, Emanuele; White, Martin

    2018-06-01

The contribution of line-of-sight peculiar velocities to the observed redshift of objects breaks the translational symmetry of the underlying theory, modifying the predicted 2-point functions. These 'wide angle effects' have mostly been studied using linear perturbation theory in the context of the multipoles of the correlation function and power spectrum. In this work we present the first calculation of wide angle terms in the Zeldovich approximation, which is known to be more accurate than linear theory on scales probed by the next generation of galaxy surveys. We present the exact result for dark matter and perturbatively biased tracers as well as the small angle expansion of the configuration- and Fourier-space two-point functions and the connection to the multi-frequency angular power spectrum. We compare different definitions of the line-of-sight direction and discuss how to translate between them. We show that wide angle terms can reach tens of percent of the total signal in a measurement at low redshift in some approximations, and that a generic feature of wide angle effects is to slightly shift the Baryon Acoustic Oscillation scale.

  3. Computational investigation of large-scale vortex interaction with flexible bodies

    NASA Astrophysics Data System (ADS)

    Connell, Benjamin; Yue, Dick K. P.

    2003-11-01

    The interaction of large-scale vortices with flexible bodies is examined with particular interest paid to the energy and momentum budgets of the system. Finite difference direct numerical simulation of the Navier-Stokes equations on a moving curvilinear grid is coupled with a finite difference structural solver of both a linear membrane under tension and linear Euler-Bernoulli beam. The hydrodynamics and structural dynamics are solved simultaneously using an iterative procedure with the external structural forcing calculated from the hydrodynamics at the surface and the flow-field velocity boundary condition given by the structural motion. We focus on an investigation into the canonical problem of a vortex-dipole impinging on a flexible membrane. It is discovered that the structural properties of the membrane direct the interaction in terms of the flow evolution and the energy budget. Pressure gradients associated with resonant membrane response are shown to sustain the oscillatory motion of the vortex pair. Understanding how the key mechanisms in vortex-body interactions are guided by the structural properties of the body is a prerequisite to exploiting these mechanisms.

  4. A linear framework for time-scale separation in nonlinear biochemical systems.

    PubMed

    Gunawardena, Jeremy

    2012-01-01

    Cellular physiology is implemented by formidably complex biochemical systems with highly nonlinear dynamics, presenting a challenge for both experiment and theory. Time-scale separation has been one of the few theoretical methods for distilling general principles from such complexity. It has provided essential insights in areas such as enzyme kinetics, allosteric enzymes, G-protein coupled receptors, ion channels, gene regulation and post-translational modification. In each case, internal molecular complexity has been eliminated, leading to rational algebraic expressions among the remaining components. This has yielded familiar formulas such as those of Michaelis-Menten in enzyme kinetics, Monod-Wyman-Changeux in allostery and Ackers-Johnson-Shea in gene regulation. Here we show that these calculations are all instances of a single graph-theoretic framework. Despite the biochemical nonlinearity to which it is applied, this framework is entirely linear, yet requires no approximation. We show that elimination of internal complexity is feasible when the relevant graph is strongly connected. The framework provides a new methodology with the potential to subdue combinatorial explosion at the molecular level.
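
    A minimal sketch of the linear framework idea: a labelled, strongly connected directed graph defines Laplacian dynamics dx/dt = Lx, and the steady state is read off the one-dimensional kernel of L. The three-state graph and its rate labels below are arbitrary illustrations, not one of the paper's biochemical examples.

        import numpy as np
        from scipy.linalg import null_space

        # Edge labels: rates[(i, j)] is the rate of the edge j -> i for a 3-state cycle.
        rates = {(0, 1): 2.0, (1, 0): 1.0, (1, 2): 3.0, (2, 1): 0.5, (2, 0): 1.5, (0, 2): 0.7}

        n = 3
        L = np.zeros((n, n))
        for (i, j), k in rates.items():
            L[i, j] += k   # inflow into state i from state j
            L[j, j] -= k   # outflow from state j
        # Columns of L sum to zero, so total concentration is conserved.

        steady = null_space(L)[:, 0]
        steady = steady / steady.sum()   # normalise to fractional occupancies
        print(steady)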

  5. Cost-effectiveness analysis of the diarrhea alleviation through zinc and oral rehydration therapy (DAZT) program in rural Gujarat India: an application of the net-benefit regression framework.

    PubMed

    Shillcutt, Samuel D; LeFevre, Amnesty E; Fischer-Walker, Christa L; Taneja, Sunita; Black, Robert E; Mazumder, Sarmila

    2017-01-01

    This study evaluates the cost-effectiveness of the DAZT program for scaling up treatment of acute child diarrhea in Gujarat India using a net-benefit regression framework. Costs were calculated from societal and caregivers' perspectives and effectiveness was assessed in terms of coverage of zinc and of both zinc and Oral Rehydration Salts (ORS). Regression models were tested as simple linear regression, with a specified set of covariates, and with a specified set of covariates and interaction terms; linear regression with endogenous treatment effects was used as the reference case. The DAZT program was cost-effective with over 95% certainty above $5.50 and $7.50 per appropriately treated child in the unadjusted and adjusted models respectively, with specifications including interaction terms being cost-effective with 85-97% certainty. Findings from this study should be combined with other evidence when considering decisions to scale up programs such as the DAZT program to promote the use of ORS and zinc to treat child diarrhea.

  6. Another look at zonal flows: Resonance, shearing, and frictionless saturation

    NASA Astrophysics Data System (ADS)

    Li, J. C.; Diamond, P. H.

    2018-04-01

    We show that shear is not the exclusive parameter that represents all aspects of flow structure effects on turbulence. Rather, wave-flow resonance enters turbulence regulation, both linearly and nonlinearly. Resonance suppresses the linear instability by wave absorption. Flow shear can weaken the resonance, and thus destabilize drift waves, in contrast to the near-universal conventional shear suppression paradigm. Furthermore, consideration of wave-flow resonance resolves the long-standing problem of how zonal flows (ZFs) saturate in the limit of weak or zero frictional drag, and also determines the ZF scale. We show that resonant vorticity mixing, which conserves potential enstrophy, enables ZF saturation in the absence of drag, and so is effective at regulating the Dimits up-shift regime. Vorticity mixing is incorporated as a nonlinear, self-regulation effect in an extended 0D predator-prey model of drift-ZF turbulence. This analysis determines the saturated ZF shear and shows that the mesoscopic ZF width scales as L_ZF ~ f^(3/16) (1-f)^(1/8) ρ_s^(5/8) l_0^(3/8) in the (relevant) adiabatic limit (i.e., τ_ck k_∥² D_∥ ≫ 1). Here f is the fraction of turbulence energy coupled to ZF and l_0 is the base state mixing length, absent ZF shears. We calculate and compare the stationary flow and turbulence level in frictionless, weakly frictional, and strongly frictional regimes. In the frictionless limit, the results differ significantly from conventionally quoted scalings derived for frictional regimes. To leading order, the flow is independent of turbulence intensity. The turbulence level scales as E ~ (γ_L/ε_c)², which indicates the extent of the "near-marginal" regime to be γ_L < ε_c, for the case of avalanche-induced profile variability. Here, ε_c is the rate of dissipation of potential enstrophy and γ_L is the characteristic linear growth rate of fluctuations. The implications for dynamics near marginality of the strong scaling of saturated E with γ_L are discussed.
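
    As a purely numerical illustration of the quoted width scaling, the helper below evaluates L_ZF ~ f^(3/16) (1-f)^(1/8) ρ_s^(5/8) l_0^(3/8); the order-one prefactor is not given in the abstract, so it is left as a free input, and the example values are arbitrary.

    ```python
    def zf_width(f, rho_s, l0, prefactor=1.0):
        """Evaluate L_ZF ~ f^(3/16) (1-f)^(1/8) rho_s^(5/8) l0^(3/8) in the adiabatic
        limit; the order-one prefactor is not specified, so it is an input."""
        return prefactor * f**(3 / 16) * (1 - f)**(1 / 8) * rho_s**(5 / 8) * l0**(3 / 8)

    # Arbitrary example values: the width depends only weakly on the coupled fraction f.
    for f in (0.1, 0.5, 0.9):
        print(f"f = {f:.1f}  ->  L_ZF = {zf_width(f, rho_s=1e-3, l0=1e-2):.3e} (arbitrary units)")
    ```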

  7. Dispersion interactions in Density Functional Theory

    NASA Astrophysics Data System (ADS)

    Andrinopoulos, Lampros; Hine, Nicholas; Mostofi, Arash

    2012-02-01

    Semilocal functionals in Density Functional Theory (DFT) achieve high accuracy simulating a wide range of systems, but miss the effect of dispersion (vdW) interactions, important in weakly bound systems. We study two different methods to include vdW in DFT: First, we investigate a recent approach [1] to evaluate the vdW contribution to the total energy using maximally-localized Wannier functions. Using a set of simple dimers, we show that it has a number of shortcomings that hamper its predictive power; we then develop and implement a series of improvements [2] and obtain binding energies and equilibrium geometries in closer agreement to quantum-chemical coupled-cluster calculations. Second, we implement the vdW-DF functional [3], using Soler's method [4], within ONETEP [5], a linear-scaling DFT code, and apply it to a range of systems. This method within a linear-scaling DFT code allows the simulation of weakly bound systems of larger scale, such as organic/inorganic interfaces, biological systems and implicit solvation models. [1] P. Silvestrelli, JPC A 113, 5224 (2009). [2] L. Andrinopoulos et al, JCP 135, 154105 (2011). [3] M. Dion et al, PRL 92, 246401 (2004). [4] G. Román-Pérez, J.M. Soler, PRL 103, 096102 (2009). [5] C. Skylaris et al, JCP 122, 084119 (2005).

  8. Long-lived light mediator to dark matter and primordial small scale spectrum

    DOE PAGES

    Zhang, Yue

    2015-05-01

    We calculate the early universe evolution of perturbations in the dark matter energy density in the context of simple dark sector models containing a GeV scale light mediator. We consider the case that the mediator is long-lived, with lifetime up to a second, and before decaying it temporarily dominates the energy density of the universe. We show that for primordial perturbations that enter the horizon around this period, the interplay between linear growth during matter domination and collisional damping can generically lead to a sharp peak in the spectrum of dark matter density perturbation. As a result, the population of the smallest DM halos gets enhanced. Possible implications of this scenario are discussed.

  9. Non-linear non-local molecular electrodynamics with nano-optical fields.

    PubMed

    Chernyak, Vladimir Y; Saurabh, Prasoon; Mukamel, Shaul

    2015-10-28

    The interaction of optical fields sculpted on the nano-scale with matter may not be described by the dipole approximation since the fields may vary appreciably across the molecular length scale. Rather than incrementally adding higher multipoles, it is advantageous and more physically transparent to describe the optical process using non-local response functions that intrinsically include all multipoles. We present a semi-classical approach for calculating non-local response functions based on the minimal coupling Hamiltonian. The first, second, and third order response functions are expressed in terms of correlation functions of the charge and the current densities. This approach is based on the gauge invariant current rather than the polarization, and on the vector potential rather than the electric and magnetic fields.

  10. A controls engineering approach for analyzing airplane input-output characteristics

    NASA Technical Reports Server (NTRS)

    Arbuckle, P. Douglas

    1991-01-01

    An engineering approach for analyzing airplane control and output characteristics is presented. State-space matrix equations describing the linear perturbation dynamics are transformed from physical coordinates into scaled coordinates. The scaling is accomplished by applying various transformations to the system to employ prior engineering knowledge of the airplane physics. Two different analysis techniques are then explained. Modal analysis techniques calculate the influence of each system input on each fundamental mode of motion and the distribution of each mode among the system outputs. The optimal steady state response technique computes the blending of steady state control inputs that optimize the steady state response of selected system outputs. Analysis of an example airplane model is presented to demonstrate the described engineering approach.
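
    A minimal sketch of the modal-analysis step described above: the influence of each input on each fundamental mode can be read off from the left eigenvectors of the state matrix, and the distribution of each mode among the outputs from the right eigenvectors. The matrices below are illustrative placeholders, not an actual airplane model.

    ```python
    import numpy as np

    # Illustrative scaled state-space model x' = A x + B u, y = C x (not a real airframe).
    A = np.array([[-0.02, 0.1], [-1.0, -0.5]])
    B = np.array([[0.0], [1.2]])
    C = np.array([[1.0, 0.0]])

    # Right eigenvectors V give the mode shapes; the left eigenvectors (rows of V^-1)
    # measure how strongly each input excites each mode.
    eigvals, V = np.linalg.eig(A)
    W = np.linalg.inv(V)

    mode_input_influence = W @ B        # how each input drives each mode
    mode_output_distribution = C @ V    # how each mode appears in each output

    for k, lam in enumerate(eigvals):
        print(f"mode {k}: eigenvalue {lam:.3f}, "
              f"input influence {mode_input_influence[k]}, "
              f"output content {mode_output_distribution[:, k]}")
    ```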

  11. Relating Ab Initio Mechanical Behavior of Intergranular Glassy Films in β-Si3N4 to Continuum Scales

    NASA Astrophysics Data System (ADS)

    Ouyang, L.; Chen, J.; Ching, W.; Misra, A.

    2006-05-01

    Nanometer thin intergranular glassy films (IGFs) form in polycrystalline ceramics during sintering at high temperatures. The structure and properties of these IGFs are significantly changed by doping with rare earth elements. We have performed highly accurate large-scale ab initio calculations of the mechanical properties of both undoped and yttria-doped (Y-IGF) models by theoretical uniaxial tensile experiments. Uniaxial strain was applied by incrementally stretching the super cell in one direction, while the other two dimensions were kept constant. At each strain, all atoms in the model were fully relaxed using the Vienna Ab initio Simulation Package (VASP). The relaxed model at a given strain serves as the starting position for the next increment of strain. This process is carried out until the total energy (TE) and stress data show that the "sample" is fully fractured. Interesting differences are seen between the stress-strain response of undoped and Y-doped models. For the undoped model, the stress-strain behavior indicates that the initial atomic structure of the IGF is such that there is negligible coupling between the x- and the y-z directions. However, once the behavior becomes non-linear the lateral stresses increase, indicating that the atomic structure evolves with loading [1]. To relate the ab initio calculations to the continuum scales we analyze the atomic-scale deformation field under this uniaxial loading [1]. The applied strain in the x-direction is mostly accommodated by the IGF part of the model and the crystalline part experiences almost negligible strain. As the overall strain on the sample is incrementally increased, the local strain field evolves such that locations proximal to the softer spots attract higher strains. As the load progresses, the strain concentration spots coalesce and eventually form a persistent strain localization zone across the IGF. The deformation pattern obtained through ab initio calculations indicates that it is possible to construct discrete grain-scale models that may be used to bridge these calculations to the continuum scale for finite element analysis. Reference: 1. J. Chen, L. Ouyang, P. Rulis, A. Misra, W. Y. Ching, Phys. Rev. Lett. 95, 256103 (2005)

  12. Implementation and performance of FDPS: a framework for developing parallel particle simulation codes

    NASA Astrophysics Data System (ADS)

    Iwasawa, Masaki; Tanikawa, Ataru; Hosono, Natsuki; Nitadori, Keigo; Muranushi, Takayuki; Makino, Junichiro

    2016-08-01

    We present the basic idea, implementation, measured performance, and performance model of FDPS (Framework for Developing Particle Simulators). FDPS is an application-development framework which helps researchers to develop simulation programs using particle methods for large-scale distributed-memory parallel supercomputers. A particle-based simulation program for distributed-memory parallel computers needs to perform domain decomposition, exchange of particles which are not in the domain of each computing node, and gathering of the particle information in other nodes which are necessary for interaction calculation. Also, even if distributed-memory parallel computers are not used, in order to reduce the amount of computation, algorithms such as the Barnes-Hut tree algorithm or the Fast Multipole Method should be used in the case of long-range interactions. For short-range interactions, some methods to limit the calculation to neighbor particles are required. FDPS provides all of these functions which are necessary for efficient parallel execution of particle-based simulations as "templates," which are independent of the actual data structure of particles and the functional form of the particle-particle interaction. By using FDPS, researchers can write their programs with the amount of work necessary to write a simple, sequential and unoptimized program of O(N²) calculation cost, and yet the program, once compiled with FDPS, will run efficiently on large-scale parallel supercomputers. A simple gravitational N-body program can be written in around 120 lines. We report the actual performance of these programs and the performance model. The weak scaling performance is very good, and almost linear speed-up was obtained for up to the full system of the K computer. The minimum calculation time per timestep is in the range of 30 ms (N = 10⁷) to 300 ms (N = 10⁹). These are currently limited by the time for the calculation of the domain decomposition and communication necessary for the interaction calculation. We discuss how we can overcome these bottlenecks.
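
    FDPS itself is a C++ template framework, so the sketch below is only a Python illustration of the "simple, sequential and unoptimized" O(N²) interaction calculation that a user would supply and that the framework then parallelizes and accelerates with tree or FMM algorithms; the softening and particle data are arbitrary.

    ```python
    import numpy as np

    def direct_sum_accelerations(pos, mass, G=1.0, eps=1e-3):
        """O(N^2) softened gravitational accelerations: the simple, sequential
        baseline that a framework like FDPS parallelises and accelerates."""
        n = len(mass)
        acc = np.zeros_like(pos)
        for i in range(n):
            d = pos - pos[i]                      # displacements to all particles
            r2 = (d**2).sum(axis=1) + eps**2      # softened squared distances
            r2[i] = np.inf                        # exclude the self-interaction
            acc[i] = (G * mass[:, None] * d / r2[:, None]**1.5).sum(axis=0)
        return acc

    rng = np.random.default_rng(0)
    pos = rng.normal(size=(128, 3))
    mass = np.full(128, 1.0 / 128)
    print("accelerations of the first two particles:\n",
          direct_sum_accelerations(pos, mass)[:2])
    ```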

  13. New Method for Solving Inductive Electric Fields in the Ionosphere

    NASA Astrophysics Data System (ADS)

    Vanhamäki, H.

    2005-12-01

    We present a new method for calculating inductive electric fields in the ionosphere. It is well established that on large scales the ionospheric electric field is a potential field. This is understandable, since the temporal variations of large scale current systems are generally quite slow, in the timescales of several minutes, so inductive effects should be small. However, studies of Alfven wave reflection have indicated that in some situations inductive phenomena could well play a significant role in the reflection process, and thus modify the nature of ionosphere-magnetosphere coupling. The inputs to our calculation method are the time series of the potential part of the ionospheric electric field, together with the Hall and Pedersen conductances. The output is the time series of the induced rotational part of the ionospheric electric field. The calculation method works in the time-domain and can be used with non-uniform, time-dependent conductances. In addition, no particular symmetry requirements are imposed on the input potential electric field. The presented method makes use of special non-local vector basis functions called Cartesian Elementary Current Systems (CECS). This vector basis offers a convenient way of representing curl-free and divergence-free parts of 2-dimensional vector fields and makes it possible to solve the induction problem using simple linear algebra. The new calculation method is validated by comparing it with previously published results for Alfven wave reflection from a uniformly conducting ionosphere.

  14. Spectroscopic (FT-IR, FT-Raman and UV) investigation, NLO, NBO, molecular orbital and MESP analysis of 2-{2-[(2,6-dichlorophenyl)amino]phenyl}acetic acid

    NASA Astrophysics Data System (ADS)

    Govindasamy, P.; Gunasekaran, S.

    2015-02-01

    In this work, FT-IR and FT-Raman spectra of 2-{2-[(2,6-dichlorophenyl)amino]phenyl}acetic acid (abbreviated as 2DCPAPAA) have been reported in the regions 4000-450 cm-1 and 4000-50 cm-1, respectively. The molecular structure, geometry optimization, intensities and vibrational frequencies were obtained at the ab initio and DFT (B3LYP) levels of theory with the 6-311++G(d,p) standard basis set and a different scaling of the calculated wave numbers. The complete vibrational assignments were performed on the basis of the potential energy distribution (PED) of the vibrational modes calculated using the vibrational energy distribution analysis (VEDA 4) program. The harmonic frequencies were calculated and the scaled values were compared with experimental FT-IR and FT-Raman data. The observed and the calculated frequencies are found to be in good agreement. Stability of the molecule arising from hyperconjugative interactions and charge delocalization has been analyzed using natural bond orbital (NBO) analysis. The thermodynamic properties of the title compound at different temperatures reveal the correlations between standard heat capacities (C), standard entropies (S) and standard enthalpy changes (ΔH). The important non-linear optical properties such as the electric dipole moment, polarizability and first hyperpolarizability of 2DCPAPAA have been computed using B3LYP/6-311++G(d,p) quantum chemical calculations. The natural charges, HOMO, LUMO, chemical hardness (η), chemical potential (μ), electronegativity (χ) and electrophilicity (ω) values are calculated and reported. The oscillator strengths, wavelengths and energies of 2DCPAPAA calculated by TD-DFT complement the experimental findings. The molecular electrostatic potential (MESP) surfaces of the molecule were constructed.
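
    A small illustration of the kind of wavenumber scaling referred to above: a single empirical factor applied to harmonic wavenumbers, chosen by least squares against experimental bands. The numbers below are placeholders, not the 2DCPAPAA data.

    ```python
    import numpy as np

    # Hypothetical harmonic wavenumbers (cm^-1) from a B3LYP-style calculation and
    # matching experimental bands; the values are placeholders for illustration.
    calc_harmonic = np.array([3580.0, 3095.0, 1742.0, 1618.0, 1105.0])
    experimental  = np.array([3435.0, 2970.0, 1693.0, 1577.0, 1082.0])

    def rms_deviation(scale):
        return np.sqrt(np.mean((scale * calc_harmonic - experimental) ** 2))

    # Least-squares optimal single scale factor (closed form for one parameter).
    best = (calc_harmonic @ experimental) / (calc_harmonic @ calc_harmonic)
    print(f"optimal scale factor: {best:.4f}, RMS deviation: {rms_deviation(best):.1f} cm^-1")
    ```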

  15. Spectroscopic (FT-IR, FT-Raman and UV) investigation, NLO, NBO, molecular orbital and MESP analysis of 2-{2-[(2,6-dichlorophenyl)amino]phenyl}acetic acid.

    PubMed

    Govindasamy, P; Gunasekaran, S

    2015-02-05

    In this work, FT-IR and FT-Raman spectra of 2-{2-[(2,6-dichlorophenyl)amino]phenyl}acetic acid (abbreviated as 2DCPAPAA) have been reported in the regions 4000-450 cm(-1) and 4000-50 cm(-1), respectively. The molecular structure, geometry optimization, intensities and vibrational frequencies were obtained at the ab initio and DFT (B3LYP) levels of theory with the 6-311++G(d,p) standard basis set and a different scaling of the calculated wave numbers. The complete vibrational assignments were performed on the basis of the potential energy distribution (PED) of the vibrational modes calculated using the vibrational energy distribution analysis (VEDA 4) program. The harmonic frequencies were calculated and the scaled values were compared with experimental FT-IR and FT-Raman data. The observed and the calculated frequencies are found to be in good agreement. Stability of the molecule arising from hyperconjugative interactions and charge delocalization has been analyzed using natural bond orbital (NBO) analysis. The thermodynamic properties of the title compound at different temperatures reveal the correlations between standard heat capacities (C), standard entropies (S) and standard enthalpy changes (ΔH). The important non-linear optical properties such as the electric dipole moment, polarizability and first hyperpolarizability of 2DCPAPAA have been computed using B3LYP/6-311++G(d,p) quantum chemical calculations. The natural charges, HOMO, LUMO, chemical hardness (η), chemical potential (μ), electronegativity (χ) and electrophilicity (ω) values are calculated and reported. The oscillator strengths, wavelengths and energies of 2DCPAPAA calculated by TD-DFT complement the experimental findings. The molecular electrostatic potential (MESP) surfaces of the molecule were constructed.

  16. Cosmic-string-induced hot dark matter perturbations

    NASA Technical Reports Server (NTRS)

    Van Dalen, Anthony

    1990-01-01

    This paper investigates the evolution of initially relativistic matter, radiation, and baryons around cosmic string seed perturbations. A detailed analysis of the linear evolution of spherical perturbations in a universe is carried out, and this formalism is used to study the evolution of perturbations around a sphere of uniform density and fixed radius, approximating a loop of cosmic string. It was found that, on scales less than a few megaparsec, the results agree with the nonrelativistic calculation of previous authors. On greater scales, there is a deviation approaching a factor of 2-3 in the perturbation mass. It is shown that a scenario with cosmic strings, hot dark matter, and a Hubble constant greater than 75 km/sec per Mpc can generally produce structure on the observed mass scales and at the appropriate time: 1 + z = about 4 for galaxies and 1 + z = about 1.5 for Abell clusters.

  17. Cosmic background radiation anisotropies in universes dominated by nonbaryonic dark matter

    NASA Technical Reports Server (NTRS)

    Bond, J. R.; Efstathiou, G.

    1984-01-01

    Detailed calculations of the temperature fluctuations in the cosmic background radiation for universes dominated by massive collisionless relics of the big bang are presented. An initially adiabatic constant curvature perturbation spectrum is assumed. In models with cold dark matter, the simplest hypothesis - that galaxies follow the mass distribution - leads to small-scale anisotropies which exceed current observational limits if omega is less than 0.2 h^(-4/3). Since low values of omega are indicated by dynamical studies of galaxy clustering, cold particle models in which light traces mass are probably incorrect. Reheating of the pregalactic medium is unlikely to modify this conclusion. In cold particle or neutrino-dominated universes with omega = 1, the predictions presented for small-scale and quadrupole anisotropies are below current limits. In all cases, the small-scale fluctuations are predicted to be about 10 percent linearly polarized.

  18. Basin-scale estimates of oceanic primary production by remote sensing - The North Atlantic

    NASA Technical Reports Server (NTRS)

    Platt, Trevor; Caverhill, Carla; Sathyendranath, Shubha

    1991-01-01

    The monthly averaged CZCS data for 1979 are used to estimate annual primary production at ocean basin scales in the North Atlantic. The principal supplementary data used were 873 vertical profiles of chlorophyll and 248 sets of parameters derived from photosynthesis-light experiments. Four different procedures were tested for calculation of primary production. The spectral model with nonuniform biomass was considered as the benchmark for comparison against the other three models. The less complete models gave results that differed by as much as 50 percent from the benchmark. Vertically uniform models tended to underestimate primary production by about 20 percent compared to the nonuniform models. At horizontal scale, the differences between spectral and nonspectral models were negligible. The linear correlation between biomass and estimated production was poor outside the tropics, suggesting caution against the indiscriminate use of biomass as a proxy variable for primary production.

  19. Simulation of Shock-Shock Interaction in Parsec-Scale Jets

    NASA Astrophysics Data System (ADS)

    Fromm, Christian M.; Perucho, Manel; Ros, Eduardo; Mimica, Petar; Savolainen, Tuomas; Lobanov, Andrei P.; Zensus, J. Anton

    The analysis of the radio light curves of the blazar CTA 102 during its 2006 flare revealed a possible interaction between a standing shock wave and a traveling one. In order to better understand this highly non-linear process, we used a relativistic hydrodynamic code to simulate the high energy interaction and its related emission. The calculated synchrotron emission from these simulations showed an increase in turnover flux density, Sm, and turnover frequency, νm, during the interaction and a decrease to their initial values after the passage of the traveling shock wave.

  20. The Finite-Size Scaling Relation for the Order-Parameter Probability Distribution of the Six-Dimensional Ising Model

    NASA Astrophysics Data System (ADS)

    Merdan, Ziya; Karakuş, Özlem

    2016-11-01

    The six-dimensional Ising model with nearest-neighbor pair interactions has been simulated and verified numerically on the Creutz Cellular Automaton by using five-bit demons near the infinite-lattice critical temperature with the linear dimensions L=4,6,8,10. The order-parameter probability distribution for the six-dimensional Ising model has been calculated at the critical temperature. The constants of the analytical function have been estimated by fitting to the probability function obtained numerically at the finite-size critical point.

  1. Modeling Primary Atomization of Liquid Fuels using a Multiphase DNS/LES Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arienti, Marco; Oefelein, Joe; Doisneau, Francois

    2016-08-01

    As part of a Laboratory Directed Research and Development project, we are developing a modeling-and-simulation capability to study fuel direct injection in automotive engines. Predicting mixing and combustion at realistic conditions remains a challenging objective of energy science and a research priority in Sandia's mission-critical area of energy security; it is also relevant to many flows in defense and climate applications. High-performance computing applied to this non-linear, multi-scale problem is key to engine calculations with increased scientific reliability.

  2. A 3D Ginibre Point Field

    NASA Astrophysics Data System (ADS)

    Kargin, Vladislav

    2018-06-01

    We introduce a family of three-dimensional random point fields using the concept of the quaternion determinant. The kernel of each field is an n-dimensional orthogonal projection on a linear space of quaternionic polynomials. We find explicit formulas for the basis of the orthogonal quaternion polynomials and for the kernel of the projection. For the number of particles n → ∞, we calculate the scaling limits of the point field in the bulk and at the center of coordinates. We compare our construction with the previously introduced Fermi-sphere point field process.

  3. Communication: A reduced scaling J-engine based reformulation of SOS-MP2 using graphics processing units.

    PubMed

    Maurer, S A; Kussmann, J; Ochsenfeld, C

    2014-08-07

    We present a low-prefactor, cubically scaling scaled-opposite-spin second-order Møller-Plesset perturbation theory (SOS-MP2) method which is highly suitable for massively parallel architectures like graphics processing units (GPU). The scaling is reduced from O(N⁵) to O(N³) by a reformulation of the MP2-expression in the atomic orbital basis via Laplace transformation and the resolution-of-the-identity (RI) approximation of the integrals in combination with efficient sparse algebra for the 3-center integral transformation. In contrast to previous works that employ GPUs for post Hartree-Fock calculations, we do not simply employ GPU-based linear algebra libraries to accelerate the conventional algorithm. Instead, our reformulation allows us to replace the rate-determining contraction step with a modified J-engine algorithm, which has been proven to be highly efficient on GPUs. Thus, our SOS-MP2 scheme enables us to treat large molecular systems in an accurate and efficient manner on a single GPU-server.
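
    The key ingredient of such AO-basis reformulations is the Laplace identity 1/x = ∫₀^∞ e^(-xt) dt for the orbital-energy denominator, approximated by a short exponential quadrature. The sketch below only illustrates that idea with a generic Gauss-Laguerre grid; production codes use optimised minimax grids, and this is not the authors' implementation.

    ```python
    import numpy as np

    # Laplace identity behind AO-based reformulations: 1/x = ∫_0^∞ exp(-x t) dt, x > 0.
    # A small exponential quadrature replaces the orbital-energy denominator; here a
    # generic 10-point Gauss-Laguerre grid is used purely for illustration.
    t, w = np.polynomial.laguerre.laggauss(10)   # nodes/weights for ∫_0^∞ e^{-t} f(t) dt

    def laplace_reciprocal(x):
        # 1/x = ∫_0^∞ e^{-t} e^{-(x-1)t} dt ≈ Σ_i w_i exp(-(x-1) t_i)
        return float(np.sum(w * np.exp(-(x - 1.0) * t)))

    for gap in (0.5, 1.0, 2.0, 5.0):   # stand-ins for orbital-energy denominators (a.u.)
        print(f"x = {gap}: quadrature {laplace_reciprocal(gap):.6f} vs exact {1.0 / gap:.6f}")
    ```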

  4. Who Will Win?: Predicting the Presidential Election Using Linear Regression

    ERIC Educational Resources Information Center

    Lamb, John H.

    2007-01-01

    This article outlines a linear regression activity that engages learners, uses technology, and fosters cooperation. Students generated least-squares linear regression equations using TI-83 Plus[TM] graphing calculators, Microsoft[C] Excel, and paper-and-pencil calculations using derived normal equations to predict the 2004 presidential election.…
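
    A short sketch of the normal-equations calculation this activity is built around, for a simple line y = a + bx; the (x, y) values below are placeholders, not actual election data.

    ```python
    import numpy as np

    # Placeholder (x, y) pairs standing in for predictor vs. vote share;
    # these are NOT real election data, just an illustration of the calculation.
    x = np.array([41.0, 45.0, 48.0, 52.0, 55.0, 60.0])
    y = np.array([42.5, 44.0, 49.5, 51.0, 56.5, 58.0])

    # Derived normal equations for y = a + b x:
    #   n*a    + (Σx)*b  = Σy
    #   (Σx)*a + (Σx²)*b = Σxy
    A = np.array([[len(x), x.sum()],
                  [x.sum(), (x**2).sum()]])
    rhs = np.array([y.sum(), (x * y).sum()])
    a, b = np.linalg.solve(A, rhs)
    print(f"least-squares line: y = {a:.3f} + {b:.3f} x")
    print("same coefficients from np.polyfit:", np.polyfit(x, y, 1)[::-1])
    ```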

  5. A Method for Calculating Strain Energy Release Rates in Preliminary Design of Composite Skin/Stringer Debonding Under Multi-Axial Loading

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald; Minguet, Pierre J.; OBrien, T. Kevin

    1999-01-01

    Three simple procedures were developed to determine strain energy release rates, G, in composite skin/stringer specimens for various combinations of uniaxial and biaxial (in-plane/out-of-plane) loading conditions. These procedures may be used for parametric design studies in such a way that only a few finite element computations will be necessary for a study of many load combinations. The results were compared with mixed-mode strain energy release rates calculated directly from nonlinear two-dimensional plane-strain finite element analyses using the virtual crack closure technique. The first procedure involved solving three unknown parameters needed to determine the energy release rates. Good agreement was obtained when the external loads were used in the expression derived. This superposition technique was only applicable if the structure exhibits a linear load/deflection behavior. Consequently, a second technique was derived which was applicable in the case of nonlinear load/deformation behavior. The technique involved calculating six unknown parameters from a set of six simultaneous linear equations with data from six nonlinear analyses to determine the energy release rates. This procedure was not time efficient, and hence, less appealing. A third procedure was developed to calculate mixed-mode energy release rates as a function of delamination lengths. This procedure required only one nonlinear finite element analysis of the specimen with a single delamination length to obtain a reference solution for the energy release rates and the scale factors. The delamination was extended in three separate linear models of the local area in the vicinity of the delamination subjected to unit loads to obtain the distribution of G with delamination lengths. Although additional modeling effort is required to create the sub-models, this local technique is efficient for parametric studies.

  6. On the use of finite difference matrix-vector products in Newton-Krylov solvers for implicit climate dynamics with spectral elements

    DOE PAGES

    Woodward, Carol S.; Gardner, David J.; Evans, Katherine J.

    2015-01-01

    Efficient solutions of global climate models require effectively handling disparate length and time scales. Implicit solution approaches allow time integration of the physical system with a step size governed by accuracy of the processes of interest rather than by stability of the fastest time scales present. Implicit approaches, however, require the solution of nonlinear systems within each time step. Usually, Newton's method is applied to solve these systems. Each iteration of Newton's method, in turn, requires the solution of a linear model of the nonlinear system. This model employs the Jacobian of the problem-defining nonlinear residual, but this Jacobian can be costly to form. If a Krylov linear solver is used for the solution of the linear system, the action of the Jacobian matrix on a given vector is required. In the case of spectral element methods, the Jacobian is not calculated but only implemented through matrix-vector products. The matrix-vector multiply can also be approximated by a finite difference approximation which may introduce inaccuracy in the overall nonlinear solver. In this paper, we review the advantages and disadvantages of finite difference approximations of these matrix-vector products for climate dynamics within the spectral element shallow water dynamical core of the Community Atmosphere Model.
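
    The finite-difference matrix-vector product under discussion is Jv ≈ (F(u + εv) − F(u))/ε, which lets a Krylov method run without ever forming the Jacobian. The sketch below applies it to a toy two-equation residual (not the shallow-water dynamical core) inside a basic Newton-GMRES loop.

    ```python
    import numpy as np
    from scipy.sparse.linalg import LinearOperator, gmres

    def residual(u):
        """Toy nonlinear residual F(u) = 0, standing in for an implicit time-step
        residual (illustrative only, not the shallow-water dynamical core)."""
        return np.array([u[0]**2 + u[1]**2 - 4.0,
                         u[0] * u[1] - 1.0])

    def fd_jacvec(F, u, v, eps=1e-7):
        """Finite-difference Jacobian-vector product J(u) v ≈ (F(u + eps v) - F(u)) / eps."""
        return (F(u + eps * v) - F(u)) / eps

    u = np.array([2.0, 1.0])
    for _ in range(8):                                   # Newton iterations
        r = residual(u)
        J = LinearOperator((2, 2), matvec=lambda v, u=u: fd_jacvec(residual, u, v))
        du, _ = gmres(J, -r, atol=1e-12)                 # matrix-free Krylov solve
        u = u + du
    print("solution:", u, " |F(u)| =", np.linalg.norm(residual(u)))
    ```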

  7. Perturbation theory for BAO reconstructed fields: One-loop results in the real-space matter density field

    NASA Astrophysics Data System (ADS)

    Hikage, Chiaki; Koyama, Kazuya; Heavens, Alan

    2017-08-01

    We compute the power spectrum at one-loop order in standard perturbation theory for the matter density field to which a standard Lagrangian baryonic acoustic oscillation (BAO) reconstruction technique is applied. The BAO reconstruction method corrects the bulk motion associated with the gravitational evolution using the inverse Zel'dovich approximation (ZA) for the smoothed density field. We find that the overall amplitude of one-loop contributions in the matter power spectrum substantially decreases after reconstruction. The reconstructed power spectrum thereby approaches the initial linear spectrum when the smoothed density field is close enough to linear, i.e., the smoothing scale R_s ≳ 10 h⁻¹ Mpc. On smaller R_s, however, the deviation from the linear spectrum becomes significant on large scales (k ≲ R_s⁻¹) due to the nonlinearity in the smoothed density field, and the reconstruction is inaccurate. Compared with N-body simulations, we show that the reconstructed power spectrum at one-loop order agrees with simulations better than the unreconstructed power spectrum. We also calculate the tree-level bispectrum in standard perturbation theory to investigate non-Gaussianity in the reconstructed matter density field. We show that the amplitude of the bispectrum significantly decreases for small k after reconstruction and that the tree-level bispectrum agrees well with N-body results in the weakly nonlinear regime.

  8. HT-FRTC: a fast radiative transfer code using kernel regression

    NASA Astrophysics Data System (ADS)

    Thelen, Jean-Claude; Havemann, Stephan; Lewis, Warren

    2016-09-01

    The HT-FRTC is a principal component based fast radiative transfer code that can be used across the electromagnetic spectrum from the microwave through to the ultraviolet to calculate transmittance, radiance and flux spectra. The principal components cover the spectrum at a very high spectral resolution, which allows very fast line-by-line, hyperspectral and broadband simulations for satellite-based, airborne and ground-based sensors. The principal components are derived during a code training phase from line-by-line simulations for a diverse set of atmosphere and surface conditions. The derived principal components are sensor independent, i.e. no extra training is required to include additional sensors. During the training phase we also derive the predictors which are required by the fast radiative transfer code to determine the principal component scores from the monochromatic radiances (or fluxes, transmittances). These predictors are calculated for each training profile at a small number of frequencies, which are selected by a k-means cluster algorithm during the training phase. Until recently the predictors were calculated using a linear regression. However, during a recent rewrite of the code the linear regression was replaced by a Gaussian Process (GP) regression which resulted in a significant increase in accuracy when compared to the linear regression. The HT-FRTC has been trained with a large variety of gases, surface properties and scatterers. Rayleigh scattering as well as scattering by frozen/liquid clouds, hydrometeors and aerosols have all been included. The scattering phase function can be fully accounted for by an integrated line-by-line version of the Edwards-Slingo spherical harmonics radiation code or approximately by a modification to the extinction (Chou scaling).
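
    A hedged sketch of the kind of regression step described above: principal components are derived from training spectra, a few predictor channels are picked by k-means, and a Gaussian-process regression from those predictors to the PC scores is compared against the linear regression it replaced. Everything here (synthetic spectra, scikit-learn estimators, cluster count) is an illustrative stand-in, not the HT-FRTC implementation.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.decomposition import PCA
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(1)

    # Synthetic stand-in for line-by-line training spectra (profiles x channels).
    n_profiles, n_chan = 200, 300
    nu = np.linspace(0.0, 1.0, n_chan)
    state = rng.uniform(0.5, 2.0, size=(n_profiles, 2))          # fake atmospheric state
    spectra = np.exp(-state[:, [0]] * nu) + 0.1 * np.sin(10.0 * state[:, [1]] * nu)

    # Principal components of the training spectra and their scores.
    pca = PCA(n_components=4).fit(spectra)
    scores = pca.transform(spectra)

    # Choose a few predictor channels by k-means clustering of the channels,
    # a stand-in for the frequency-selection step described in the abstract.
    km = KMeans(n_clusters=8, n_init=10, random_state=0).fit(spectra.T)
    sel = [int(np.argmin(np.linalg.norm(spectra.T - c, axis=1))) for c in km.cluster_centers_]
    predictors = spectra[:, sel]

    # Gaussian-process regression from the selected channels to the PC scores,
    # compared with the simpler linear regression it replaced.
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-6,
                                  normalize_y=True).fit(predictors, scores)
    lin = LinearRegression().fit(predictors, scores)
    print("GP     training R^2:", gp.score(predictors, scores))
    print("linear training R^2:", lin.score(predictors, scores))
    ```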

  9. Experimental and theoretical study of p-nitroacetanilide

    NASA Astrophysics Data System (ADS)

    Gnanasambandan, T.; Gunasekaran, S.; Seshadri, S.

    2014-01-01

    The spectroscopic properties of the p-nitroacetanilide (PNA) were examined by FT-IR, FT-Raman and UV-Vis techniques. FT-IR and FT-Raman spectra in solid state were observed in the region 4000-400 cm-1 and 3500-100 cm-1, respectively. The UV-Vis absorption spectrum of the compound that dissolved in ethanol was recorded in the range of 200-400 nm. The structural and spectroscopic data of the molecule in the ground state were calculated by using density functional theory (DFT) employing B3LYP methods with the 6-31G(d,p) and 6-311+G(d,p) basis sets. The geometry of the molecule was fully optimized, vibrational spectra were calculated and fundamental vibrations were assigned on the basis of the total energy distribution (TED) of the vibrational modes, calculated with scaled quantum mechanics (SQM) method. Thermodynamic properties like entropy, heat capacity and enthalpy have been calculated for the molecule. HOMO-LUMO energy gap has been calculated. The intramolecular contacts have been interpreted using natural bond orbital (NBO) and natural localized molecular orbital (NLMO) analysis. Important non-linear optical (NLO) properties such as electric dipole moment and first hyperpolarizability have been computed using B3LYP quantum chemical calculation.

  10. Simulations of neutral wind shear effect on the equatorial ionosphere irregularities

    NASA Astrophysics Data System (ADS)

    Kim, J.; Chagelishvili, G.; Horton, W.

    2005-12-01

    We present numerical calculations of the large-scale electron density driven by the gradient drift instability in the daytime equatorial electrojet. Under two-fluid theory the linear analysis for kilometer-scale waves leads to the result that all the perturbations are transformed to small scales through linear convection by shear and then damped by diffusion. The inclusion of the nonlinearity enables an inverse energy cascade to provide energy to long scales. The feedback between velocity shear and nonlinearity keeps waves growing and leads to turbulence. In the strongly turbulent regime, the nonlinear states are saturated [1]. Since the convective nonlinearities are isotropic while the interactions of velocity shear with waves are anisotropic, the feedback does not necessarily enable waves to grow. The growth of waves is highly variable with the k-space configuration [2]. Our simulations show that the directional relationship between the vorticity of irregularities and the shear is one of the key factors. Thus during the transient period, the irregularities show the anisotropy of the vorticity power spectrum. We report the evolution of the power spectrum of the vorticity and density of irregularities and its anisotropic nature as observed. The work was supported in part by NSF Grant ATM-0229863 and ISTC Grant G-553. [1] C. Ronchi, R.N. Sudan, and D.T. Farley. Numerical simulations of large-scale plasma turbulence in the daytime equatorial electrojet. J. Geophys. Res., 96:21263-21279, 1991. [2] G.D. Chagelishvili, R.G. Chanishvili, T.S. Hristov, and J.G. Lominadze. A turbulence model in unbounded smooth shear flows: The weak turbulence approach. JETP, 94(2):434-445, 2002.

  11. Shapes of strong shock fronts in an inhomogeneous solar wind

    NASA Technical Reports Server (NTRS)

    Heinemann, M. A.; Siscoe, G. L.

    1974-01-01

    The shapes expected for solar-flare-produced strong shock fronts in the solar wind have been calculated, large-scale variations in the ambient medium being taken into account. It has been shown that for reasonable ambient solar wind conditions the mean and the standard deviation of the east-west shock normal angle are in agreement with experimental observations including shocks of all strengths. The results further suggest that near a high-speed stream it is difficult to distinguish between corotating shocks and flare-associated shocks on the basis of the shock normal alone. Although the calculated shapes are outside the range of validity of the linear approximation, these results indicate that the variations in the ambient solar wind may account for large deviations of shock normals from the radial direction.

  12. Electrostatic attraction between overall neutral surfaces.

    PubMed

    Adar, Ram M; Andelman, David; Diamant, Haim

    2016-08-01

    Two overall neutral surfaces with positively and negatively charged domains ("patches") have been shown in recent experiments to exhibit long-range attraction when immersed in an ionic solution. Motivated by the experiments, we calculate analytically the osmotic pressure between such surfaces within the Poisson-Boltzmann framework, using a variational principle for the surface-averaged free energy. The electrostatic potential, calculated beyond the linear Debye-Hückel theory, yields an overall attraction at large intersurface separations, over a wide range of the system's controlled length scales. In particular, the attraction is stronger and occurs at smaller separations for surface patches of larger size and charge density. In this large patch limit, we find that the attraction-repulsion crossover separation is inversely proportional to the square of the patch-charge density and to the Debye screening length.

  13. Verification of GENE and GYRO with L-mode and I-mode plasmas in Alcator C-Mod

    DOE PAGES

    Mikkelsen, D. R.; Howard, N. T.; White, A. E.; ...

    2018-04-25

    Here, verification comparisons are carried out for L-mode and I-mode plasma conditions in Alcator C-Mod. We compare linear and nonlinear ion-scale calculations by the gyrokinetic codes GENE and GYRO to each other and to the experimental power balance analysis. The two gyrokinetic codes' linear growth rates and real frequencies are in good agreement throughout all the ion temperature gradient mode branches and most of the trapped electron mode branches of the k_yρ_s spectra at r/a = 0.65, 0.7, and 0.8. The shapes of the toroidal mode spectra of heat fluxes in nonlinear simulations are very similar for k_yρ_s ≤ 0.5, but in most cases GENE has a relatively higher heat flux than GYRO at higher mode numbers.

  14. The Power Spectrum of Ionic Nanopore Currents: The Role of Ion Correlations.

    PubMed

    Zorkot, Mira; Golestanian, Ramin; Bonthuis, Douwe Jan

    2016-04-13

    We calculate the power spectrum of electric-field-driven ion transport through nanometer-scale membrane pores using both linearized mean-field theory and Langevin dynamics simulations. Remarkably, the linearized mean-field theory predicts a plateau in the power spectral density at low frequency ω, which is confirmed by the simulations at low ion concentration. At high ion concentration, however, the power spectral density follows a power law that is reminiscent of the 1/ω(α) dependence found experimentally at low frequency. On the basis of simulations with and without ion-ion interactions, we attribute the low-frequency power-law dependence to ion-ion correlations. We show that neither a static surface charge density, nor an increased pore length, nor an increased ion valency have a significant effect on the shape of the power spectral density at low frequency.
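
    The quantity discussed here, the power spectral density of a fluctuating current, can be estimated from any simulated trace with a Welch periodogram. The sketch below uses a synthetic Ornstein-Uhlenbeck current, whose Lorentzian spectrum has exactly the kind of low-frequency plateau described; it is a stand-in, not the paper's Langevin data.

    ```python
    import numpy as np
    from scipy.signal import welch

    # Synthetic ionic-current trace: an Ornstein-Uhlenbeck process whose Lorentzian
    # spectrum has a flat plateau at low frequency (a stand-in for the Langevin data).
    dt, n, tau, sigma = 1e-6, 2**19, 1e-4, 1.0    # 1 MHz sampling, 0.1 ms correlation time
    rng = np.random.default_rng(0)
    alpha = np.exp(-dt / tau)
    kicks = sigma * np.sqrt(1.0 - alpha**2) * rng.standard_normal(n)
    current = np.empty(n)
    current[0] = 0.0
    for i in range(1, n):
        current[i] = alpha * current[i - 1] + kicks[i]

    # Welch estimate of the power spectral density of the current fluctuations.
    f, psd = welch(current, fs=1.0 / dt, nperseg=2**14)
    plateau = psd[(f > 0.0) & (f < 500.0)].mean()
    print(f"estimated low-frequency plateau: {plateau:.3e} (current^2 per Hz)")
    ```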

  15. Verification of GENE and GYRO with L-mode and I-mode plasmas in Alcator C-Mod

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mikkelsen, D. R.; Howard, N. T.; White, A. E.

    Here, verification comparisons are carried out for L-mode and I-mode plasma conditions in Alcator C-Mod. We compare linear and nonlinear ion-scale calculations by the gyrokinetic codes GENE and GYRO to each other and to the experimental power balance analysis. The two gyrokinetic codes' linear growth rates and real frequencies are in good agreement throughout all the ion temperature gradient mode branches and most of the trapped electron mode branches of the k_yρ_s spectra at r/a = 0.65, 0.7, and 0.8. The shapes of the toroidal mode spectra of heat fluxes in nonlinear simulations are very similar for k_yρ_s ≤ 0.5, but in most cases GENE has a relatively higher heat flux than GYRO at higher mode numbers.

  16. Simple scale interpolator facilitates reading of graphs

    NASA Technical Reports Server (NTRS)

    Fazio, A.; Henry, B.; Hood, D.

    1966-01-01

    Set of cards with scale divisions and a scale finder permits accurate reading of the coordinates of points on linear or logarithmic graphs plotted on rectangular grids. The set contains 34 different scales for linear plotting and 28 single cycle scales for log plots.

  17. 3D CSEM inversion based on goal-oriented adaptive finite element method

    NASA Astrophysics Data System (ADS)

    Zhang, Y.; Key, K.

    2016-12-01

    We present a parallel 3D frequency domain controlled-source electromagnetic inversion code named MARE3DEM. Non-linear inversion of observed data is performed with the Occam variant of regularized Gauss-Newton optimization. The forward operator is based on the goal-oriented finite element method that efficiently calculates the responses and sensitivity kernels in parallel using a data decomposition scheme where independent modeling tasks contain different frequencies and subsets of the transmitters and receivers. To accommodate complex 3D conductivity variation with high flexibility and precision, we adopt the dual-grid approach where the forward mesh conforms to the inversion parameter grid and is adaptively refined until the forward solution converges to the desired accuracy. This dual-grid approach is memory efficient, since the inverse parameter grid remains independent from the fine meshing generated around the transmitters and receivers by the adaptive finite element method. In addition, the unstructured inverse mesh efficiently handles multiple scale structures and allows for fine-scale model parameters within the region of interest. Our mesh generation engine keeps track of the refinement hierarchy so that the map of conductivity and sensitivity kernel between the forward and inverse mesh is retained. We employ the adjoint-reciprocity method to calculate the sensitivity kernels, which establish a linear relationship between changes in the conductivity model and changes in the modeled responses. Our code uses a direct solver for the linear systems, so the adjoint problem is efficiently computed by re-using the factorization from the primary problem. Further computational efficiency and scalability is obtained in the regularized Gauss-Newton portion of the inversion using parallel dense matrix-matrix multiplication and matrix factorization routines implemented with the ScaLAPACK library. We show the scalability, reliability and the potential of the algorithm to deal with complex geological scenarios by applying it to the inversion of synthetic marine controlled-source EM data generated for a complex 3D offshore model with significant seafloor topography.
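
    The inversion engine described is an Occam-style regularized Gauss-Newton iteration. The sketch below shows the generic update (JᵀJ + μRᵀR)Δm = Jᵀr − μRᵀRm on a toy forward operator; the forward function, roughness operator and fixed μ are illustrative assumptions, not the MARE3DEM scheme, which also searches over the regularization parameter.

    ```python
    import numpy as np

    def forward(m, x):
        """Toy nonlinear forward operator standing in for the 3D CSEM FE solver."""
        return m[0] * np.exp(-m[1] * x) + m[2] * x

    def jacobian(m, x, h=1e-6):
        """Central finite-difference sensitivities of the toy forward operator."""
        J = np.zeros((x.size, m.size))
        for k in range(m.size):
            dm = np.zeros_like(m)
            dm[k] = h
            J[:, k] = (forward(m + dm, x) - forward(m - dm, x)) / (2.0 * h)
        return J

    # Synthetic data from a "true" model plus noise.
    x = np.linspace(0.0, 4.0, 40)
    m_true = np.array([2.0, 0.8, 0.3])
    rng = np.random.default_rng(0)
    d_obs = forward(m_true, x) + 0.01 * rng.standard_normal(x.size)

    R = np.diff(np.eye(3), axis=0)        # first-difference roughness operator
    mu = 1e-2                             # fixed trade-off parameter (Occam varies this)
    m = np.array([1.5, 1.0, 0.5])

    for _ in range(15):
        J = jacobian(m, x)
        r = d_obs - forward(m, x)
        # Regularized Gauss-Newton update: (J^T J + mu R^T R) dm = J^T r - mu R^T R m
        dm = np.linalg.solve(J.T @ J + mu * R.T @ R, J.T @ r - mu * R.T @ R @ m)
        # Crude step halving keeps this toy iteration from overshooting.
        step = 1.0
        while (np.linalg.norm(d_obs - forward(m + step * dm, x)) > np.linalg.norm(r)
               and step > 1e-3):
            step *= 0.5
        m = m + step * dm
    print("recovered model:", m, " true model:", m_true)
    ```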

  18. Patient and Societal Value Functions for the Testing Morbidities Index

    PubMed Central

    Swan, John Shannon; Kong, Chung Yin; Lee, Janie M.; Akinyemi, Omosalewa; Halpern, Elkan F.; Lee, Pablo; Vavinskiy, Sergey; Williams, Olubunmi; Zoltick, Emilie S.; Donelan, Karen

    2013-01-01

    Background We developed preference-based and summated scale scoring for the Testing Morbidities Index (TMI) classification, which addresses short-term effects on quality of life from diagnostic testing before, during and after a testing procedure. Methods The two TMI value functions utilize multiattribute value techniques; one is patient-based and the other has a societal perspective. 206 breast biopsy patients and 466 (societal) subjects informed the models. Due to a lack of standard short-term methods for this application, we utilized the visual analog scale (VAS). Waiting trade-off (WTO) tolls provided an additional option for linear transformation of the TMI. We randomized participants to one of three surveys: the first derived weights for generic testing morbidity attributes and levels of severity with the VAS; a second developed VAS values and WTO tolls for linear transformation of the TMI to a death-healthy scale; the third addressed initial validation in a specific test (breast biopsy). 188 patients and 425 community subjects participated in initial validation, comparing direct VAS and WTO values to the TMI. Alternative TMI scoring as a non-preference summated scale was included, given evidence of construct and content validity. Results The patient model can use an additive function, while the societal model is multiplicative. Direct VAS and the VAS-scaled TMI were correlated across modeling groups (r=0.45 to 0.62) and agreement was comparable to the value function validation of the Health Utilities Index 2. Mean Absolute Difference (MAD) calculations showed a range of 0.07–0.10 in patients and 0.11–0.17 in subjects. MAD for direct WTO tolls compared to the WTO-scaled TMI varied closely around one quality-adjusted life day. Conclusions The TMI shows initial promise in measuring short-term testing-related health states. PMID:23689044

  19. Modeling time-coincident ultrafast electron transfer and solvation processes at molecule-semiconductor interfaces

    NASA Astrophysics Data System (ADS)

    Li, Lesheng; Giokas, Paul G.; Kanai, Yosuke; Moran, Andrew M.

    2014-06-01

    Kinetic models based on Fermi's Golden Rule are commonly employed to understand photoinduced electron transfer dynamics at molecule-semiconductor interfaces. Implicit in such second-order perturbative descriptions is the assumption that nuclear relaxation of the photoexcited electron donor is fast compared to electron injection into the semiconductor. This approximation breaks down in systems where electron transfer transitions occur on the 100-fs time scale. Here, we present a fourth-order perturbative model that captures the interplay between time-coincident electron transfer and nuclear relaxation processes initiated by light absorption. The model consists of a fairly small number of parameters, which can be derived from standard spectroscopic measurements (e.g., linear absorbance, fluorescence) and/or first-principles electronic structure calculations. Insights provided by the model are illustrated for a two-level donor molecule coupled to both (i) a single acceptor level and (ii) a density of states (DOS) calculated for TiO2 using a first-principles electronic structure theory. These numerical calculations show that second-order kinetic theories fail to capture basic physical effects when the DOS exhibits narrow maxima near the energy of the molecular excited state. Overall, we conclude that the present fourth-order rate formula constitutes a rigorous and intuitive framework for understanding photoinduced electron transfer dynamics that occur on the 100-fs time scale.

  20. Fragment approach to constrained density functional theory calculations using Daubechies wavelets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ratcliff, Laura E.; Genovese, Luigi; Mohr, Stephan

    2015-06-21

    In a recent paper, we presented a linear scaling Kohn-Sham density functional theory (DFT) code based on Daubechies wavelets, where a minimal set of localized support functions are optimized in situ and therefore adapted to the chemical properties of the molecular system. Thanks to the systematically controllable accuracy of the underlying basis set, this approach is able to provide an optimal contracted basis for a given system: accuracies for ground state energies and atomic forces are of the same quality as an uncontracted, cubic scaling approach. This basis set offers, by construction, a natural subset where the density matrix of the system can be projected. In this paper, we demonstrate the flexibility of this minimal basis formalism in providing a basis set that can be reused as-is, i.e., without reoptimization, for charge-constrained DFT calculations within a fragment approach. Support functions, represented in the underlying wavelet grid, of the template fragments are roto-translated with high numerical precision to the required positions and used as projectors for the charge weight function. We demonstrate the interest of this approach to express highly precise and efficient calculations for preparing diabatic states and for the computational setup of systems in complex environments.

  1. Probabilistic measurement of non-physical constructs during early childhood: Epistemological implications for advancing psychosocial science

    NASA Astrophysics Data System (ADS)

    Bezruczko, N.; Fatani, S. S.

    2010-07-01

    Social researchers commonly compute ordinal raw scores and ratings to quantify human aptitudes, attitudes, and abilities, but without a clear understanding of their limitations for scientific knowledge. In this research, common ordinal measures were compared to higher order linear (equal interval) scale measures to clarify implications for objectivity, precision, ontological coherence, and meaningfulness. Raw score gains, residualized raw gains, and linear gains calculated with a Rasch model were compared between Time 1 and Time 2 for observations from two early childhood learning assessments. Comparisons show major inconsistencies between ratings and linear gains. When the gain distribution was dense and relatively compact, and initial status was near the item mid-range, linear measures and ratings were indistinguishable. When Time 1 status was distributed more broadly and the magnitude of change was variable, ratings were unrelated to linear gain, which emphasizes the problematic implications of ordinal measures. Surprisingly, residualized gain scores did not significantly improve ordinal measurement of change. In general, raw scores and ratings may be meaningful in specific samples to establish order and high/low rank, but raw score differences suffer from non-uniform units. Even the meaningfulness of sample comparisons, as well as derived proportions and percentages, is seriously affected by rank order distortions, and these should be avoided.
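
    The distinction between ordinal raw scores and linear (logit) measures can be made concrete with a dichotomous Rasch model: equal raw-score steps map onto unequal logit intervals. The item difficulties below are invented for illustration; this is not the study's calibration.

    ```python
    import numpy as np
    from scipy.optimize import brentq

    # Illustrative item difficulties (logits) for a short dichotomous instrument.
    difficulties = np.array([-1.5, -0.5, 0.0, 0.5, 1.0, 2.0])

    def expected_raw_score(theta):
        """Expected raw score under the Rasch model, sum_i 1/(1+exp(-(theta-b_i)))."""
        return np.sum(1.0 / (1.0 + np.exp(-(theta - difficulties))))

    def raw_to_measure(raw):
        """Linear (logit) measure corresponding to a raw score, by inverting the
        expected-score curve; extreme scores (0 or max) have no finite measure."""
        return brentq(lambda t: expected_raw_score(t) - raw, -10.0, 10.0)

    # Equal raw-score steps correspond to unequal steps on the logit scale.
    for raw in range(1, len(difficulties)):
        print(f"raw score {raw}  ->  measure {raw_to_measure(raw):+.2f} logits")
    ```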

  2. Synchronous fluorescence spectroscopic study of solvatochromic curcumin dye

    NASA Astrophysics Data System (ADS)

    Patra, Digambara; Barakat, Christelle

    2011-09-01

    Curcumin, the main yellow bioactive component of turmeric, has recently attracted attention from chemists due to its wide range of potential biological applications as an antioxidant, an anti-inflammatory, and an anti-carcinogenic agent. This molecule fluoresces weakly and is poorly soluble in water. In this detailed study of curcumin in thirteen different solvents, both the absorption and fluorescence spectra of curcumin were found to be broad; however, a narrower and simpler synchronous fluorescence spectrum of curcumin was obtained at Δλ = 10-20 nm. The Lippert-Mataga plot of curcumin in different solvents illustrated two sets of linearity, which is consistent with the plot of Stokes' shift vs. ET(30). When the Stokes' shift on the wavenumber scale was replaced by the synchronous fluorescence maximum on the nanometer scale, the solvent polarity dependency measured by λSFSmax vs. the Lippert-Mataga plot or ET(30) values showed trends similar to those measured via the Stokes' shift for protic and aprotic solvents. A better linear correlation was found for λSFSmax vs. the π* scale of solvent polarity than for λabsmax, λemmax, or Stokes' shift measurements. Measuring the Stokes' shift requires both the absorption/excitation and the emission (fluorescence) spectra to compute the shift on the wavenumber scale, whereas a single synchronous fluorescence scan provides information about the polarity of the solvent quickly and simply, without that calculation. Curcumin decay properties in all the solvents could be fitted well by a double-exponential decay function.

  3. The Magnetic Reconnection Code: an AMR-based fully implicit simulation suite

    NASA Astrophysics Data System (ADS)

    Germaschewski, K.; Bhattacharjee, A.; Ng, C.-S.

    2006-12-01

    Extended MHD models, which incorporate two-fluid effects, are promising candidates to enhance understanding of collisionless reconnection phenomena in laboratory, space and astrophysical plasma physics. In this paper, we introduce two simulation codes in the Magnetic Reconnection Code suite which integrate reduced and full extended MHD models. Numerical integration of these models comes with two challenges: Small-scale spatial structures, e.g. thin current sheets, develop and must be well resolved by the code. Adaptive mesh refinement (AMR) is employed to provide high resolution where needed while maintaining good performance. Secondly, the two-fluid effects in extended MHD give rise to dispersive waves, which lead to a very stringent CFL condition for explicit codes, while reconnection happens on a much slower time scale. We use a fully implicit Crank-Nicolson time stepping algorithm. Since no efficient preconditioners are available for our system of equations, we instead use a direct solver to handle the inner linear solves. This requires us to actually compute the Jacobian matrix, which is handled by a code generator that calculates the derivative symbolically and then outputs code to calculate it.

  4. MATLAB Stability and Control Toolbox Trim and Static Stability Module

    NASA Technical Reports Server (NTRS)

    Kenny, Sean P.; Crespo, Luis

    2012-01-01

    MATLAB Stability and Control Toolbox (MASCOT) utilizes geometric, aerodynamic, and inertial inputs to calculate air vehicle stability in a variety of critical flight conditions. The code is based on fundamental, non-linear equations of motion and is able to translate results into a qualitative, graphical scale useful to the non-expert. MASCOT was created to provide the conceptual aircraft designer with accurate predictions of air vehicle stability and control characteristics. The code takes as input mass property data in the form of an inertia tensor, aerodynamic loading data, and propulsion (i.e. thrust) loading data. Using fundamental nonlinear equations of motion, MASCOT then calculates vehicle trim and static stability data for the desired flight condition(s). Available flight conditions include six horizontal and six landing rotation conditions with varying options for engine out, crosswind, and sideslip, plus three take-off rotation conditions. Results are displayed through a unique graphical interface developed to provide the conceptual design engineer, who may not be a stability and control expert, with a qualitative scale indicating whether the vehicle has acceptable, marginal, or unacceptable static stability characteristics. If desired, the user can also examine the detailed, quantitative results.

  5. Synchronicity in predictive modelling: a new view of data assimilation

    NASA Astrophysics Data System (ADS)

    Duane, G. S.; Tribbia, J. J.; Weiss, J. B.

    2006-11-01

    The problem of data assimilation can be viewed as one of synchronizing two dynamical systems, one representing "truth" and the other representing "model", with a unidirectional flow of information between the two. Synchronization of truth and model defines a general view of data assimilation, as machine perception, that is reminiscent of the Jung-Pauli notion of synchronicity between matter and mind. The dynamical systems paradigm of the synchronization of a pair of loosely coupled chaotic systems is expected to be useful because quasi-2D geophysical fluid models have been shown to synchronize when only medium-scale modes are coupled. The synchronization approach is equivalent to standard approaches based on least-squares optimization, including Kalman filtering, except in highly non-linear regions of state space where observational noise links regimes with qualitatively different dynamics. The synchronization approach is used to calculate covariance inflation factors from parameters describing the bimodality of a one-dimensional system. The factors agree in overall magnitude with those used in operational practice on an ad hoc basis. The calculation is robust against the introduction of stochastic model error arising from unresolved scales.
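    As a minimal illustration of the underlying paradigm (not the geophysical models used in the paper), the sketch below unidirectionally couples a "truth" Lorenz system to a "model" copy through a single observed variable; the nudging coefficient, time step, and initial conditions are arbitrary assumptions.

    ```python
    import numpy as np

    def lorenz(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        x, y, z = s
        return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

    dt, k = 0.001, 30.0                       # time step and nudging coefficient (assumed values)
    truth = np.array([1.0, 1.0, 1.0])         # the "truth" system
    model = np.array([8.0, -3.0, 20.0])       # the "model", started far from the truth

    for _ in range(200_000):
        truth = truth + dt * lorenz(truth)
        # one-way flow of information: only the observed x of the truth nudges the model
        nudge = np.array([k * (truth[0] - model[0]), 0.0, 0.0])
        model = model + dt * (lorenz(model) + nudge)

    print(np.abs(truth - model))   # differences should be small: the model has synchronized
    ```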

  6. A Method of Calculating Motion Error in a Linear Motion Bearing Stage

    PubMed Central

    Khim, Gyungho; Park, Chun Hong; Oh, Jeong Seok

    2015-01-01

    We report a method of calculating the motion error of a linear motion bearing stage. The transfer function method, which exploits reaction forces of individual bearings, is effective for estimating motion errors; however, it requires the rail-form errors. This is not suitable for a linear motion bearing stage because obtaining the rail-form errors is not straightforward. In the method described here, we use the straightness errors of a bearing block to calculate the reaction forces on the bearing block. The reaction forces were compared with those of the transfer function method. Parallelism errors between two rails were considered, and the motion errors of the linear motion bearing stage were measured and compared with the results of the calculations, revealing good agreement. PMID:25705715

  7. Linear-scaling time-dependent density-functional theory beyond the Tamm-Dancoff approximation: Obtaining efficiency and accuracy with in situ optimised local orbitals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zuehlsdorff, T. J., E-mail: tjz21@cam.ac.uk; Payne, M. C.; Hine, N. D. M.

    2015-11-28

    We present a solution of the full time-dependent density-functional theory (TDDFT) eigenvalue equation in the linear response formalism exhibiting a linear-scaling computational complexity with system size, without relying on the simplifying Tamm-Dancoff approximation (TDA). The implementation relies on representing the occupied and unoccupied subspaces with two different sets of in situ optimised localised functions, yielding a very compact and efficient representation of the transition density matrix of the excitation with the accuracy associated with a systematic basis set. The TDDFT eigenvalue equation is solved using a preconditioned conjugate gradient algorithm that is very memory-efficient. The algorithm is validated on a small test molecule, and good agreement with results obtained from standard quantum chemistry packages is found, with the preconditioner yielding a significant improvement in convergence rates. The method developed in this work is then used to reproduce experimental results for the absorption spectrum of bacteriochlorophyll in an organic solvent, where it is demonstrated that the TDA fails to reproduce the main features of the low-energy spectrum, while the full TDDFT equation yields results in good qualitative agreement with experimental data. Furthermore, the need to include parts of the solvent explicitly in the TDDFT calculations is highlighted, making it necessary to treat large system sizes, which are well within the capabilities of the algorithm introduced here. Finally, the linear-scaling properties of the algorithm are demonstrated by computing the lowest excitation energy of bacteriochlorophyll in solution. The largest systems considered in this work are of the same order of magnitude as a variety of widely studied pigment-protein complexes, opening up the possibility of studying their properties without having to resort to any semiclassical approximations to parts of the protein environment.
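    The paper's own preconditioner and eigensolver are not reproduced in the abstract; the block below is only a generic preconditioned conjugate gradient sketch for a symmetric positive-definite linear system, with a simple Jacobi preconditioner as an assumption, to illustrate the kind of iteration referred to.

    ```python
    import numpy as np

    def pcg(A, b, M_inv_diag, tol=1e-10, max_iter=500):
        """Preconditioned conjugate gradient for A x = b, A symmetric positive-definite.
        M_inv_diag holds the inverse diagonal of a Jacobi-type preconditioner."""
        x = np.zeros_like(b)
        r = b - A @ x
        z = M_inv_diag * r
        p = z.copy()
        rz = r @ z
        for _ in range(max_iter):
            Ap = A @ p
            alpha = rz / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            if np.linalg.norm(r) < tol:
                break
            z = M_inv_diag * r
            rz_new = r @ z
            p = z + (rz_new / rz) * p
            rz = rz_new
        return x

    # Small symmetric positive-definite test problem
    n = 50
    A = np.diag(np.arange(1.0, n + 1)) + 0.01 * np.ones((n, n))
    b = np.ones(n)
    x = pcg(A, b, M_inv_diag=1.0 / np.diag(A))
    print(np.linalg.norm(A @ x - b))   # residual should be ~1e-10 or smaller
    ```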

  8. Design of an ignition target for the laser megajoule, mitigating parametric instabilities

    NASA Astrophysics Data System (ADS)

    Laffite, S.; Loiseau, P.

    2010-10-01

    Laser plasma interaction (LPI) is a critical issue in ignition target design. Based on both scaling laws and two-dimensional calculations, this article describes how we can constrain a laser megajoule (LMJ) [J. Ebrardt and J. M. Chaput, J. Phys.: Conf. Ser. 112, 032005 (2008)] target design by mitigating LPI. An indirect-drive ignition target has been designed for the 2/3 LMJ step. It requires 0.9 MJ of laser energy and 260 TW of laser power to achieve a temperature of 300 eV in a rugby-shaped hohlraum and gives a yield of about 20 MJ. The study focuses on the analysis of linear gains for stimulated Raman and Brillouin scattering. Enlarging the focal spot is an obvious way to reduce linear gains. We show that this reduction is nonlinear with the focal spot size. For relatively small focal spot areas, linear gains are significantly reduced by enlarging the focal spot. However, there is no benefit in making the focal spots too large, because the necessarily larger laser entrance holes require more laser energy. This leads, for a given design, to a minimum value of the linear gains below which one cannot go.

  9. Atomic-scale friction modulated by potential corrugation in multi-layered graphene materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhuang, Chunqiang, E-mail: chunqiang.zhuang@bjut.edu.cn; Liu, Lei

    2015-03-21

    Friction is an important issue that has to be carefully treated in the fabrication of graphene-based nano-scale devices. So far, the friction mechanism of graphene materials on the atomic scale has not been clearly established. Here, first-principles calculations were employed to unveil the friction behaviors and their atomic-scale mechanism. We found that potential corrugations on the sliding surfaces dominate the friction force and the friction anisotropy of graphene materials. Higher friction forces correspond to larger corrugations of the potential energy, which are tuned by the number of graphene layers. The friction anisotropy is determined by the regular distributions of potential energy. Sliding along a fold-line path (hollow-atop-hollow) has a relatively small potential energy barrier. Thus, the linear sliding observed in macroscopic friction experiments may be attributed to the fold-line sliding mode on the atomic scale. These findings can also be extended to other layered materials, such as molybdenum disulfide (MoS2) and graphene-like BN sheets.

  10. Differential pencil beam dose computation model for photons.

    PubMed

    Mohan, R; Chui, C; Lidofsky, L

    1986-01-01

    Differential pencil beam (DPB) is defined as the dose distribution relative to the position of the first collision, per unit collision density, for a monoenergetic pencil beam of photons in an infinite homogeneous medium of unit density. We have generated DPB dose distribution tables for a number of photon energies in water using the Monte Carlo method. The three-dimensional (3D) nature of the transport of photons and electrons is automatically incorporated in DPB dose distributions. Dose is computed by evaluating 3D integrals of DPB dose. The DPB dose computation model has been applied to calculate dose distributions for 60Co and accelerator beams. Calculations for the latter are performed using energy spectra generated with the Monte Carlo program. To predict dose distributions near the beam boundaries defined by the collimation system as well as blocks, we utilize the angular distribution of incident photons. Inhomogeneities are taken into account by attenuating the primary photon fluence exponentially utilizing the average total linear attenuation coefficient of intervening tissue, by multiplying photon fluence by the linear attenuation coefficient to yield the number of collisions in the scattering volume, and by scaling the path between the scattering volume element and the computation point by an effective density.
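    To make the inhomogeneity handling concrete, the sketch below computes the primary collision density along a single ray through a layered medium by exponential attenuation over the density-scaled path, as the model describes; the voxel densities, step size, and attenuation coefficient value are illustrative assumptions, not values from the paper.

    ```python
    import numpy as np

    mu_water = 0.05          # linear attenuation coefficient in water, 1/mm (assumed value)
    step = 1.0               # ray step length, mm
    rel_density = np.array([1.0] * 30 + [0.3] * 40 + [1.0] * 30)  # water / lung-like / water

    # Attenuate the primary fluence along the density-scaled (radiological) path, then multiply
    # by the local attenuation coefficient to obtain the collision density in each voxel.
    radiological_depth = np.cumsum(mu_water * rel_density * step)
    primary_fluence = np.exp(-radiological_depth)
    collision_density = mu_water * rel_density * primary_fluence

    print(collision_density[:5], collision_density[-5:])
    ```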

  11. New algorithms for field-theoretic block copolymer simulations: Progress on using adaptive-mesh refinement and sparse matrix solvers in SCFT calculations

    NASA Astrophysics Data System (ADS)

    Sides, Scott; Jamroz, Ben; Crockett, Robert; Pletzer, Alexander

    2012-02-01

    Self-consistent field theory (SCFT) for dense polymer melts has been highly successful in describing complex morphologies in block copolymers. Field-theoretic simulations such as these are able to access large length and time scales that are difficult or impossible for particle-based simulations such as molecular dynamics. The modified diffusion equations that arise as a consequence of the coarse-graining procedure in the SCF theory can be efficiently solved with a pseudo-spectral (PS) method that uses fast Fourier transforms on uniform Cartesian grids. However, PS methods can be difficult to apply in many block copolymer SCFT simulations (e.g., confinement, interface adsorption) in which small spatial regions might require finer resolution than most of the simulation grid. Progress on using new solver algorithms to address these problems will be presented. The Tech-X Chompst project aims to marry the best of adaptive mesh refinement with linear matrix solver algorithms. The Tech-X code PolySwift++ is an SCFT simulation platform that leverages ongoing development in coupling Chombo, a package for solving PDEs via block-structured AMR calculations and embedded boundaries, with PETSc, a toolkit that includes a large assortment of sparse linear solvers.
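    For readers unfamiliar with the PS method mentioned above, the following is a minimal sketch of a single operator-splitting pseudo-spectral step for the modified diffusion equation dq/ds = laplacian(q) - w*q on a periodic 1-D grid; the field w, grid size, and contour step are arbitrary assumptions, not taken from PolySwift++.

    ```python
    import numpy as np

    N, L, ds = 128, 10.0, 0.01                      # grid points, box length, contour step (assumed)
    x = np.linspace(0.0, L, N, endpoint=False)
    k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)    # wavenumbers for the periodic box
    w = 0.5 * np.cos(2.0 * np.pi * x / L)           # a smooth placeholder potential field

    def ps_step(q, w, k, ds):
        """One pseudo-spectral step for dq/ds = laplacian(q) - w*q (Strang splitting)."""
        q = np.exp(-0.5 * ds * w) * q                         # half step of the potential term
        q = np.fft.ifft(np.exp(-ds * k**2) * np.fft.fft(q))   # full diffusion step in Fourier space
        q = np.exp(-0.5 * ds * w) * q                         # second half step of the potential term
        return q.real

    q = np.ones(N)            # initial condition q(x, s=0) = 1
    for _ in range(100):
        q = ps_step(q, w, k, ds)
    print(q.min(), q.max())
    ```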

  12. Energy conserving, linear scaling Born-Oppenheimer molecular dynamics.

    PubMed

    Cawkwell, M J; Niklasson, Anders M N

    2012-10-07

    Born-Oppenheimer molecular dynamics simulations with long-term conservation of the total energy and a computational cost that scales linearly with system size have been obtained simultaneously. Linear scaling with a low pre-factor is achieved using density matrix purification with sparse matrix algebra and a numerical threshold on matrix elements. The extended Lagrangian Born-Oppenheimer molecular dynamics formalism [A. M. N. Niklasson, Phys. Rev. Lett. 100, 123004 (2008)] yields microcanonical trajectories with the approximate forces obtained from the linear scaling method that exhibit no systematic drift over hundreds of picoseconds and which are indistinguishable from trajectories computed using exact forces.
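    The abstract refers to density matrix purification with sparse matrix algebra and a numerical threshold; below is a minimal dense-matrix sketch of McWeeny purification (P -> 3P^2 - 2P^3) with an element-dropping threshold, purely to illustrate the iteration. The Hamiltonian, threshold value, and chemical-potential guess are made-up test inputs, not the authors' production settings.

    ```python
    import numpy as np

    def mcweeny_purify(P, threshold=1e-6, max_iter=50):
        """Iterate P <- 3P^2 - 2P^3 toward idempotency, dropping tiny elements each sweep."""
        for _ in range(max_iter):
            P2 = P @ P
            P = 3.0 * P2 - 2.0 * (P2 @ P)
            P[np.abs(P) < threshold] = 0.0            # numerical threshold, mimicking sparsity
            if np.linalg.norm(P - P @ P) < 1e-8:      # stop once (nearly) idempotent
                break
        return P

    # Toy symmetric "Hamiltonian" and a purification-friendly initial guess
    rng = np.random.default_rng(0)
    H = rng.standard_normal((20, 20))
    H = 0.5 * (H + H.T)
    evals = np.linalg.eigvalsh(H)
    eps_min, eps_max, mu = evals[0], evals[-1], np.median(evals)   # mu: chemical potential guess
    P0 = (mu * np.eye(20) - H) / (eps_max - eps_min) + 0.5 * np.eye(20)

    P = mcweeny_purify(P0)
    print(np.trace(P))                    # ~10: number of states below mu
    print(np.linalg.norm(P - P @ P))      # small: P is (nearly) idempotent
    ```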

  13. Determining polarizable force fields with electrostatic potentials from quantum mechanical linear response theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Hao; Yang, Weitao, E-mail: weitao.yang@duke.edu; Department of Physics, Duke University, Durham, North Carolina 27708

    We developed a new method to calculate atomic polarizabilities by fitting to the electrostatic potentials (ESPs) obtained from quantum mechanical (QM) calculations within linear response theory. This parallels the conventional approach of fitting atomic charges to electrostatic potentials from the electron density. Our ESP fitting is combined with the induced dipole model under the perturbation of uniform external electric fields of all orientations. QM calculations of the linear response to the external electric fields are used as input, fully consistent with the induced dipole model, which is itself a linear response model. The orientation of the uniform external electric field is integrated over all directions. Integrating over orientations together with the QM linear response calculations makes the fitting results independent of the orientations and magnitudes of the applied uniform external electric fields. Another advantage of our method is that the QM calculation is needed only once, in contrast to the conventional approach, where many QM calculations are needed for many different applied electric fields. The molecular polarizabilities obtained from our method show accuracy comparable to those from fitting directly to experimental or theoretical molecular polarizabilities. Since the ESP is fitted directly, atomic polarizabilities obtained from our method are expected to reproduce electrostatic interactions better. Our method was used to calculate both transferable atomic polarizabilities for polarizable molecular mechanics force fields and nontransferable, molecule-specific atomic polarizabilities.
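    A minimal sketch of the kind of least-squares problem involved: given the change in ESP at a set of grid points when a uniform field is applied, fit isotropic atomic polarizabilities whose induced point dipoles reproduce that ESP change. The geometry, grid, field, and "reference" response below are fabricated test data, and mutual polarization between the induced dipoles is ignored for brevity.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    atoms = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0], [0.0, 1.5, 0.0]])      # 3 atom positions
    grid = rng.uniform(-4.0, 4.0, size=(200, 3)) + np.array([0.0, 0.0, 3.0])   # ESP grid points
    E = np.array([0.0, 0.0, 0.01])                                             # uniform applied field

    # ESP at grid point r from the induced dipole mu = alpha*E on atom a: (mu . (r - R_a)) / |r - R_a|^3
    def design_matrix(atoms, grid, E):
        A = np.zeros((len(grid), len(atoms)))
        for a, R in enumerate(atoms):
            d = grid - R
            r3 = np.linalg.norm(d, axis=1) ** 3
            A[:, a] = d @ E / r3        # column a: ESP change per unit polarizability of atom a
        return A

    A = design_matrix(atoms, grid, E)
    alpha_true = np.array([1.2, 0.8, 0.5])                 # "reference" polarizabilities (made up)
    dESP_ref = A @ alpha_true + 1e-6 * rng.standard_normal(len(grid))   # stand-in for the QM response

    alpha_fit, *_ = np.linalg.lstsq(A, dESP_ref, rcond=None)
    print(alpha_fit)   # should recover approximately [1.2, 0.8, 0.5]
    ```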

  14. Theoretical and spectroscopic studies of a tricyclic antidepressant, imipramine hydrochloride

    NASA Astrophysics Data System (ADS)

    Sagdinc, S. G.; Azkeskin, Caner; Eşme, A.

    2018-06-01

    Imipramine hydrochloride ([H-IMI]Cl), C19H24N2.HCl, is the prototypic tricyclic antidepressant (TCA) inhibitor of norepinephrine and serotonin neuronal reuptake. The molecular structure, molecular electrostatic potential (MEP), natural bond orbital (NBO) analysis, and linear and non-linear optical (NLO) properties of [H-IMI]Cl have been investigated using density functional theory (DFT) calculations at the B3LYP level with the 6-311++G(d,p) basis set. The UV-Vis spectra of [H-IMI]Cl were studied experimentally in water and methanol. TD-DFT calculations in water and methanol were employed to investigate the absorption wavelengths (λ), excitation energies (E), and oscillator strengths (f) for the UV-Vis analysis and the major contributions to the electronic transitions. From the NBO analysis, the orbitals with a stabilization energy E(2) of 192.15 kcal/mol are π*(C5-C18) as the donor NBO and π*(C19-C20) as the acceptor NBO. The FT-IR (4000-400 cm-1) and FT-Raman (3500-50 cm-1) spectra have been measured and analyzed. The assignment of the bands observed in the vibrational spectra has been made by comparison with the theoretical vibrational frequencies calculated using the DFT/B3LYP/6-311++G(d,p) method. The detailed vibrational assignments were performed with the DFT calculation, and the potential energy distribution (PED) of [H-IMI]Cl was obtained with the Vibrational Energy Distribution Analysis 4 (VEDA4) program. The scaled frequencies are in good agreement with the observed spectral patterns.

  15. Calculations of absorbed fractions in small water spheres for low-energy monoenergetic electrons and the Auger-emitting radionuclides (123)I and (125)I.

    PubMed

    Bousis, Christos; Emfietzoglou, Dimitris; Nikjoo, Hooshang

    2012-12-01

    The aim of this work was to calculate the absorbed fraction (AF) of low-energy electrons in small tissue-equivalent spherical volumes by Monte Carlo (MC) track structure simulation and to assess the influence of phase (liquid water versus density-scaled water vapor) and of the continuous-slowing-down approximation (CSDA) used in semi-analytic calculations. An event-by-event MC code simulating the transport of electrons in both the vapor and liquid phases of water, using appropriate electron-water interaction cross sections, was used to quantify the energy deposition of low-energy electrons in spherical volumes. Semi-analytic calculations within the CSDA, using a convolution integral of the Howell range-energy expressions, are also presented for comparison. AFs for spherical volumes with radii from 10 to 1000 nm are presented for monoenergetic electrons over the energy range 100-10,000 eV and for the two Auger-emitting radionuclides (125)I and (123)I. The MC-calculated AFs for the liquid phase are found to be smaller than those for the (density-scaled) gas phase by up to 10-20% for the monoenergetic electrons and 10% for the two Auger emitters. Differences between the liquid-phase MC results and the semi-analytic CSDA calculations are up to ∼55% for the monoenergetic electrons and up to ∼35% for the two Auger emitters. Condensed-phase effects in the inelastic interaction of low-energy electrons with water have a noticeable but relatively small impact on the AF for the energy range and target sizes examined. Depending on the electron energies, the semi-analytic approach may lead to sizeable errors for target sizes with linear dimensions below 1 micron.

  16. The role of zonal flows in the saturation of multi-scale gyrokinetic turbulence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Staebler, G. M.; Candy, J.; Howard, N. T.

    2016-06-15

    The 2D spectrum of the saturated electric potential from gyrokinetic turbulence simulations that include both ion and electron scales (multi-scale) in axisymmetric tokamak geometry is analyzed. The paradigm that the turbulence is saturated when the zonal (axisymmetric) ExB flow shearing rate competes with linear growth is shown not to apply to the electron-scale turbulence. Instead, it is the mixing rate of the zonal ExB velocity spectrum with the turbulent distribution function that competes with linear growth. A model of this mechanism is shown to be able to capture the suppression of electron-scale turbulence by ion-scale turbulence and the threshold for the increase in electron-scale turbulence when the ion-scale turbulence is reduced. The model computes the strength of the zonal flow velocity and the saturated potential spectrum from the linear growth rate spectrum. The model for the saturated electric potential spectrum is applied to a quasilinear transport model and shown to accurately reproduce the electron and ion energy fluxes of the non-linear gyrokinetic multi-scale simulations. The zonal flow mixing saturation model is also shown to reproduce the non-linear upshift in the critical temperature gradient caused by zonal flows in ion-scale gyrokinetic simulations.

  17. A Simple and Convenient Method of Multiple Linear Regression to Calculate Iodine Molecular Constants

    ERIC Educational Resources Information Center

    Cooper, Paul D.

    2010-01-01

    A new procedure using a student-friendly least-squares multiple linear-regression technique utilizing a function within Microsoft Excel is described that enables students to calculate molecular constants from the vibronic spectrum of iodine. This method is advantageous pedagogically as it calculates molecular constants for ground and excited…
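    The article itself uses Excel's LINEST function; as a language-neutral illustration of the same multiple linear regression, the sketch below fits simulated vibronic band positions to a quadratic in (v' + 1/2) to recover the excited-state constants. The numerical values are fabricated for the demonstration and are not the iodine constants from the paper.

    ```python
    import numpy as np

    # Simulated band maxima (cm^-1) for transitions from v'' = 0 to excited-state levels v',
    # using nu(v') = T0 + we*(v' + 1/2) - wexe*(v' + 1/2)^2 with made-up constants.
    T0, we, wexe = 15000.0, 130.0, 1.0
    v = np.arange(10, 40)
    x = v + 0.5
    nu_obs = T0 + we * x - wexe * x**2 + np.random.default_rng(2).normal(0.0, 0.5, v.size)

    # Multiple linear regression: design matrix with columns [1, (v'+1/2), (v'+1/2)^2]
    X = np.column_stack([np.ones_like(x), x, x**2])
    coeffs, *_ = np.linalg.lstsq(X, nu_obs, rcond=None)
    T0_fit, we_fit, wexe_fit = coeffs[0], coeffs[1], -coeffs[2]
    print(T0_fit, we_fit, wexe_fit)   # should be close to 15000, 130, 1.0
    ```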

  18. Communication: An improved linear scaling perturbative triples correction for the domain based local pair-natural orbital based singles and doubles coupled cluster method [DLPNO-CCSD(T)].

    PubMed

    Guo, Yang; Riplinger, Christoph; Becker, Ute; Liakos, Dimitrios G; Minenkov, Yury; Cavallo, Luigi; Neese, Frank

    2018-01-07

    In this communication, an improved perturbative triples correction (T) algorithm for domain-based local pair-natural orbital singles and doubles coupled cluster (DLPNO-CCSD) theory is reported. In our previous implementation, the semi-canonical approximation was used and linear scaling was achieved for both the DLPNO-CCSD and (T) parts of the calculation. In this work, we refer to this previous method as DLPNO-CCSD(T0) to emphasize the semi-canonical approximation. It is well-established that the DLPNO-CCSD method can predict very accurate absolute and relative energies with respect to the parent canonical CCSD method. However, the (T0) approximation may introduce significant errors in absolute energies as the triples correction grows in magnitude. In the majority of cases, the relative energies from (T0) are as accurate as the canonical (T) results themselves. Unfortunately, in rare cases and in particular for small-gap systems, the (T0) approximation breaks down and relative energies show large deviations from the parent canonical CCSD(T) results. To address this problem, an iterative (T) algorithm based on the previous DLPNO-CCSD(T0) algorithm has been implemented [abbreviated here as DLPNO-CCSD(T)]. Using triples natural orbitals to represent the virtual spaces for triples amplitudes, storage bottlenecks are avoided. Various carefully designed approximations ease the computational burden such that, overall, the increase in the DLPNO-(T) calculation time over DLPNO-(T0) amounts to only a factor of about two (depending on the basis set). Benchmark calculations for the GMTKN30 database show that, compared to DLPNO-CCSD(T0), the errors in absolute energies are greatly reduced and relative energies are moderately improved. The particularly problematic case of cumulene chains of increasing lengths is also successfully addressed by DLPNO-CCSD(T).

  19. Communication: An improved linear scaling perturbative triples correction for the domain based local pair-natural orbital based singles and doubles coupled cluster method [DLPNO-CCSD(T)]

    NASA Astrophysics Data System (ADS)

    Guo, Yang; Riplinger, Christoph; Becker, Ute; Liakos, Dimitrios G.; Minenkov, Yury; Cavallo, Luigi; Neese, Frank

    2018-01-01

    In this communication, an improved perturbative triples correction (T) algorithm for domain-based local pair-natural orbital singles and doubles coupled cluster (DLPNO-CCSD) theory is reported. In our previous implementation, the semi-canonical approximation was used and linear scaling was achieved for both the DLPNO-CCSD and (T) parts of the calculation. In this work, we refer to this previous method as DLPNO-CCSD(T0) to emphasize the semi-canonical approximation. It is well-established that the DLPNO-CCSD method can predict very accurate absolute and relative energies with respect to the parent canonical CCSD method. However, the (T0) approximation may introduce significant errors in absolute energies as the triples correction grows in magnitude. In the majority of cases, the relative energies from (T0) are as accurate as the canonical (T) results themselves. Unfortunately, in rare cases and in particular for small-gap systems, the (T0) approximation breaks down and relative energies show large deviations from the parent canonical CCSD(T) results. To address this problem, an iterative (T) algorithm based on the previous DLPNO-CCSD(T0) algorithm has been implemented [abbreviated here as DLPNO-CCSD(T)]. Using triples natural orbitals to represent the virtual spaces for triples amplitudes, storage bottlenecks are avoided. Various carefully designed approximations ease the computational burden such that, overall, the increase in the DLPNO-(T) calculation time over DLPNO-(T0) amounts to only a factor of about two (depending on the basis set). Benchmark calculations for the GMTKN30 database show that, compared to DLPNO-CCSD(T0), the errors in absolute energies are greatly reduced and relative energies are moderately improved. The particularly problematic case of cumulene chains of increasing lengths is also successfully addressed by DLPNO-CCSD(T).

  20. Quantifying Environmental Effects on the Decay of Hole Transfer Couplings in Biosystems.

    PubMed

    Ramos, Pablo; Pavanello, Michele

    2014-06-10

    In the past two decades, many research groups worldwide have tried to understand and categorize simple regimes in the charge transfer of biological systems such as DNA. Theoretically, the two main difficulties for an appropriate description of charge transfer phenomena are, on the one hand, the lack of exact theories for coupled electron-nuclear dynamics and, on the other, the poor quality of the parameters (such as couplings and site energies) needed by model Hamiltonians and nonadiabatic dynamics alike. In this work, we present an application of a previously benchmarked, linear-scaling subsystem density functional theory (DFT) method for the calculation of couplings, site energies, and superexchange decay factors (β) of several biological donor-acceptor dyads, as well as double-stranded DNA oligomers composed of up to five base pairs. The calculations are all-electron and provide a clear view of the role of the environment on superexchange couplings in DNA; they follow experimental trends and confirm previous semiempirical calculations. The subsystem DFT method is proven to be an excellent tool for long-range, bridge-mediated coupling and site energy calculations of embedded molecular systems.
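    To make the superexchange decay factor concrete: in the bridge-mediated regime the coupling is commonly modeled as |V(R)| ~ V0 exp(-beta*R/2), so beta follows from a linear fit of 2 ln|V| against donor-acceptor distance. The coupling values and distances below are invented placeholders, not the paper's subsystem-DFT results.

    ```python
    import numpy as np

    # Hypothetical donor-acceptor distances (Angstrom) and hole-transfer couplings (eV)
    R = np.array([3.4, 6.8, 10.2, 13.6, 17.0])             # e.g., 1-5 base-pair separations
    V = np.array([4.0e-2, 1.1e-2, 3.0e-3, 8.5e-4, 2.3e-4])

    # |V| = V0 * exp(-beta * R / 2)  =>  2*ln|V| = 2*ln(V0) - beta * R
    slope, intercept = np.polyfit(R, 2.0 * np.log(V), 1)
    beta = -slope
    print(f"beta = {beta:.2f} per Angstrom")   # ~0.76 A^-1 for these made-up values
    ```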

  1. Full-wave and ray-based modeling of cross-beam energy transfer between laser beams with distributed phase plates and polarization smoothing

    DOE PAGES

    Follett, R. K.; Edgell, D. H.; Froula, D. H.; ...

    2017-10-20

    Radiation-hydrodynamic simulations of inertial confinement fusion (ICF) experiments rely on ray-based cross-beam energy transfer (CBET) models to calculate laser energy deposition. The ray-based models assume locally plane-wave laser beams and polarization-averaged incoherence between laser speckles for beams with polarization smoothing. The impact of beam speckle and polarization smoothing on CBET is studied using the 3-D wave-based laser-plasma-interaction code LPSE. The results indicate that ray-based models underpredict CBET when the assumption of spatially averaged longitudinal incoherence across the CBET interaction region is violated. A model for CBET between linearly polarized speckled beams is presented that uses ray tracing to solve for the real speckle pattern of the unperturbed laser beams within the eikonal approximation and gives excellent agreement with the wave-based calculations. Lastly, OMEGA-scale 2-D LPSE calculations using ICF-relevant plasma conditions suggest that the impact of beam speckle on laser absorption calculations in ICF implosions is small (< 1%).

  2. Full-wave and ray-based modeling of cross-beam energy transfer between laser beams with distributed phase plates and polarization smoothing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Follett, R. K.; Edgell, D. H.; Froula, D. H.

    Radiation-hydrodynamic simulations of inertial confinement fusion (ICF) experiments rely on ray-based cross-beam energy transfer (CBET) models to calculate laser energy deposition. The ray-based models assume locally plane-wave laser beams and polarization-averaged incoherence between laser speckles for beams with polarization smoothing. The impact of beam speckle and polarization smoothing on CBET is studied using the 3-D wave-based laser-plasma-interaction code LPSE. The results indicate that ray-based models underpredict CBET when the assumption of spatially averaged longitudinal incoherence across the CBET interaction region is violated. A model for CBET between linearly polarized speckled beams is presented that uses ray tracing to solve for the real speckle pattern of the unperturbed laser beams within the eikonal approximation and gives excellent agreement with the wave-based calculations. Lastly, OMEGA-scale 2-D LPSE calculations using ICF-relevant plasma conditions suggest that the impact of beam speckle on laser absorption calculations in ICF implosions is small (< 1%).

  3. Towards a Comprehensive Model of Jet Noise Using an Acoustic Analogy and Steady RANS Solutions

    NASA Technical Reports Server (NTRS)

    Miller, Steven A. E.

    2013-01-01

    An acoustic analogy is developed to predict the noise from jet flows. It contains two source models that independently predict the noise from turbulence and from shock wave shear layer interactions. The acoustic analogy is based on the Euler equations and separates the sources from propagation. Propagation effects are taken into account by calculating the vector Green's function of the linearized Euler equations. The sources are modeled following the work of Tam and Auriault, Morris and Boluriaan, and Morris and Miller. A statistical model of the two-point cross-correlation of the velocity fluctuations is used to describe the turbulence. The acoustic analogy attempts to take into account the correct scaling of the sources for a wide range of nozzle pressure and temperature ratios. It does not make assumptions regarding fine- or large-scale turbulent noise sources, self- or shear-noise, or convective amplification. The acoustic analogy is partially informed by three-dimensional steady Reynolds-Averaged Navier-Stokes (RANS) solutions that include the nozzle geometry. The predictions are compared with experiments on jets operating at subsonic through supersonic conditions and at unheated and heated temperatures. The predictions generally capture the scaling of both mixing noise and broadband shock-associated noise (BBSAN) for the conditions examined, but some discrepancies remain that are due to the accuracy of the steady RANS turbulence model closure, the equivalent sources, and the use of a simplified vector Green's function solver for the linearized Euler equations.

  4. Two-dimensional energy spectra in a high Reynolds number turbulent boundary layer

    NASA Astrophysics Data System (ADS)

    Chandran, Dileep; Baidya, Rio; Monty, Jason; Marusic, Ivan

    2016-11-01

    The current study measures the two-dimensional (2D) spectra of the streamwise velocity component (u) in a high Reynolds number turbulent boundary layer for the first time. A 2D spectrum shows the contribution of streamwise (λx) and spanwise (λy) length scales to the streamwise variance at a given wall height (z). 2D spectra could be a better tool for analysing spectral scaling laws, as they are free of the energy aliasing errors that can be present in one-dimensional spectra. A novel method is used to calculate the 2D spectra from the 2D correlation of u, which is obtained by measuring velocity time series at various spanwise locations using hot-wire anemometry. At low Reynolds number, the shape of the 2D spectra at a constant energy level shows λy ∝ √(zλx) behaviour at larger scales, which is in agreement with the literature. However, at high Reynolds number, it is observed that the square-root relationship gradually transforms into a linear relationship (λy ∝ λx), which could be caused by large packets of eddies whose length grows in proportion to their width. Additionally, we will show that this linear relationship observed at high Reynolds number is consistent with attached eddy predictions. The authors gratefully acknowledge the support from the Australian Research Council.

  5. The allometry of coarse root biomass: log-transformed linear regression or nonlinear regression?

    PubMed

    Lai, Jiangshan; Yang, Bo; Lin, Dunmei; Kerkhoff, Andrew J; Ma, Keping

    2013-01-01

    Precise estimation of root biomass is important for understanding carbon stocks and dynamics in forests. Traditionally, biomass estimates are based on allometric scaling relationships between stem diameter and coarse root biomass calculated using linear regression (LR) on log-transformed data. Recently, it has been suggested that nonlinear regression (NLR) is a preferable fitting method for scaling relationships. But while this claim has been contested on both theoretical and empirical grounds, and statistical methods have been developed to aid in choosing between the two methods in particular cases, few studies have examined the ramifications of erroneously applying NLR. Here, we use direct measurements of 159 trees belonging to three locally dominant species in east China to compare the LR and NLR models of diameter-root biomass allometry. We then contrast model predictions by estimating stand coarse root biomass based on census data from the nearby 24-ha Gutianshan forest plot and by testing the ability of the models to predict known root biomass values measured on multiple tropical species at the Pasoh Forest Reserve in Malaysia. Based on likelihood estimates for model error distributions, as well as the accuracy of extrapolative predictions, we find that LR on log-transformed data is superior to NLR for fitting diameter-root biomass scaling models. More importantly, inappropriately using NLR leads to grossly inaccurate stand biomass estimates, especially for stands dominated by smaller trees.
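    As a minimal illustration of the two fitting strategies being compared (with synthetic data, not the 159-tree dataset), the sketch below fits the power law B = a*D^b to simulated diameter-biomass pairs with multiplicative error, once by linear regression on log-transformed values and once by nonlinear least squares.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    rng = np.random.default_rng(3)
    D = rng.uniform(2.0, 60.0, 150)                     # stem diameters, cm (synthetic)
    a_true, b_true = 0.05, 2.4
    B = a_true * D**b_true * np.exp(rng.normal(0.0, 0.3, D.size))   # multiplicative (lognormal) error

    # (1) Linear regression on log-transformed data: ln B = ln a + b ln D
    b_lr, ln_a_lr = np.polyfit(np.log(D), np.log(B), 1)
    a_lr = np.exp(ln_a_lr)

    # (2) Nonlinear regression directly on the power law (implicitly assumes additive error)
    (a_nlr, b_nlr), _ = curve_fit(lambda d, a, b: a * d**b, D, B, p0=(1.0, 2.0))

    print("LR :", a_lr, b_lr)
    print("NLR:", a_nlr, b_nlr)
    ```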

  6. Modelling climate change responses in tropical forests: similar productivity estimates across five models, but different mechanisms and responses

    NASA Astrophysics Data System (ADS)

    Rowland, L.; Harper, A.; Christoffersen, B. O.; Galbraith, D. R.; Imbuzeiro, H. M. A.; Powell, T. L.; Doughty, C.; Levine, N. M.; Malhi, Y.; Saleska, S. R.; Moorcroft, P. R.; Meir, P.; Williams, M.

    2014-11-01

    Accurately predicting the response of Amazonia to climate change is important for predicting changes across the globe. However, changes in multiple climatic factors simultaneously may result in complex non-linear responses, which are difficult to predict using vegetation models. Using leaf- and canopy-scale observations, this study evaluated the capability of five vegetation models (CLM3.5, ED2, JULES, SiB3, and SPA) to simulate the responses of canopy- and leaf-scale productivity to changes in temperature and drought in an Amazonian forest. The models did not agree as to whether gross primary productivity (GPP) was more sensitive to changes in temperature or precipitation. There was greater model-data consistency in the response of net ecosystem exchange to changes in temperature than in the responses to temperature of leaf area index (LAI), net photosynthesis (An), and stomatal conductance (gs). Modelled canopy-scale fluxes are calculated by scaling leaf-scale fluxes to LAI, and therefore in this study similarities in modelled ecosystem-scale responses to drought and temperature were the result of inconsistent leaf-scale and LAI responses among models. Across the models, the response of An to temperature was more closely linked to stomatal behaviour than to biochemical processes. Consequently, all the models predicted that GPP would be higher if tropical forests were 5 °C colder, closer to the model optima for gs. There was, however, no model consistency in the response of the An-gs relationship when temperature changes and drought were introduced simultaneously. The inconsistencies in the An-gs relationships amongst models were caused by non-linear model responses induced by simultaneous drought and temperature change. To improve the reliability of simulations of the response of Amazonian rainforest to climate change, the mechanistic underpinnings of vegetation models need more complete validation to improve accuracy and consistency in the scaling of processes from leaf to canopy.

  7. Linear and nonlinear response in sheared soft spheres

    NASA Astrophysics Data System (ADS)

    Tighe, Brian

    2013-11-01

    Packings of soft spheres provide an idealized model of foams, emulsions, and grains, while also serving as the canonical example of a system undergoing a jamming transition. Packings' mechanical response has now been studied exhaustively in the context of "strict linear response," i.e., by linearizing about a stable static packing and solving the resulting equations of motion. Both because the system is close to a critical point and because the soft sphere pair potential is non-analytic at the point of contact, it is reasonable to ask under what circumstances strict linear response provides a good approximation to the actual response. We simulate sheared soft sphere packings close to jamming and identify two distinct strain scales: (i) the scale on which strict linear response fails, coinciding with a topological change in the packing's contact network; and (ii) the scale on which linear superposition of the averaged stress-strain curve breaks down. This latter scale provides a "weak linear response" criterion and is likely to be more experimentally relevant.

  8. Langmuir wave damping decreases slowly

    NASA Astrophysics Data System (ADS)

    Rose, Harvey

    2006-10-01

    The onset of stimulated Raman scatter in a single laser speckle occurs (D. S. Montgomery et al., Phys. Plasmas 9, 2311 (2002)) at lower laser intensity, I, than predicted by linear theory based on classical Landau damping, νL, of the SRS daughter Langmuir wave. Does this imply that SRS onset in a speckled laser beam, propagating through a long-scale-length plasma, is also at odds with linear theory? It has been shown (Harvey A. Rose and D. F. DuBois, Phys. Rev. Lett. 72, 2883 (1994)) that linear convective gain in speckles with large fluctuations of I about the average, ⟨I⟩, leads to onset at a value of ⟨I⟩, Ic, small compared to that for onset in a uniform beam. While nonlinear electron trapping effects may occur in very intense speckles, whether or not these effects are sufficient to lower the onset value of ⟨I⟩ below Ic depends on how strongly electrons must be trapped before there is a significant reduction in νL. As the amplitude of an SRS daughter Langmuir wave increases, its νL decreases by the factor ν/ωb, due to the competition between electron trapping, with electron bounce frequency ωb, and escape of these trapped electrons by advection out of a speckle's side at rate ν. This result (Harvey A. Rose and David A. Russell, Phys. Plasmas 8, 4784 (2001)) is valid for ν/ωb ≪ 1. In this talk I present a nonlinear, transit-time damping calculation of νL and find that reduction by a factor of two does not occur until ωb/ν ≈ 5. This slow turn-on of trapping effects suggests that the linear calculation of Ic is NIF relevant.

  9. Design of pressure-sensing diaphragm for MEMS capacitance diaphragm gauge considering size effect

    NASA Astrophysics Data System (ADS)

    Li, Gang; Li, Detian; Cheng, Yongjun; Sun, Wenjun; Han, Xiaodong; Wang, Chengxiang

    2018-03-01

    The MEMS capacitance diaphragm gauge, with a full range of 1-1000 Pa, is of interest for its wide application prospects. The design of the pressure-sensing diaphragm is the key to achieving balanced performance for this kind of gauge. The optimization process of a pressure-sensing diaphragm with an island design for a capacitance diaphragm gauge based on MEMS techniques is reported in this work. For micro-components in the micro-scale range, mechanical properties are very different from those in the macro-scale range, so the size effect should not be ignored. The modified strain gradient elasticity theory, which accounts for the size effect, has been applied to determine the bending rigidity of the pressure-sensing diaphragm, which is then used in the numerical model to calculate the deflection-pressure relation of the diaphragm. From the deflection curves, the capacitance variation can be determined by integrating over the radius of the diaphragm. Finally, the design of the diaphragm has been optimized with respect to three parameters: sensitivity, linearity, and ground capacitance. With this design, a full range of 1-1000 Pa can be achieved while balanced sensitivity, resolution, and linearity are maintained.
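    A minimal sketch of the capacitance-from-deflection step described above: for an assumed deflection profile w(r) of a circular diaphragm over a gap d, the capacitance is obtained by integrating the parallel-plate expression over the radius. The profile shape, gap, and radius below are illustrative assumptions, not the optimized design values.

    ```python
    import numpy as np

    eps0 = 8.854e-12            # vacuum permittivity, F/m
    R = 1.0e-3                  # diaphragm radius, m (assumed)
    d = 2.0e-6                  # undeflected electrode gap, m (assumed)
    w0 = 0.5e-6                 # centre deflection, m (assumed)

    r = np.linspace(0.0, R, 2001)
    w = w0 * (1.0 - (r / R)**2)**2                     # clamped-plate-like deflection profile (assumed)

    # C = integral of eps0 / (d - w(r)) over the plate area, with dA = 2*pi*r dr (trapezoidal rule)
    integrand = 2.0 * np.pi * r * eps0 / (d - w)
    C = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r))

    C0 = eps0 * np.pi * R**2 / d                       # flat (undeflected) capacitance for comparison
    print(C0 * 1e12, "pF  ->", C * 1e12, "pF")
    ```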

  10. Exchange Energy Density Functionals that reproduce the Linear Response Function of the Free Electron Gas

    NASA Astrophysics Data System (ADS)

    García-Aldea, David; Alvarellos, J. E.

    2009-03-01

    We present several nonlocal exchange energy density functionals that reproduce the linear response function of the free electron gas. These nonlocal functionals are constructed following a procedure similar to that used previously for nonlocal kinetic energy density functionals by Chacón-Alvarellos-Tarazona, García-González et al., Wang-Govind-Carter, and García-Aldea-Alvarellos. The exchange response function is not known exactly, so we have used the approximate response function developed by Utsumi and Ichimaru, although we remark that the same ansatz can be used to reproduce any other response function with the same scaling properties. We have developed two families of new nonlocal functionals: one is constructed with a mathematical structure based on the LDA approximation (the Dirac functional for the exchange), and for the second the structure of the second-order gradient expansion approximation is taken as a model. The functionals are constructed in such a way that they can be used in localized systems (using real-space calculations) and in extended systems (using momentum space, and achieving quasilinear scaling with system size if a constant reference electron density is defined).

  11. Insights into the regioselectivity and RNA-binding affinity of HIV-1 nucleocapsid protein from linear-scaling quantum methods.

    PubMed

    Khandogin, Jana; Musier-Forsyth, Karin; York, Darrin M

    2003-07-25

    Human immunodeficiency virus type 1 (HIV-1) nucleocapsid protein (NC) plays several important roles in the viral life-cycle and presents an attractive target for rational drug design. Here, the macromolecular reactivity of NC and its binding to RNA is characterized through determination of electrostatic and chemical descriptors derived from linear-scaling quantum calculations in solution. The computational results offer a rationale for the experimentally observed susceptibility of the Cys49 thiolate toward small-molecule electrophilic agents, and support the recently proposed stepwise protonation mechanism of the C-terminal Zn-coordination complex. The distinctive binding mode of NC to SL2 and SL3 stem-loops of the HIV-1 genomic RNA packaging signal is studied on the basis of protein side-chain contributions to the electrostatic binding energies. These results indicate the importance of several basic residues in the 3₁₀ helical region and the N-terminal zinc finger, and rationalize the presence of several evolutionarily conserved residues in NC. The combined reactivity and RNA-binding study provides new insights that may contribute toward the structure-based design of anti-HIV therapies.

  12. Divide-and-conquer density functional theory on hierarchical real-space grids: Parallel implementation and applications

    NASA Astrophysics Data System (ADS)

    Shimojo, Fuyuki; Kalia, Rajiv K.; Nakano, Aiichiro; Vashishta, Priya

    2008-02-01

    A linear-scaling algorithm based on a divide-and-conquer (DC) scheme has been designed to perform large-scale molecular-dynamics (MD) simulations, in which interatomic forces are computed quantum mechanically in the framework of density functional theory (DFT). Electronic wave functions are represented on a real-space grid, which is augmented with a coarse multigrid to accelerate the convergence of iterative solutions and with adaptive fine grids around atoms to accurately calculate ionic pseudopotentials. Spatial decomposition is employed to implement the hierarchical-grid DC-DFT algorithm on massively parallel computers. The largest benchmark tests include an 11.8×10^6-atom (1.04×10^12 electronic degrees of freedom) calculation on 131,072 IBM BlueGene/L processors. The DC-DFT algorithm has well-defined parameters to control the data locality, with which the solutions converge rapidly. Also, the total energy is well conserved during the MD simulation. We perform first-principles MD simulations based on the DC-DFT algorithm, in which the large system sizes yield excellent agreement with x-ray scattering measurements for the pair-distribution function of liquid Rb and allow the description of low-frequency vibrational modes of graphene. The band gap of a CdSe nanorod calculated by the DC-DFT algorithm agrees well with available conventional DFT results. With the DC-DFT algorithm, the band gap is calculated for increasingly larger system sizes until the result reaches its asymptotic value.

  13. ROBUST: an interactive FORTRAN-77 package for exploratory data analysis using parametric, ROBUST and nonparametric location and scale estimates, data transformations, normality tests, and outlier assessment

    NASA Astrophysics Data System (ADS)

    Rock, N. M. S.

    ROBUST calculates 53 statistics, plus significance levels for 6 hypothesis tests, on each of up to 52 variables. These together allow the following properties of the data distribution for each variable to be examined in detail: (1) Location. Three means (arithmetic, geometric, harmonic) are calculated, together with the midrange and 19 high-performance robust L-, M-, and W-estimates of location (combined, adaptive, trimmed estimates, etc.). (2) Scale. The standard deviation is calculated along with the H-spread/2 (≈ semi-interquartile range), the mean and median absolute deviations from both mean and median, and a biweight scale estimator. The 23 location and 6 scale estimators programmed cover all possible degrees of robustness. (3) Normality. Distributions are tested against the null hypothesis that they are normal, using the 3rd (√b1) and 4th (b2) moments, Geary's ratio (mean deviation/standard deviation), Filliben's probability plot correlation coefficient, and a more robust test based on the biweight scale estimator. These statistics collectively are sensitive to most usual departures from normality. (4) Presence of outliers. The maximum and minimum values are assessed individually or jointly using Grubbs' maximum Studentized residuals, Harvey's and Dixon's criteria, and the Studentized range. For a single input variable, outliers can be either winsorized or eliminated and all estimates recalculated iteratively as desired. The following data transformations can also be applied: linear, log10, generalized Box-Cox power (including log, reciprocal, and square root), exponentiation, and standardization. For more than one variable, all results are tabulated in a single run of ROBUST. Further options are incorporated to assess ratios (of two variables) as well as discrete variables, and to handle missing data. Cumulative S-plots (for assessing normality graphically) can also be generated. The mutual consistency or inconsistency of all these measures helps to detect errors in the data as well as to assess the data distributions themselves.
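    As an illustration of one of the robust estimators listed above, the sketch below computes Tukey's biweight location and scale for a contaminated sample; the tuning constants (c = 6 for location, c = 9 for scale) are conventional textbook choices, and this is a generic version, not the FORTRAN-77 implementation in ROBUST.

    ```python
    import numpy as np

    def biweight_location_scale(x, c_loc=6.0, c_scale=9.0):
        """Tukey biweight estimates of location and scale (one-step, MAD-based)."""
        x = np.asarray(x, dtype=float)
        med = np.median(x)
        mad = np.median(np.abs(x - med))
        u = (x - med) / (c_loc * mad)
        w = (1.0 - u**2) ** 2
        w[np.abs(u) >= 1.0] = 0.0
        loc = med + np.sum(w * (x - med)) / np.sum(w)

        u = (x - med) / (c_scale * mad)
        mask = np.abs(u) < 1.0
        num = np.sum(((x - med)**2 * (1.0 - u**2)**4)[mask])
        den = np.sum(((1.0 - u**2) * (1.0 - 5.0 * u**2))[mask])
        scale = np.sqrt(x.size * num) / np.abs(den)
        return loc, scale

    rng = np.random.default_rng(4)
    data = np.concatenate([rng.normal(10.0, 2.0, 95), [50.0, 60.0, 70.0, 80.0, 90.0]])  # 5 outliers
    print(biweight_location_scale(data))   # close to (10, 2), unlike the raw mean / std below
    print(data.mean(), data.std(ddof=1))
    ```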

  14. Comparison of osmolality and refractometric readings of Hispaniolan Amazon parrot (Amazona ventralis) urine.

    PubMed

    Brock, A Paige; Grunkemeyer, Vanessa L; Fry, Michael M; Hall, James S; Bartges, Joseph W

    2013-12-01

    To evaluate the relationship between osmolality and specific gravity of urine samples from clinically normal adult parrots and to determine a formula to convert urine specific gravity (USG) measured on a reference scale to a more accurate USG value for an avian species, urine samples were collected opportunistically from a colony of Hispaniolan Amazon parrots (Amazona ventralis). Samples were analyzed by using a veterinary refractometer, and specific gravity was measured on both canine and feline scales. Osmolality was measured by vapor pressure osmometry. Specific gravity and osmolality measurements were highly correlated (r = 0.96). The linear relationship between refractivity measurements on a reference scale and osmolality was determined. An equation was calculated to allow specific gravity results from a medical refractometer to be converted to specific gravity values of Hispaniolan Amazon parrots: USG_HAP = 0.201 + 0.798 × USG_ref. Use of the reference-canine scale to approximate the osmolality of parrot urine leads to an overestimation of the true osmolality of the sample. In addition, this error increases as the concentration of urine increases. Compared with the human-canine scale, the feline scale provides a closer approximation to urine osmolality of Hispaniolan Amazon parrots but still results in overestimation of osmolality.
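    A trivial helper applying the conversion equation reported above (the function name and example reading are, of course, only illustrative):

    ```python
    def usg_parrot_from_refractometer(usg_ref: float) -> float:
        """Convert a reference-scale refractometer USG reading to the Hispaniolan Amazon
        parrot scale using the regression reported in the abstract."""
        return 0.201 + 0.798 * usg_ref

    print(usg_parrot_from_refractometer(1.030))   # ~1.023
    ```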

  15. The role of zonal flows in the saturation of multi-scale gyrokinetic turbulence

    DOE PAGES

    Staebler, Gary M.; Candy, John; Howard, Nathan T.; ...

    2016-06-29

    The 2D spectrum of the saturated electric potential from gyrokinetic turbulence simulations that include both ion and electron scales (multi-scale) in axisymmetric tokamak geometry is analyzed. The paradigm that the turbulence is saturated when the zonal (axisymmetric) ExB flow shearing rate competes with linear growth is shown not to apply to the electron-scale turbulence. Instead, it is the mixing rate of the zonal ExB velocity spectrum with the turbulent distribution function that competes with linear growth. A model of this mechanism is shown to be able to capture the suppression of electron-scale turbulence by ion-scale turbulence and the threshold for the increase in electron-scale turbulence when the ion-scale turbulence is reduced. The model computes the strength of the zonal flow velocity and the saturated potential spectrum from the linear growth rate spectrum. The model for the saturated electric potential spectrum is applied to a quasilinear transport model and shown to accurately reproduce the electron and ion energy fluxes of the non-linear gyrokinetic multi-scale simulations. Finally, the zonal flow mixing saturation model is also shown to reproduce the non-linear upshift in the critical temperature gradient caused by zonal flows in ion-scale gyrokinetic simulations.

  16. dc3dm: Software to efficiently form and apply a 3D DDM operator for a nonuniformly discretized rectangular planar fault

    NASA Astrophysics Data System (ADS)

    Bradley, A. M.

    2013-12-01

    My poster will describe dc3dm, a free open source software (FOSS) package that efficiently forms and applies the linear operator relating slip and traction components on a nonuniformly discretized rectangular planar fault in a homogeneous elastic (HE) half space. This linear operator implements what is called the displacement discontinuity method (DDM). The key properties of dc3dm are: 1. The mesh can be nonuniform. 2. Work and memory scale roughly linearly in the number of elements (rather than quadratically). 3. The order of accuracy of my method on a nonuniform mesh is the same as that of the standard method on a uniform mesh. Property 2 is achieved using my FOSS package hmmvp [AGU 2012]. A nonuniform mesh (property 1) is natural for some problems. For example, in a rate-state friction simulation, nucleation length, and so required element size, scales reciprocally with effective normal stress. Property 3 assures that if a nonuniform mesh is more efficient than a uniform mesh (in the sense of accuracy per element) at one level of mesh refinement, it will remain so at all further mesh refinements. I use the routine DC3D of Y. Okada, which calculates the stress tensor at a receiver resulting from a rectangular uniform dislocation source in an HE half space. On a uniform mesh, straightforward application of this Green's function (GF) yields a DDM I refer to as DDMu. On a nonuniform mesh, this same procedure leads to artifacts that degrade the order of accuracy of the DDM. I have developed a method I call IGA that implements the DDM using this GF for a nonuniformly discretized mesh having certain properties. Importantly, IGA's order of accuracy on a nonuniform mesh is the same as DDMu's on a uniform one. Boundary conditions can be periodic in the surface-parallel direction (in both directions if the GF is for a whole space), velocity on any side, and free surface. The mesh must have the following main property: each uniquely sized element must tile each element larger than itself. A mesh generated by a family of quadtrees has this property. Using multiple quadtrees that collectively cover the domain enables the elements to have a small aspect ratio. Mathematically, IGA works as follows. Let Mn be the nonuniform mesh. Define Mu to be the uniform mesh that is composed of the smallest element in Mn. Every element e in Mu has associated subelements in Mn that tile e. First, a linear operator Inu mapping data on Mn to Mu implements smooth (C^1) interpolation; I use cubic (Clough-Tocher) interpolation over a triangulation induced by Mn. Second, a linear operator Gu implements DDMu on Mu. Third, a linear operator Aun maps data on Mu to Mn. These three linear operators implement exact IGA (EIGA): Gn = Aun Gu Inu. Computationally, there are several more details. EIGA has the undesirable property that calculating one entry of Gn for receiver ri requires calculating multiple entries of Gu, no matter how far away from ri the smallest element is. Approximate IGA (AIGA) solves this problem by restricting EIGA to a neighborhood around each receiver. Associated with each neighborhood is a minimum element size s^i that indexes a family of operators Gu^i. The order of accuracy of AIGA is the same as that of EIGA and DDMu if each neighborhood is kept constant in spatial extent as the mesh is refined.
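    To make the EIGA composition concrete, the sketch below builds the three operators as plain dense matrices for a toy 1-D "fault": Inu interpolates from a nonuniform mesh to a fine uniform mesh, Gu is a stand-in uniform-mesh operator, and Aun averages fine-mesh values back onto the nonuniform elements, giving Gn = Aun Gu Inu. Everything here (the 1-D setting, linear interpolation, the kernel used for Gu) is an assumption for illustration; dc3dm itself works with 2-D meshes, Okada's DC3D Green's function, cubic interpolation, and hierarchical-matrix compression.

    ```python
    import numpy as np

    # Nonuniform element centres (coarse near the ends, fine in the middle) and a fine uniform mesh
    xn = np.array([0.05, 0.15, 0.30, 0.40, 0.45, 0.50, 0.55, 0.60, 0.70, 0.85, 0.95])
    xu = np.linspace(0.005, 0.995, 100)

    # Inu: linear interpolation from nonuniform data to the uniform mesh (rows = uniform points)
    Inu = np.zeros((xu.size, xn.size))
    for i, xq in enumerate(xu):
        j = np.clip(np.searchsorted(xn, xq) - 1, 0, xn.size - 2)
        t = np.clip((xq - xn[j]) / (xn[j + 1] - xn[j]), 0.0, 1.0)
        Inu[i, j], Inu[i, j + 1] = 1.0 - t, t

    # Gu: stand-in uniform-mesh "slip -> traction" operator (a smooth decaying kernel, not DC3D)
    Gu = 1.0 / (1.0 + 50.0 * (xu[:, None] - xu[None, :])**2)

    # Aun: average the uniform-mesh values whose points lie nearest to each nonuniform element
    Aun = np.zeros((xn.size, xu.size))
    owner = np.argmin(np.abs(xu[:, None] - xn[None, :]), axis=1)
    for j in range(xn.size):
        cols = np.where(owner == j)[0]
        Aun[j, cols] = 1.0 / cols.size

    Gn = Aun @ Gu @ Inu          # the composed nonuniform-mesh operator
    print(Gn.shape)              # (11, 11): maps slip on the nonuniform mesh to traction on it
    ```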

  17. Comparison of Two Multifractal Analysis Methods: Generalized Structure Function and Multifractal Spectrum

    NASA Astrophysics Data System (ADS)

    Morato, M. Carmen; Castellanos, M. Teresa; Bird, Nigel; Tarquis, Ana M.

    2016-04-01

    Soil variability has long been an expected factor that must be taken into account in soil studies. This variability can be considered to be composed of "functional" variations plus random fluctuations or noise. The multifractal formalism, first proposed by Mandelbrot (1982), is suitable for variables with a self-similar distribution on a spatial domain. Multifractal analysis can provide insight into the spatial variability of crop or soil parameters. In soil science, it has been popular to characterize the scaling property of a variable measured along a transect as a mass distribution of a statistical measure on the length domain of the studied transect. To do this, the transect is divided into a number of self-similar segments, and the partition function and mass exponent function are estimated; from these, the multifractal spectrum (MFS) is calculated. An alternative technique focuses on the variations of the measure by analyzing the moments of the absolute differences at different scales, the Generalized Structure Function (GSF), and extracting the generalized Hurst exponents. The aim of this study is to compare both techniques on transect data. A common 1024 m transect across arable fields at Silsoe in Bedfordshire, east-central England, was analyzed with these two multifractal methods. The properties studied were total porosity (Porosity), gravimetric water content (GWC), and nitrogen oxide flux (NO2 flux). The results from both methods showed that the NO2 flux presents a clear multifractal character, whereas GWC and Porosity show only a weak one. Several parameters calculated from both methods are discussed. On the other hand, with the partition function the full range of scales could be used, whereas with the GSF only a shorter range of scales showed linear behavior in the bilog plots used to estimate the parameters: GWC exhibited a linear pattern for increments from 4 to 256 meters, Porosity from 4 to 64 meters, and NO2 flux only from 32 to 256 meters. However, the relation between the mass exponent function and the GSF found in the literature was positively verified for the three variables.
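    A minimal sketch of the Generalized Structure Function analysis described above: for a 1-D series u(x) sampled along a transect, compute S_q(r) as the mean of |u(x+r) - u(x)|^q for several lags r, fit the scaling exponent zeta(q) from the log-log slope over the chosen scale range, and obtain the generalized Hurst exponent H(q) = zeta(q)/q. The synthetic Brownian-like series below is only a placeholder for the Silsoe transect data.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    u = np.cumsum(rng.standard_normal(1024))        # Brownian-like stand-in series (H ~ 0.5)

    lags = np.array([4, 8, 16, 32, 64, 128, 256])   # increments, in sample units
    q_values = [1.0, 2.0, 3.0]

    def structure_function(u, lag, q):
        """S_q(lag) = mean of |u(x+lag) - u(x)|^q over the series."""
        d = np.abs(u[lag:] - u[:-lag])
        return np.mean(d**q)

    for q in q_values:
        Sq = np.array([structure_function(u, l, q) for l in lags])
        zeta_q = np.polyfit(np.log(lags), np.log(Sq), 1)[0]   # log-log slope over the linear range
        print(f"q = {q}: zeta(q) = {zeta_q:.2f}, H(q) = zeta(q)/q = {zeta_q / q:.2f}")
    ```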

  18. Modeling Hohlraum-Based Laser Plasma Instability Experiments

    NASA Astrophysics Data System (ADS)

    Meezan, N. B.

    2005-10-01

    Laser fusion targets must control laser-plasma instabilities (LPI) in order to perform as designed. We present analyses of recent hohlraum LPI experiments from the Omega laser facility. The targets, gold hohlraums filled with gas or SiO2 foam, are preheated by several 3ω beams before an interaction beam (2ω or 3ω) is fired along the hohlraum axis. The experiments are simulated in 2-D and 3-D using the code hydra. The choice of electron thermal conduction model in hydra strongly affects the simulated plasma conditions. This work is part of a larger effort to systematically explore the usefulness of linear gain as a design tool for fusion targets. We find that the measured Raman and Brillouin backscatter scale monotonically with the peak linear gain calculated for the target; however, linear gain is not sufficient to explain all trends in the data. This work was performed under the auspices of the U.S. Department of Energy by the University of California Lawrence Livermore National Laboratory under contract No. W-7405-ENG-48.

  19. The anharmonic quartic force field infrared spectra of five non-linear polycyclic aromatic hydrocarbons: Benz[a]anthracene, chrysene, phenanthrene, pyrene, and triphenylene

    NASA Astrophysics Data System (ADS)

    Mackie, Cameron J.; Candian, Alessandra; Huang, Xinchuan; Maltseva, Elena; Petrignani, Annemieke; Oomens, Jos; Mattioda, Andrew L.; Buma, Wybren Jan; Lee, Timothy J.; Tielens, Alexander G. G. M.

    2016-08-01

    The study of interstellar polycyclic aromatic hydrocarbons (PAHs) relies heavily on theoretically predicted infrared spectra. Most earlier studies use scaled harmonic frequencies for band positions and the double harmonic approximation for intensities. However, recent high-resolution gas-phase experimental spectroscopic studies have shown that the harmonic approximation is not sufficient to reproduce experimental results. In our previous work, we presented the anharmonic theoretical spectra of three linear PAHs, showing the importance of including anharmonicities into the theoretical calculations. In this paper, we continue this work by extending the study to include five non-linear PAHs (benz[a]anthracene, chrysene, phenanthrene, pyrene, and triphenylene), thereby allowing us to make a full assessment of how edge structure, symmetry, and size influence the effects of anharmonicities. The theoretical anharmonic spectra are compared to spectra obtained under matrix isolation low-temperature conditions, low-resolution, high-temperature gas-phase conditions, and high-resolution, low-temperature gas-phase conditions. Overall, excellent agreement is observed between the theoretical and experimental spectra although the experimental spectra show subtle but significant differences.

  20. The anharmonic quartic force field infrared spectra of five non-linear polycyclic aromatic hydrocarbons: Benz[a]anthracene, chrysene, phenanthrene, pyrene, and triphenylene.

    PubMed

    Mackie, Cameron J; Candian, Alessandra; Huang, Xinchuan; Maltseva, Elena; Petrignani, Annemieke; Oomens, Jos; Mattioda, Andrew L; Buma, Wybren Jan; Lee, Timothy J; Tielens, Alexander G G M

    2016-08-28

    The study of interstellar polycyclic aromatic hydrocarbons (PAHs) relies heavily on theoretically predicted infrared spectra. Most earlier studies use scaled harmonic frequencies for band positions and the double harmonic approximation for intensities. However, recent high-resolution gas-phase experimental spectroscopic studies have shown that the harmonic approximation is not sufficient to reproduce experimental results. In our previous work, we presented the anharmonic theoretical spectra of three linear PAHs, showing the importance of including anharmonicities into the theoretical calculations. In this paper, we continue this work by extending the study to include five non-linear PAHs (benz[a]anthracene, chrysene, phenanthrene, pyrene, and triphenylene), thereby allowing us to make a full assessment of how edge structure, symmetry, and size influence the effects of anharmonicities. The theoretical anharmonic spectra are compared to spectra obtained under matrix isolation low-temperature conditions, low-resolution, high-temperature gas-phase conditions, and high-resolution, low-temperature gas-phase conditions. Overall, excellent agreement is observed between the theoretical and experimental spectra although the experimental spectra show subtle but significant differences.

  1. Non-linear dielectric signatures of entropy changes in liquids subject to time dependent electric fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Richert, Ranko

    2016-03-21

    A model of non-linear dielectric polarization is studied in which the field-induced entropy change is the source of polarization-dependent retardation time constants. Numerical solutions for the susceptibilities of the system are obtained for parameters that represent the dynamic and thermodynamic behavior of glycerol. The calculations for high-amplitude sinusoidal fields show a significant enhancement of the steady-state loss for frequencies below that of the low-field loss peak. Also at relatively low frequencies, the third-harmonic susceptibility spectrum shows a "hump," i.e., a maximum, with an amplitude that increases with decreasing temperature. Both of these non-linear effects are consistent with experimental evidence. While such features have been used to infer a temperature-dependent number of dynamically correlated particles, N_corr, the present result demonstrates that the third-harmonic susceptibility displays a peak with an amplitude that tracks the variation of the activation energy in a model that does not involve dynamical correlations or spatial scales.

  2. Communication: modeling charge-sign asymmetric solvation free energies with nonlinear boundary conditions.

    PubMed

    Bardhan, Jaydeep P; Knepley, Matthew G

    2014-10-07

    We show that charge-sign-dependent asymmetric hydration can be modeled accurately using linear Poisson theory after replacing the standard electric-displacement boundary condition with a simple nonlinear boundary condition. Using a single multiplicative scaling factor to determine atomic radii from molecular dynamics Lennard-Jones parameters, the new model accurately reproduces MD free-energy calculations of hydration asymmetries for: (i) monatomic ions, (ii) titratable amino acids in both their protonated and unprotonated states, and (iii) the Mobley "bracelet" and "rod" test problems [D. L. Mobley, A. E. Barber II, C. J. Fennell, and K. A. Dill, "Charge asymmetries in hydration of polar solutes," J. Phys. Chem. B 112, 2405-2414 (2008)]. Remarkably, the model also justifies the use of linear response expressions for charging free energies. Our boundary-element method implementation demonstrates the ease with which other continuum-electrostatic solvers can be extended to include asymmetry.

  3. On the self-organizing process of large scale shear flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Newton, Andrew P. L.; Kim, Eun-jin; Liu, Han-Li

    2013-09-15

    Self-organization is invoked as a paradigm to explore the processes governing the evolution of shear flows. By examining the probability density function (PDF) of the local flow gradient (shear), we show that shear flows reach a quasi-equilibrium state as the growth of shear is balanced by shear relaxation. Specifically, the PDFs of the local shear are calculated numerically and analytically in reduced 1D and 0D models, where the PDFs are shown to converge to a bimodal distribution in the case of finite temporally correlated forcing. This bimodal PDF is then shown to be reproduced in nonlinear simulations of 2D hydrodynamic turbulence. Furthermore, the bimodal PDF is demonstrated to result from a self-organizing shear flow with a linear profile. A similar bimodal structure and linear shear-flow profile are observed in the Gulf Stream, suggesting self-organization.

  4. Communication: Modeling charge-sign asymmetric solvation free energies with nonlinear boundary conditions

    PubMed Central

    Bardhan, Jaydeep P.; Knepley, Matthew G.

    2014-01-01

    We show that charge-sign-dependent asymmetric hydration can be modeled accurately using linear Poisson theory after replacing the standard electric-displacement boundary condition with a simple nonlinear boundary condition. Using a single multiplicative scaling factor to determine atomic radii from molecular dynamics Lennard-Jones parameters, the new model accurately reproduces MD free-energy calculations of hydration asymmetries for: (i) monatomic ions, (ii) titratable amino acids in both their protonated and unprotonated states, and (iii) the Mobley “bracelet” and “rod” test problems [D. L. Mobley, A. E. Barber II, C. J. Fennell, and K. A. Dill, “Charge asymmetries in hydration of polar solutes,” J. Phys. Chem. B 112, 2405–2414 (2008)]. Remarkably, the model also justifies the use of linear response expressions for charging free energies. Our boundary-element method implementation demonstrates the ease with which other continuum-electrostatic solvers can be extended to include asymmetry. PMID:25296776

  5. Flexoelectricity in Carbon Nanostructures: Nanotubes, Fullerenes, and Nanocones.

    PubMed

    Kvashnin, Alexander G; Sorokin, Pavel B; Yakobson, Boris I

    2015-07-16

    We report a theoretical analysis of the electronic flexoelectric effect associated with nanostructures of sp(2) carbon (curved graphene). Through density functional theory calculations, we establish the universality of the linear dependence of flexoelectric atomic dipole moments on local curvature in various carbon networks (carbon nanotubes, fullerenes with high and low symmetry, and nanocones). The usefulness of this dependence lies in the possibility of analyzing the electronic properties of arbitrary carbon systems with local deformations. The result is exemplified by exploring the flexoelectric effect in carbon nanocones, which display a large dipole moment, cumulative over their surface, that surprisingly scales exactly linearly with the length and follows a sine-law dependence on the apex angle, dflex ~ L sin(α). Our study points to the possibility of predicting the electric dipole moment distribution of complex graphene-based nanostructures based only on local curvature information.

  6. FlyCap: Markerless Motion Capture Using Multiple Autonomous Flying Cameras.

    PubMed

    Xu, Lan; Liu, Yebin; Cheng, Wei; Guo, Kaiwen; Zhou, Guyue; Dai, Qionghai; Fang, Lu

    2017-07-18

    Aiming at automatic, convenient and non-intrusive motion capture, this paper presents a new-generation markerless motion capture technique, the FlyCap system, to capture surface motions of moving characters using multiple autonomous flying cameras (autonomous unmanned aerial vehicles (UAVs), each integrated with an RGBD video camera). During data capture, three cooperative flying cameras automatically track and follow the moving target, who performs large-scale motions in a wide space. We propose a novel non-rigid surface registration method to track and fuse the depth data of the three flying cameras for surface motion tracking of the moving target, and simultaneously calculate the pose of each flying camera. We leverage the visual-odometry information provided by the UAV platform and formulate the surface tracking problem as a non-linear objective function that can be linearized and effectively minimized through a Gauss-Newton method. Quantitative and qualitative experimental results demonstrate plausible surface and motion reconstruction results.
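
    The Gauss-Newton step referred to above can be sketched generically as follows (Python with NumPy; this is not the FlyCap code, and the residual/Jacobian functions and the toy exponential fit are illustrative assumptions): the non-linear objective is linearized about the current estimate and the resulting normal equations are solved for the update.

        import numpy as np

        def gauss_newton(residual, jacobian, x0, max_iter=50, tol=1e-10):
            """Minimise 0.5*||r(x)||^2 by repeatedly linearising r about the current x."""
            x = np.asarray(x0, dtype=float)
            for _ in range(max_iter):
                r = residual(x)
                J = jacobian(x)
                # Normal equations of the linearised problem: (J^T J) dx = -J^T r
                dx = np.linalg.solve(J.T @ J, -J.T @ r)
                x = x + dx
                if np.linalg.norm(dx) < tol:
                    break
            return x

        # Toy usage: fit y = a*exp(b*t) to synthetic data (a, b are the unknowns)
        t = np.linspace(0.0, 1.0, 20)
        y = 2.0 * np.exp(-1.5 * t)
        residual = lambda p: p[0] * np.exp(p[1] * t) - y
        jacobian = lambda p: np.column_stack([np.exp(p[1] * t), p[0] * t * np.exp(p[1] * t)])
        print(gauss_newton(residual, jacobian, x0=[1.0, -1.0]))   # converges to ~[2.0, -1.5]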

  7. On proper linearization, construction and analysis of the Boyle-van't Hoff plots and correct calculation of the osmotically inactive volume.

    PubMed

    Katkov, Igor I

    2011-06-01

    The Boyle-van't Hoff (BVH) law of physics has been widely used in cryobiology for calculation of the key osmotic parameters of cells and optimization of cryo-protocols. The proper use of linearization of the Boyle-van't Hoff relationship for the osmotically inactive volume (v(b)) has been discussed in a rigorous way in (Katkov, Cryobiology, 2008, 57:142-149). Nevertheless, scientists in the field have continued to use inappropriate methods of linearization (and curve fitting) of the BVH data, plotting of the BVH line and calculation of v(b). Here, we discuss the sources of incorrect linearization of the BVH relationship using concrete examples from recent publications, analyze the properties of the correct BVH line (which is unique for a given v(b)), provide appropriate statistical formulas for calculation of v(b) from experimental data, and propose simple instructions (a standard operating procedure, SOP) for proper normalization of the data, appropriate linearization and construction of the BVH plots, and correct calculation of v(b). The possible sources of non-linear behavior or poor fit of the data to the proper BVH line, such as active water and/or solute transport, which can result in a large discrepancy between the hyperosmotic and hypoosmotic parts of the BVH plot, are also discussed. Copyright © 2011 Elsevier Inc. All rights reserved.
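
    A minimal sketch of the BVH linearization discussed above (Python with NumPy; the ideal BVH form v = v(b) + (1 - v(b))·(π_iso/π) is assumed, the fitted line is forced through the isotonic point (1, 1) as one reading of the paper's argument, and the data values are synthetic):

        import numpy as np

        def fit_vb_constrained(inv_osm, vol_norm):
            """Fit v = v_b + (1 - v_b)*x with x = pi_iso/pi, forcing the line through
            the isotonic point (x, v) = (1, 1); returns the osmotically inactive volume v_b."""
            x = np.asarray(inv_osm) - 1.0
            v = np.asarray(vol_norm) - 1.0
            slope = np.sum(x * v) / np.sum(x * x)   # least squares through the origin
            return 1.0 - slope

        # Synthetic data: v_b = 0.3, normalised volumes at several relative osmolalities
        inv_osm = np.array([0.33, 0.5, 1.0, 2.0, 3.0])    # pi_iso / pi
        vol_norm = 0.3 + 0.7 * inv_osm                    # ideal BVH behaviour
        print(fit_vb_constrained(inv_osm, vol_norm))      # recovers ~0.3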

  8. Computer Drawing Method for Operating Characteristic Curve of PV Power Plant Array Unit

    NASA Astrophysics Data System (ADS)

    Tan, Jianbin

    2018-02-01

    The engineering design of large-scale grid-connected photovoltaic power stations, and the development of the many associated simulation and analysis systems, require accurate computer plotting of the operating characteristic curves of photovoltaic array units; to this end a piecewise non-linear interpolation algorithm is proposed. In the calculation method, the component performance parameters serve as the main design basis, from which the computer obtains five characteristic performance values of the PV module. Combined with the series and parallel connection of the PV array, the performance curve of the PV array unit can then be drawn by computer. The specific data can also be passed to the PV design software module, improving the operation of the PV array unit in practical applications.

  9. Stability investigations of relaxing molecular gas flows. Results and perspectives

    NASA Astrophysics Data System (ADS)

    Grigor'ev, Yurii N.; Ershov, Igor V.

    2017-10-01

    This article presents results of systematic investigations of a dissipative effect which manifests itself as the growth of hydrodynamic stability and the suppression of turbulence in relaxing molecular gas flows. The effect offers a new way to control stability and laminar-turbulent transition in aerodynamic flows. The suppression of inviscid acoustic waves in 2D shear flows is considered. The nonlinear evolution of large-scale vortices and Kelvin-Helmholtz waves in relaxing shear flows is studied. Critical Reynolds numbers in supersonic Couette flows are calculated analytically and numerically within the framework of both the classical linear and the nonlinear energy hydrodynamic stability theories. The calculations clearly show that the relaxation process can appreciably delay the laminar-turbulent transition. The aim of this article is to highlight this new dissipative effect, which can be used for flow control and laminarization.

  10. Transformer ratio saturation in a beam-driven wakefield accelerator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Farmer, J. P.; Martorelli, R.; Pukhov, A.

    We show that for beam-driven wakefield acceleration, the linearly ramped, equally spaced train of bunches typically considered to optimise the transformer ratio only works for flat-top bunches. Through theory and simulation, we explain that this behaviour is due to the unique properties of the plasma response to a flat-top density profile. Calculations of the optimal scaling for a train of Gaussian bunches show diminishing returns with increasing bunch number, tending towards saturation. For a periodic bunch train, a transformer ratio of 23 was achieved for 50 bunches, rising to 40 for a fully optimised beam.

  11. An experimental investigation of free-tip response to a jet

    NASA Technical Reports Server (NTRS)

    Young, L. A.

    1986-01-01

    The aerodynamic response of passively oscillating tips appended to a model helicopter rotor was investigated during a whirl test. Tip responsiveness was found to meet free-tip rotor requirements. Experimental and analytical estimates of the free-tip aerodynamic spring, mechanical spring, and aerodynamic damping were calculated and compared. The free tips were analytically demonstrated to be operating outside the tip resonant response region at full-scale tip speeds. Further, tip resonance was shown to be independent of tip speed, given the assumption that the tip forcing frequency is linearly dependent upon the rotor rotational speed.

  12. Long-wavelength microinstabilities in toroidal plasmas*

    NASA Astrophysics Data System (ADS)

    Tang, W. M.; Rewoldt, G.

    1993-07-01

    Realistic kinetic toroidal eigenmode calculations have been carried out to support a proper assessment of the influence of long-wavelength microturbulence on transport in tokamak plasmas. In order to efficiently evaluate large-scale kinetic behavior extending over many rational surfaces, significant improvements have been made to a toroidal finite element code used to analyze the fully two-dimensional (r,θ) mode structures of trapped-ion and toroidal ion temperature gradient (ITG) instabilities. It is found that even at very long wavelengths, these eigenmodes exhibit a strong ballooning character with the associated radial structure relatively insensitive to ion Landau damping at the rational surfaces. In contrast to the long-accepted picture that the radial extent of trapped-ion instabilities is characterized by the ion-gyroradius-scale associated with strong localization between adjacent rational surfaces, present results demonstrate that under realistic conditions, the actual scale is governed by the large-scale variations in the equilibrium gradients. Applications to recent measurements of fluctuation properties in Tokamak Fusion Test Reactor (TFTR) [Plasma Phys. Controlled Nucl. Fusion Res. (International Atomic Energy Agency, Vienna, 1985), Vol. 1, p. 29] L-mode plasmas indicate that the theoretical trends appear consistent with spectral characteristics as well as rough heuristic estimates of the transport level. Benchmarking calculations in support of the development of a three-dimensional toroidal gyrokinetic code indicate reasonable agreement with respect to both the properties of the eigenfunctions and the magnitude of the eigenvalues during the linear phase of the simulations of toroidal ITG instabilities.

  13. Three-dimensional Nonlinear Calculation of the 2017 North Korean Nuclear Test

    NASA Astrophysics Data System (ADS)

    Stevens, J. L.; O'Brien, M.

    2017-12-01

    We perform a three-dimensional nonlinear calculation of the 2017 North Korean nuclear test including the topography of the test site. Surface waves from all six DPRK nuclear tests are remarkably similar. Linear scaling of surface wave amplitudes from an estimated yield of 4.6 kt for the 2009 event (Murphy et al., 2013) gives an estimated yield of 180 kt for the 2017 event, which is the yield used in the calculation. The depth of the calculated explosion is 730 meters below the surface, close to the peak of Mt. Mantap. Calculated surface displacements are as large as 4 meters vertical and 2 meters horizontal, but there is a node in both, with minimal vertical and horizontal displacements, close to the mountain peak. Earlier calculations of a 12.5 kiloton explosion at depths of 100-800 meters show a peak in surface wave amplitudes for explosions at the base of the mountain relative to both deeper and shallower sources, so the North Korean explosions have been at an optimal depth for surface wave generation. This, combined with the tectonic stress state and a low surface wave amplitude bias at other test sites, may explain the large surface wave anomaly at this test site. Cracking and nonlinear deformation are much more extensive for the 180 kt calculation than in the earlier 12.5 kiloton calculations.
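
    The linear amplitude-to-yield scaling quoted above amounts to a one-line calculation; the sketch below (Python) spells it out, with the amplitude ratio chosen as a hypothetical value consistent with the quoted 180 kt estimate.

        # Worked illustration of the linear surface-wave scaling quoted above:
        # Y_2017 = Y_2009 * (A_2017 / A_2009).  The amplitude ratio is a hypothetical
        # value chosen to be consistent with the quoted 180 kt estimate.
        Y_2009 = 4.6               # kt, reference yield (Murphy et al., 2013)
        amplitude_ratio = 39.0     # A_2017 / A_2009, illustrative
        Y_2017 = Y_2009 * amplitude_ratio
        print(f"estimated 2017 yield: {Y_2017:.0f} kt")   # ~179 kt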

  14. Large-scale geomorphology: Classical concepts reconciled and integrated with contemporary ideas via a surface processes model

    NASA Astrophysics Data System (ADS)

    Kooi, Henk; Beaumont, Christopher

    1996-02-01

    Linear systems analysis is used to investigate the response of a surface processes model (SPM) to tectonic forcing. The SPM calculates subcontinental scale denudational landscape evolution on geological timescales (1 to hundreds of million years) as the result of simultaneous hillslope transport, modeled by diffusion, and fluvial transport, modeled by advection and reaction. The tectonically forced SPM accommodates the large-scale behavior envisaged in classical and contemporary conceptual geomorphic models and provides a framework for their integration and unification. The following three model scales are considered: micro-, meso-, and macroscale. The concepts of dynamic equilibrium and grade are quantified at the microscale for segments of uniform gradient subject to tectonic uplift. At the larger meso- and macroscales (which represent individual interfluves and landscapes including a number of drainage basins, respectively) the system response to tectonic forcing is linear for uplift geometries that are symmetric with respect to baselevel and which impose a fully integrated drainage to baselevel. For these linear models the response time and the transfer function as a function of scale characterize the model behavior. Numerical experiments show that the styles of landscape evolution depend critically on the timescales of the tectonic processes in relation to the response time of the landscape. When tectonic timescales are much longer than the landscape response time, the resulting dynamic equilibrium landscapes correspond to those envisaged by Hack (1960). When tectonic timescales are of the same order as the landscape response time and when tectonic variations take the form of pulses (much shorter than the response time), evolving landscapes conform to the Penck type (1972) and to the Davis (1889, 1899) and King (1953, 1962) type frameworks, respectively. The behavior of the SPM highlights the importance of phase shifts or delays of the landform response and sediment yield in relation to the tectonic forcing. Finally, nonlinear behavior resulting from more general uplift geometries is discussed. A number of model experiments illustrate the importance of "fundamental form," which is an expression of the conformity of antecedent topography with the current tectonic regime. Lack of conformity leads to models that exhibit internal thresholds and a complex response.

  15. Driven waves in a two-fluid plasma

    NASA Astrophysics Data System (ADS)

    Roberge, W. G.; Ciolek, Glenn E.

    2007-12-01

    We study the physics of wave propagation in a weakly ionized plasma, as it applies to the formation of multifluid, magnetohydrodynamics (MHD) shock waves. We model the plasma as separate charged and neutral fluids which are coupled by ion-neutral friction. At times much less than the ion-neutral drag time, the fluids are decoupled and so evolve independently. At later times, the evolution is determined by the large inertial mismatch between the charged and neutral particles. The neutral flow continues to evolve independently; the charged flow is driven by and slaved to the neutral flow by friction. We calculate this driven flow analytically by considering the special but realistic case where the charged fluid obeys linearized equations of motion. We carry out an extensive analysis of linear, driven, MHD waves. The physics of driven MHD waves is embodied in certain Green functions which describe wave propagation on short time-scales, ambipolar diffusion on long time-scales and transitional behaviour at intermediate times. By way of illustration, we give an approximate solution for the formation of a multifluid shock during the collision of two identical interstellar clouds. The collision produces forward and reverse J shocks in the neutral fluid and a transient in the charged fluid. The latter rapidly evolves into a pair of magnetic precursors on the J shocks, wherein the ions undergo force-free motion and the magnetic field grows monotonically with time. The flow appears to be self-similar at the time when linear analysis ceases to be valid.

  16. Experimental and theoretical study of p-nitroacetanilide.

    PubMed

    Gnanasambandan, T; Gunasekaran, S; Seshadri, S

    2014-01-03

    The spectroscopic properties of the p-nitroacetanilide (PNA) were examined by FT-IR, FT-Raman and UV-Vis techniques. FT-IR and FT-Raman spectra in solid state were observed in the region 4000-400 cm(-1) and 3500-100 cm(-1), respectively. The UV-Vis absorption spectrum of the compound that dissolved in ethanol was recorded in the range of 200-400 nm. The structural and spectroscopic data of the molecule in the ground state were calculated by using density functional theory (DFT) employing B3LYP methods with the 6-31G(d,p) and 6-311+G(d,p) basis sets. The geometry of the molecule was fully optimized, vibrational spectra were calculated and fundamental vibrations were assigned on the basis of the total energy distribution (TED) of the vibrational modes, calculated with scaled quantum mechanics (SQM) method. Thermodynamic properties like entropy, heat capacity and enthalpy have been calculated for the molecule. HOMO-LUMO energy gap has been calculated. The intramolecular contacts have been interpreted using natural bond orbital (NBO) and natural localized molecular orbital (NLMO) analysis. Important non-linear optical (NLO) properties such as electric dipole moment and first hyperpolarizability have been computed using B3LYP quantum chemical calculation. Copyright © 2013 Elsevier B.V. All rights reserved.

  17. Comparison of all atom, continuum, and linear fitting empirical models for charge screening effect of aqueous medium surrounding a protein molecule

    NASA Astrophysics Data System (ADS)

    Takahashi, Takuya; Sugiura, Junnnosuke; Nagayama, Kuniaki

    2002-05-01

    To investigate the role hydration plays in the electrostatic interactions of proteins, the time-averaged electrostatic potential of the B1 domain of protein G in aqueous solution was calculated with full atomic molecular dynamics simulations that explicitly consider every atom (i.e., an all atom model). This all atom potential was compared with the potential obtained from an electrostatic continuum model calculation. In both cases, the charge-screening effect was fairly well described by an effective relative dielectric constant which increased linearly with increasing charge-charge distance. This simulated linear dependence agrees with the experimentally determined linear relation proposed by Pickersgill. Cut-off approximations for Coulomb interactions failed to reproduce this linear relation. The correlation between the all atom and continuum models was found to be better than the correlation of either model with its linear empirical fit. This confirms that the continuum model is better at treating the complicated shapes of protein conformations than the simple linear fitting empirical model. We also tried a sigmoid fitting empirical model in addition to the linear one. When all data were weighted equally, the sigmoid model, which requires two fitting parameters, fitted the results of both the all atom and the continuum models less accurately than the linear model, which requires only one fitting parameter. When potential values were chosen as weighting factors, the fitting error of the sigmoid model became smaller, and the slope of both linear fitting curves became smaller. This suggests that the screening effect of an aqueous medium at short range, where potential values are relatively large, is smaller than expected from the linear fitting curve, whose slope is almost 4. To investigate the linear increase of the effective relative dielectric constant, the Poisson equation of a low-dielectric sphere in a high-dielectric medium was solved, and charges distributed near the molecular surface were shown to lead to the apparent linearity.
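
    A minimal sketch of the distance-dependent screening picture discussed above (Python; the slope of 4 is taken from the abstract, while the assumption that distances are in angstroms with negligible intercept, and the charges used, are illustrative): the Coulomb interaction is damped by an effective relative dielectric constant that grows linearly with charge-charge distance.

        # Distance-dependent screening: eps_eff(r) = slope * r (slope ~4 per the
        # abstract; treating r in angstroms and dropping any intercept is our assumption).
        COULOMB_KCAL = 332.06               # kcal*Angstrom/(mol*e^2), Coulomb constant

        def screened_energy(q1, q2, r_ang, slope=4.0):
            eps_eff = slope * r_ang
            return COULOMB_KCAL * q1 * q2 / (eps_eff * r_ang)

        for r in (3.0, 5.0, 10.0):                        # illustrative unit charges
            print(r, screened_energy(+1.0, -1.0, r))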

  18. Using Neural Networks to Improve the Performance of Radiative Transfer Modeling Used for Geometry Dependent Surface Lambertian-Equivalent Reflectivity Calculations

    NASA Technical Reports Server (NTRS)

    Fasnacht, Zachary; Qin, Wenhan; Haffner, David P.; Loyola, Diego; Joiner, Joanna; Krotkov, Nickolay; Vasilkov, Alexander; Spurr, Robert

    2017-01-01

    Surface Lambertian-equivalent reflectivity (LER) is important for trace gas retrievals in the direct calculation of cloud fractions and the indirect calculation of the air mass factor. Current trace gas retrievals use climatological surface LERs. Surface properties that affect the bidirectional reflectance distribution function (BRDF), as well as varying satellite viewing geometry, can be important for the retrieval of trace gases. Geometry-Dependent LER (GLER) captures these effects through its calculation of sun-normalized radiances (I/F) and can be used in current LER algorithms (Vasilkov et al. 2016). Pixel-by-pixel radiative transfer calculations are computationally expensive for large datasets. Modern satellite missions such as the Tropospheric Monitoring Instrument (TROPOMI) produce very large datasets as they take measurements at much higher spatial and spectral resolutions. Look-up table (LUT) interpolation improves the speed of radiative transfer calculations, but its complexity increases for non-linear functions. Neural networks perform fast calculations and can accurately predict both non-linear and linear functions with little effort.
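
    As a toy illustration of the neural-network surrogate idea described above (Python with NumPy; this is not the authors' network, and the one-dimensional target function, network size and learning rate are all illustrative assumptions), a small one-hidden-layer network is trained by gradient descent to emulate a nonlinear function standing in for the radiative transfer look-up table.

        import numpy as np

        # One-hidden-layer network trained with plain full-batch gradient descent to
        # emulate a nonlinear scalar function, standing in for a radiative-transfer
        # look-up table.  All sizes, rates and the target function are illustrative.
        rng = np.random.default_rng(1)
        x = np.linspace(-1.0, 1.0, 200)[:, None]     # scaled "geometry" input
        y = np.sin(2.0 * x)                          # toy nonlinear target (I/F stand-in)

        n_hidden, lr = 20, 0.1
        W1 = rng.normal(0.0, 0.5, (1, n_hidden)); b1 = np.zeros(n_hidden)
        W2 = rng.normal(0.0, 0.5, (n_hidden, 1)); b2 = np.zeros(1)

        for step in range(20000):
            h = np.tanh(x @ W1 + b1)                 # forward pass
            err = (h @ W2 + b2) - y                  # residual of the fit
            gW2 = h.T @ err / len(x); gb2 = err.mean(axis=0)
            dh = (err @ W2.T) * (1.0 - h**2)         # back-propagate through tanh
            gW1 = x.T @ dh / len(x); gb1 = dh.mean(axis=0)
            W1 -= lr * gW1; b1 -= lr * gb1
            W2 -= lr * gW2; b2 -= lr * gb2

        pred = np.tanh(x @ W1 + b1) @ W2 + b2
        print("RMSE of the surrogate fit:", float(np.sqrt(np.mean((pred - y) ** 2))))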

  19. Synchronous fluorescence spectroscopic study of solvatochromic curcumin dye.

    PubMed

    Patra, Digambara; Barakat, Christelle

    2011-09-01

    Curcumin, the main yellow bioactive component of turmeric, has recently attracted the attention of chemists due to its wide range of potential biological applications as an antioxidant, an anti-inflammatory, and an anti-carcinogenic agent. The molecule fluoresces weakly and is poorly soluble in water. In this detailed study of curcumin in thirteen different solvents, both the absorption and fluorescence spectra of curcumin were found to be broad; however, a narrower and simpler synchronous fluorescence spectrum was obtained at Δλ=10-20 nm. The Lippert-Mataga plot of curcumin in different solvents showed two sets of linearity, which is consistent with the plot of Stokes' shift vs. ET30. When Stokes' shift on the wavenumber scale was replaced by the synchronous fluorescence maximum on the nanometer scale, the solvent polarity dependence measured by λSFSmax vs. the Lippert-Mataga plot or ET30 values showed trends similar to those measured via Stokes' shift for protic and aprotic solvents. A better linear correlation of λSFSmax vs. the π* scale of solvent polarity was found compared with λabsmax, λemmax or Stokes' shift measurements. Stokes' shift measurements require both the absorption/excitation and emission (fluorescence) spectra to compute the shift on the wavenumber scale, whereas a single SFS scan provides information about solvent polarity quickly and simply, with no such calculation. The fluorescence decay of curcumin in all the solvents could be fitted well to a double-exponential decay function. Copyright © 2011 Elsevier B.V. All rights reserved.

  20. Internal tide generation by abyssal hills using analytical theory

    NASA Astrophysics Data System (ADS)

    Melet, Angélique; Nikurashin, Maxim; Muller, Caroline; Falahat, S.; Nycander, Jonas; Timko, Patrick G.; Arbic, Brian K.; Goff, John A.

    2013-11-01

    Internal tide driven mixing plays a key role in sustaining the deep ocean stratification and meridional overturning circulation. Internal tides can be generated by topographic horizontal scales ranging from hundreds of meters to tens of kilometers. State of the art topographic products barely resolve scales smaller than ˜10 km in the deep ocean. On these scales abyssal hills dominate ocean floor roughness. The impact of abyssal hill roughness on internal-tide generation is evaluated in this study. The conversion of M2 barotropic to baroclinic tidal energy is calculated based on linear wave theory both in real and spectral space using the Shuttle Radar Topography Mission SRTM30_PLUS bathymetric product at 1/120° resolution with and without the addition of synthetic abyssal hill roughness. Internal tide generation by abyssal hills integrates to 0.1 TW globally or 0.03 TW when the energy flux is empirically corrected for supercritical slope (i.e., ˜10% of the energy flux due to larger topographic scales resolved in standard products in both cases). The abyssal hill driven energy conversion is dominated by mid-ocean ridges, where abyssal hill roughness is large. Focusing on two regions located over the Mid-Atlantic Ridge and the East Pacific Rise, it is shown that regionally linear theory predicts an increase of the energy flux due to abyssal hills of up to 100% or 60% when an empirical correction for supercritical slopes is attempted. Therefore, abyssal hills, unresolved in state of the art topographic products, can have a strong impact on internal tide generation, especially over mid-ocean ridges.

  1. Comparison of different tree sap flow up-scaling procedures using Monte-Carlo simulations

    NASA Astrophysics Data System (ADS)

    Tatarinov, Fyodor; Preisler, Yakir; Roahtyn, Shani; Yakir, Dan

    2015-04-01

    An important task in determining the forest ecosystem water balance is the estimation of stand transpiration, which allows evapotranspiration to be separated into transpiration and soil evaporation. This can be based on up-scaling measurements of sap flow in representative trees (SF), which can be done with different mathematical algorithms. The aim of the present study was to evaluate the error associated with different up-scaling algorithms under different conditions; other types of error (measurement error, within-tree SF variability, choice of sample plot, etc.) were not considered here. A set of simulation experiments using the Monte-Carlo technique was carried out and three up-scaling procedures were tested (a simplified sketch is given below): (1) multiplying the mean stand sap flux density per unit sapwood cross-section area (SFD) by the total sapwood area (Klein et al., 2014); (2) deriving a linear dependence of tree sap flow on tree DBH and calculating SFstand using the SF predicted for each DBH class and the stand DBH distribution (Cermak et al., 2004); (3) the same as method 2 but using a non-linear dependence. Simulations were performed for different SFD(DBH) slopes (bs: positive, negative, zero), different DBH and SFD standard deviations (Δd and Δs, respectively), and different DBH class sizes. It was assumed that all trees in a unit area are measured, and the total SF of all trees in the experimental plot was taken as the reference SFstand value. Under negative bs all models tend to overestimate SFstand and the error increases exponentially with decreasing bs. Under bs >0 all models tend to underestimate SFstand, but the error is much smaller than for bs
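
    The simplified Monte-Carlo sketch referred to above (Python with NumPy) compares up-scaling methods 1 and 2 against the reference stand sap flow; the sapwood-area model, the SFD(DBH) relation and all numerical values are illustrative assumptions, and method 2 is applied tree-by-tree rather than by DBH class for brevity.

        import numpy as np

        rng = np.random.default_rng(42)

        def simulate_stand(n_trees=200, n_sampled=10, bs=-0.5, sd_dbh=5.0, sd_sfd=0.2):
            dbh = rng.normal(25.0, sd_dbh, n_trees).clip(5.0)           # cm
            sapwood = 1e-4 * dbh**2                                     # m^2, illustrative model
            sfd = (2.0 + bs * (dbh - 25.0) / 25.0
                   + rng.normal(0.0, sd_sfd, n_trees)).clip(0.1)        # kg m^-2 h^-1
            sf = sfd * sapwood                                          # per-tree sap flow
            ref = sf.sum()                                              # reference SFstand

            sample = rng.choice(n_trees, n_sampled, replace=False)
            # Method 1: mean sampled SFD times total stand sapwood area
            est1 = sfd[sample].mean() * sapwood.sum()
            # Method 2: linear SF(DBH) regression applied to the full DBH distribution
            slope, intercept = np.polyfit(dbh[sample], sf[sample], 1)
            est2 = (slope * dbh + intercept).sum()
            return est1 / ref, est2 / ref

        ratios = np.array([simulate_stand(bs=-0.5) for _ in range(1000)])
        print("method 1 bias:", ratios[:, 0].mean(), " method 2 bias:", ratios[:, 1].mean())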

  2. Consistency between hydrological models and field observations: Linking processes at the hillslope scale to hydrological responses at the watershed scale

    USGS Publications Warehouse

    Clark, M.P.; Rupp, D.E.; Woods, R.A.; Tromp-van Meerveld; Peters, N.E.; Freer, J.E.

    2009-01-01

    The purpose of this paper is to identify simple connections between observations of hydrological processes at the hillslope scale and observations of the response of watersheds following rainfall, with a view to building a parsimonious model of catchment processes. The focus is on the well-studied Panola Mountain Research Watershed (PMRW), Georgia, USA. Recession analysis of discharge Q shows that while the relationship between dQ/dt and Q is approximately consistent with a linear reservoir for the hillslope, there is a deviation from linearity that becomes progressively larger with increasing spatial scale. To account for these scale differences conceptual models of streamflow recession are defined at both the hillslope scale and the watershed scale, and an assessment made as to whether models at the hillslope scale can be aggregated to be consistent with models at the watershed scale. Results from this study show that a model with parallel linear reservoirs provides the most plausible explanation (of those tested) for both the linear hillslope response to rainfall and non-linear recession behaviour observed at the watershed outlet. In this model each linear reservoir is associated with a landscape type. The parallel reservoir model is consistent with both geochemical analyses of hydrological flow paths and water balance estimates of bedrock recharge. Overall, this study demonstrates that standard approaches of using recession analysis to identify the functional form of storage-discharge relationships identify model structures that are inconsistent with field evidence, and that recession analysis at multiple spatial scales can provide useful insights into catchment behaviour. Copyright © 2008 John Wiley & Sons, Ltd.
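
    A minimal sketch of the recession analysis mentioned above (Python with NumPy; the power-law form -dQ/dt = kQ^b and the synthetic single-reservoir recession are illustrative assumptions): b close to 1 indicates a linear reservoir, while b > 1 indicates the non-linear recession behaviour reported at the watershed scale.

        import numpy as np

        def recession_exponent(q, dt=1.0):
            """Fit -dQ/dt = k * Q^b from a recession limb via a log-log regression."""
            dqdt = -np.diff(q) / dt
            qmid = 0.5 * (q[1:] + q[:-1])
            mask = (dqdt > 0) & (qmid > 0)                 # keep strictly receding steps
            b, log_k = np.polyfit(np.log(qmid[mask]), np.log(dqdt[mask]), 1)
            return b, np.exp(log_k)

        # Synthetic recession from a single linear reservoir, Q(t) = Q0 * exp(-t / tau)
        t = np.arange(0.0, 30.0, 1.0)
        q = 10.0 * np.exp(-t / 5.0)
        print(recession_exponent(q))                       # b close to 1, k close to 1/tau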

  3. Dual tree fractional quaternion wavelet transform for disparity estimation.

    PubMed

    Kumar, Sanoj; Kumar, Sanjeev; Sukavanam, Nagarajan; Raman, Balasubramanian

    2014-03-01

    This paper proposes a novel phase-based approach for computing disparity as the optical flow from a given pair of consecutive images. A new dual tree fractional quaternion wavelet transform (FrQWT) is proposed by defining the 2D Fourier spectrum up to a single quadrant. In the proposed FrQWT, each quaternion wavelet consists of a real part (a real DWT wavelet) and three imaginary parts that are organized according to the quaternion algebra. The first two FrQWT phases encode the shifts of image features in the absolute horizontal and vertical coordinate system, while the third phase contains the texture information. The FrQWT allows a multi-scale framework for calculating and adjusting local disparities and executing phase unwrapping from coarse to fine scales with linear computational efficiency. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.

  4. Optimal output fast feedback in two-time scale control of flexible arms

    NASA Technical Reports Server (NTRS)

    Siciliano, B.; Calise, A. J.; Jonnalagadda, V. R. P.

    1986-01-01

    Control of lightweight flexible arms moving along predefined paths can be successfully synthesized on the basis of a two-time scale approach. A model following control can be designed for the reduced order slow subsystem. The fast subsystem is a linear system in which the slow variables act as parameters. The flexible fast variables which model the deflections of the arm along the trajectory can be sensed through strain gage measurements. For full state feedback design the derivatives of the deflections need to be estimated. The main contribution of this work is the design of an output feedback controller which includes a fixed order dynamic compensator, based on a recent convergent numerical algorithm for calculating LQ optimal gains. The design procedure is tested by means of simulation results for the one link flexible arm prototype in the laboratory.

  5. Pattern formation in individual-based systems with time-varying parameters

    NASA Astrophysics Data System (ADS)

    Ashcroft, Peter; Galla, Tobias

    2013-12-01

    We study the patterns generated in finite-time sweeps across symmetry-breaking bifurcations in individual-based models. Similar to the well-known Kibble-Zurek scenario of defect formation, large-scale patterns are generated when model parameters are varied slowly, whereas fast sweeps produce a large number of small domains. The symmetry breaking is triggered by intrinsic noise, originating from the discrete dynamics at the microlevel. Based on a linear-noise approximation, we calculate the characteristic length scale of these patterns. We demonstrate the applicability of this approach in a simple model of opinion dynamics, a model in evolutionary game theory with a time-dependent fitness structure, and a model of cell differentiation. Our theoretical estimates are confirmed in simulations. In further numerical work, we observe a similar phenomenon when the symmetry-breaking bifurcation is triggered by population growth.

  6. The Scaling of Broadband Shock-Associated Noise with Increasing Temperature

    NASA Technical Reports Server (NTRS)

    Miller, Steven A. E.

    2013-01-01

    A physical explanation for the saturation of broadband shock-associated noise (BBSAN) intensity with increasing jet stagnation temperature has eluded investigators. An explanation is proposed for this phenomenon with the use of an acoustic analogy. To isolate the relevant physics, the scaling of BBSAN peak intensity level at the sideline observer location is examined. The equivalent source within the framework of an acoustic analogy for BBSAN is based on local field quantities at shock wave shear layer interactions. The equivalent source combined with accurate calculations of the propagation of sound through the jet shear layer, using an adjoint vector Green's function solver of the linearized Euler equations, allows for predictions that retain the scaling with respect to stagnation pressure and allows for saturation of BBSAN with increasing stagnation temperature. The sources and vector Green's function have arguments involving the steady Reynolds-Averaged Navier-Stokes solution of the jet. It is proposed that saturation of BBSAN with increasing jet temperature occurs due to a balance between the amplification of the sound propagation through the shear layer and the source term scaling.

  7. Communication: A reduced scaling J-engine based reformulation of SOS-MP2 using graphics processing units

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maurer, S. A.; Kussmann, J.; Ochsenfeld, C., E-mail: Christian.Ochsenfeld@cup.uni-muenchen.de

    2014-08-07

    We present a low-prefactor, cubically scaling scaled-opposite-spin second-order Møller-Plesset perturbation theory (SOS-MP2) method which is highly suitable for massively parallel architectures like graphics processing units (GPU). The scaling is reduced from O(N^5) to O(N^3) by a reformulation of the MP2 expression in the atomic orbital basis via Laplace transformation and the resolution-of-the-identity (RI) approximation of the integrals, in combination with efficient sparse algebra for the 3-center integral transformation. In contrast to previous works that employ GPUs for post-Hartree-Fock calculations, we do not simply employ GPU-based linear algebra libraries to accelerate the conventional algorithm. Instead, our reformulation allows us to replace the rate-determining contraction step with a modified J-engine algorithm that has been proven to be highly efficient on GPUs. Thus, our SOS-MP2 scheme enables us to treat large molecular systems in an accurate and efficient manner on a single GPU server.

  8. Implementation and Assessment of Advanced Analog Vector-Matrix Processor

    NASA Technical Reports Server (NTRS)

    Gary, Charles K.; Bualat, Maria G.; Lum, Henry, Jr. (Technical Monitor)

    1994-01-01

    This paper discusses the design and implementation of an analog optical vector-matrix coprocessor with a throughput of 128 Mops for a personal computer. Vector-matrix calculations are inherently parallel, providing a promising domain for the use of optical calculators. However, to date, digital optical systems have proven too cumbersome to replace electronics, and analog processors have not demonstrated sufficient accuracy in large scale systems. The goal of the work described in this paper is to demonstrate a viable optical coprocessor for linear operations. The analog optical processor presented has been integrated with a personal computer to provide full functionality and is the first demonstration of an optical linear algebra processor with a throughput greater than 100 Mops. The optical vector-matrix processor consists of a laser diode source, an acousto-optical modulator array to input the vector information, a liquid crystal spatial light modulator to input the matrix information, an avalanche photodiode array to read out the result vector of the vector-matrix multiplication, as well as transport optics and the electronics necessary to drive the optical modulators and interface to the computer. The intent of this research is to provide a low cost, highly energy efficient coprocessor for linear operations. Measurements of the analog accuracy of the processor performing 128 Mops are presented along with an assessment of the implications for future systems. A range of noise sources, including cross-talk, source amplitude fluctuations, shot noise at the detector, and non-linearities of the optoelectronic components, are measured and compared to determine the most significant source of error. The possibilities for reducing these sources of error are discussed. Also, the total error is compared with that expected from a statistical analysis of the individual components and their relation to the vector-matrix operation. The sufficiency of the measured accuracy of the processor is compared with that required for a range of typical problems. Calculations resolving alloy concentrations from spectral plume data of rocket engines are implemented on the optical processor, demonstrating its sufficiency for this problem. We also show how this technology can be easily extended to a 100 x 100, 10 MHz (200 Gops) processor.

  9. Towards a robust framework for Probabilistic Tsunami Hazard Assessment (PTHA) for local and regional tsunami in New Zealand

    NASA Astrophysics Data System (ADS)

    Mueller, Christof; Power, William; Fraser, Stuart; Wang, Xiaoming

    2013-04-01

    Probabilistic Tsunami Hazard Assessment (PTHA) is conceptually closely related to Probabilistic Seismic Hazard Assessment (PSHA). The main difference is that PTHA needs to simulate propagation of tsunami waves through the ocean and cannot rely on attenuation relationships, which makes PTHA computationally more expensive. The wave propagation process can be assumed to be linear as long as water depth is much larger than the wave amplitude of the tsunami. Beyond this limit a non-linear scheme has to be employed with significantly higher algorithmic run times. PTHA considering far-field tsunami sources typically uses unit source simulations, and relies on the linearity of the process by later scaling and combining the wave fields of individual simulations to represent the intended earthquake magnitude and rupture area. Probabilistic assessments are typically made for locations offshore but close to the coast. Inundation is calculated only for significantly contributing events (de-aggregation). For local and regional tsunami it has been demonstrated that earthquake rupture complexity has a significant effect on the tsunami amplitude distribution offshore and also on inundation. In this case PTHA has to take variable slip distributions and non-linearity into account. A unit source approach cannot easily be applied. Rupture complexity is seen as an aleatory uncertainty and can be incorporated directly into the rate calculation. We have developed a framework that manages the large number of simulations required for local PTHA. As an initial case study the effect of rupture complexity on tsunami inundation and the statistics of the distribution of wave heights have been investigated for plate-interface earthquakes in the Hawke's Bay region in New Zealand. Assessing the probability that water levels will be in excess of a certain threshold requires the calculation of empirical cumulative distribution functions (ECDF). We compare our results with traditional estimates for tsunami inundation simulations that do not consider rupture complexity. De-aggregation based on moment magnitude alone might not be appropriate, because the hazard posed by any individual event can be underestimated locally if rupture complexity is ignored.

  10. Composition dependent band offsets of ZnO and its ternary alloys

    NASA Astrophysics Data System (ADS)

    Yin, Haitao; Chen, Junli; Wang, Yin; Wang, Jian; Guo, Hong

    2017-01-01

    We report the calculated fundamental band gaps of the wurtzite ternary alloys Zn1-xMxO (M = Mg, Cd) and the band offsets of the ZnO/Zn1-xMxO heterojunctions; these II-VI materials are important for electronics and optoelectronics. Our calculation is based on density functional theory within the linear muffin-tin orbital (LMTO) approach, where the modified Becke-Johnson (MBJ) semi-local exchange is used to accurately produce the band gaps and the coherent potential approximation (CPA) is applied to carry out the configurational average for the ternary alloys. The combined LMTO-MBJ-CPA approach allows one to simultaneously determine both the conduction band and valence band offsets of the heterojunctions. The calculated band gaps of the ZnO alloys scale as Eg = 3.35 + 2.33x for Zn1-xMgxO and Eg = 3.36 - 2.33x + 1.77x^2 for Zn1-xCdxO, where x is the impurity concentration. These scaling relations, as well as the composition dependent band offsets, are quantitatively compared with the available experimental data. The capability of predicting the band parameters and band alignments of ZnO and its ternary alloys with the LMTO-CPA-MBJ approach indicates the promising application of this method in the design of emerging electronics and optoelectronics.
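
    The composition dependence quoted above can be evaluated directly from the fitted expressions; the short sketch below (Python) does so for a few illustrative compositions.

        # Band gaps of the ternary alloys from the fits quoted above
        # (x = impurity concentration); values in eV.
        def eg_znmgo(x):   # Zn(1-x)Mg(x)O, linear fit
            return 3.35 + 2.33 * x

        def eg_zncdo(x):   # Zn(1-x)Cd(x)O, quadratic (bowing) fit
            return 3.36 - 2.33 * x + 1.77 * x**2

        for x in (0.0, 0.1, 0.2, 0.3):          # illustrative compositions
            print(x, round(eg_znmgo(x), 3), round(eg_zncdo(x), 3))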

  11. Weak gravitational lensing effects on the determination of Ω_m and Ω_Λ from SNeIa

    NASA Astrophysics Data System (ADS)

    Valageas, P.

    2000-02-01

    In this article we present an analytical calculation of the probability distribution of the magnification of distant sources due to weak gravitational lensing from non-linear scales. We use a realistic description of the non-linear density field, which has already been compared with numerical simulations of structure formation within hierarchical scenarios. We can then directly express the probability distribution P(μ) of the magnification in terms of the probability distribution of the density contrast realized on non-linear scales (typical of galaxies), where the local slope of the initial linear power spectrum is n=-2. We recover the behaviour seen in numerical simulations: P(μ) peaks at a value slightly smaller than the mean ⟨μ⟩=1 and shows an extended large-μ tail (as described in another article, our predictions also show good quantitative agreement with results from N-body simulations for a finite smoothing angle). We then study the effects of weak lensing on the derivation of the cosmological parameters from SNeIa. We show that the inaccuracy introduced by weak lensing is not negligible: ΔΩ_m ≳ 0.3 for two observations at z_s=0.5 and z_s=1. However, observations can unambiguously discriminate between Ω_m=0.3 and Ω_m=1. Moreover, in the case of a low-density universe one can clearly distinguish an open model from a flat cosmology (besides, the error decreases as the number of observed SNeIa increases). Since distant sources are more likely to be "demagnified", the most probable value of the observed density parameter Ω_m is slightly smaller than its actual value. On the other hand, one may obtain some valuable information on the properties of the underlying non-linear density field from the measurement of weak lensing distortions.

  12. Note: An absolute X-Y-Θ position sensor using a two-dimensional phase-encoded binary scale

    NASA Astrophysics Data System (ADS)

    Kim, Jong-Ahn; Kim, Jae Wan; Kang, Chu-Shik; Jin, Jonghan

    2018-04-01

    This Note presents a new absolute X-Y-Θ position sensor for measuring the planar motion of a precision multi-axis stage system. By analyzing the rotated image of a two-dimensional (2D) phase-encoded binary scale, the absolute 2D position values at two separated points were obtained, and the absolute X-Y-Θ position could be calculated by combining these values. The sensor head was constructed using a board-level camera, a light-emitting diode light source, an imaging lens, and a cube beam-splitter. To obtain uniform intensity profiles from the vignetted scale image, we selected the averaging directions deliberately, and higher resolution in the angle measurement could be achieved by increasing the allowable offset size. The performance of a prototype sensor was evaluated with respect to resolution, nonlinearity, and repeatability. The sensor could resolve 25 nm linear and 0.001° angular displacements clearly, and the standard deviations were less than 18 nm when 2D grid positions were measured repeatedly.

  13. Linear arrangements of nano-scale ferromagnetic particles spontaneously formed in a copper-base Cu-Ni-Co alloy

    NASA Astrophysics Data System (ADS)

    Sakakura, Hibiki; Kim, Jun-Seop; Takeda, Mahoto

    2018-03-01

    We have investigated the influence of magnetic interactions on the microstructural evolution of nano-scale granular precipitates formed spontaneously in an annealed Cu-20at%Ni-5at%Co alloy and the associated changes in magnetic properties. The techniques used included transmission electron microscopy, superconducting quantum interference device (SQUID) magnetometry, magneto-thermogravimetry (MTG), and first-principles calculations based on the Korringa-Kohn-Rostoker method with the coherent potential approximation. Our work has revealed that the nano-scale spherical and cubic precipitates which formed on annealing at 873 K and 973 K comprise mainly cobalt and nickel with a small amount of copper, and are arranged along the 〈1 0 0〉 direction of the copper matrix. The SQUID and MTG measurements suggest that magnetic properties such as coercivity and Curie temperature are closely correlated with the microstructure. The combination of results suggests that magnetic interactions between precipitates during annealing can consistently explain the observed precipitation phenomena.

  14. Efficient preconditioning of the electronic structure problem in large scale ab initio molecular dynamics simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schiffmann, Florian; VandeVondele, Joost, E-mail: Joost.VandeVondele@mat.ethz.ch

    2015-06-28

    We present an improved preconditioning scheme for electronic structure calculations based on the orbital transformation method. First, a preconditioner is developed which includes information from the full Kohn-Sham matrix but avoids computationally demanding diagonalisation steps in its construction. This reduces the computational cost of its construction, eliminating a bottleneck in large scale simulations, while maintaining rapid convergence. In addition, a modified form of Hotelling's iterative inversion is introduced to replace the exact inversion of the preconditioner matrix. This method is highly effective during molecular dynamics (MD), as the solution obtained in earlier MD steps is a suitable initial guess. Filtering small elements during sparse matrix multiplication leads to linear scaling inversion, while retaining robustness, already for relatively small systems. For system sizes ranging from a few hundred to a few thousand atoms, which are typical for many practical applications, the improvements to the algorithm lead to a 2-5 fold speedup per MD step.
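
    A minimal sketch of Hotelling's (Hotelling-Bodewig) iterative inversion mentioned above (Python with NumPy): the norm-scaled initial guess, the dense test matrix and the absence of sparsity filtering are our simplifications, whereas in the application described the previous MD step would supply the initial guess.

        import numpy as np

        def hotelling_inverse(A, X0=None, n_iter=30):
            """Iterate X_{k+1} = X_k (2I - A X_k), which converges quadratically to A^-1
            when the spectral radius of (I - A X_0) is below one."""
            n = A.shape[0]
            I = np.eye(n)
            # Standard norm-scaled initial guess guaranteeing convergence for nonsingular A
            X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf)) if X0 is None else X0
            for _ in range(n_iter):
                X = X @ (2.0 * I - A @ X)
            return X

        rng = np.random.default_rng(0)
        A = np.eye(50) + 0.01 * rng.standard_normal((50, 50))   # well-conditioned test matrix
        X = hotelling_inverse(A)
        print(np.max(np.abs(A @ X - np.eye(50))))               # near machine precision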

  15. Calculation of biochemical net reactions and pathways by using matrix operations.

    PubMed Central

    Alberty, R A

    1996-01-01

    Pathways for net biochemical reactions can be calculated by using a computer program that solves systems of linear equations. The coefficients in the linear equations are the stoichiometric numbers in the biochemical equations for the system. The solution of the system of linear equations is a vector of the stoichiometric numbers of the reactions in the pathway for the net reaction; this is referred to as the pathway vector. The pathway vector gives the number of times the various reactions have to occur to produce the desired net reaction. Net reactions may involve unknown numbers of ATP, ADP, and Pi molecules. The numbers of ATP, ADP, and Pi in a desired net reaction can be calculated in a two-step process. In the first step, the pathway is calculated by solving the system of linear equations for an abbreviated stoichiometric number matrix without ATP, ADP, Pi, NADred, and NADox. In the second step, the stoichiometric numbers in the desired net reaction, which includes ATP, ADP, Pi, NADred, and NADox, are obtained by multiplying the full stoichiometric number matrix by the calculated pathway vector. PMID:8804633
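
    A minimal sketch of the matrix formulation described above (Python with NumPy; the species and reactions are illustrative, not the paper's example): the stoichiometric number matrix is assembled with one column per reaction, and the pathway vector is obtained by solving the linear system for the desired net reaction.

        import numpy as np

        # Columns of S hold the stoichiometric numbers of the individual reactions,
        # b holds those of the desired net reaction, and the pathway vector s solves S s = b.
        species = ["A", "B", "C"]
        S = np.array([[-1,  0],      # A: consumed by reaction 1
                      [ 1, -1],      # B: produced by reaction 1, consumed by reaction 2
                      [ 0,  1]])     # C: produced by reaction 2
        b = np.array([-1, 0, 1])     # net reaction  A -> C

        s, residual, rank, _ = np.linalg.lstsq(S, b, rcond=None)
        print("pathway vector:", np.round(s, 6))     # [1, 1]: each reaction occurs once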

  16. Thermodynamic scaling of dynamic properties of liquid crystals: Verifying the scaling parameters using a molecular model

    NASA Astrophysics Data System (ADS)

    Satoh, Katsuhiko

    2013-08-01

    The thermodynamic scaling of the molecular dynamic properties of rotation and the thermodynamic parameters in a nematic phase was investigated by molecular dynamics simulation using the Gay-Berne potential. A master curve for the relaxation time of flip-flop motion was obtained using thermodynamic scaling, and the dynamic property could be expressed solely as a function of TV^{γ_τ}, where T and V are the temperature and volume, respectively. The scaling parameter γ_τ was in excellent agreement with the thermodynamic parameter Γ, obtained from the slope of the line of the logarithm of temperature against the logarithm of volume at constant P2. This line was fairly linear, as good as that for p-azoxyanisole or for the highly ordered small cluster model. The equivalence relation between Γ and γ_τ was compared with results obtained from the highly ordered small cluster model. The possibility of adapting the molecular model for the thermodynamic scaling of other dynamic rotational properties was also explored. The rotational diffusion constant and rotational viscosity coefficients, which were calculated using established theoretical and experimental expressions, were rescaled onto master curves with the same scaling parameters. The simulation illustrates the universal nature of the equivalence relation for liquid crystals.

  17. Tackling non-linearities with the effective field theory of dark energy and modified gravity

    NASA Astrophysics Data System (ADS)

    Frusciante, Noemi; Papadomanolakis, Georgios

    2017-12-01

    We present the extension of the effective field theory framework to the mildly non-linear scales. The effective field theory approach has been successfully applied to the late time cosmic acceleration phenomenon and it has been shown to be a powerful method to obtain predictions about cosmological observables on linear scales. However, mildly non-linear scales need to be consistently considered when testing gravity theories because a large part of the data comes from those scales. Thus, non-linear corrections to predictions on observables coming from the linear analysis can help in discriminating among different gravity theories. We proceed firstly by identifying the necessary operators which need to be included in the effective field theory Lagrangian in order to go beyond the linear order in perturbations and then we construct the corresponding non-linear action. Moreover, we present the complete recipe to map any single-field dark energy and modified gravity model into the non-linear effective field theory framework by considering a general action in the Arnowitt-Deser-Misner formalism. In order to illustrate this recipe we proceed to map the beyond-Horndeski theory and low-energy Hořava gravity into the effective field theory formalism. As a final step we derive the fourth-order action in terms of the curvature perturbation. This allows us to identify the non-linear contributions coming from the linear order perturbations which at the next order act like source terms. Moreover, we confirm that the stability requirements, ensuring the positivity of the kinetic term and the speed of propagation for the scalar mode, are automatically satisfied once the viability of the theory is demanded at linear level. The approach we present here will allow us to construct, in a model-independent way, all the relevant predictions for observables at mildly non-linear scales.

  18. Growth of the eye lens: II. Allometric studies.

    PubMed

    Augusteyn, Robert C

    2014-01-01

    The purpose of this study was to examine the ontogeny and phylogeny of lens growth in a variety of species using allometry. Data on the accumulation of wet and/or dry lens weight as a function of bodyweight were obtained for 40 species and subjected to allometric analysis to examine ontogenic growth and compaction. Allometric analysis was also used to compare the maximum adult lens weights for 147 species with the maximum adult bodyweight and to compare lens volumes calculated from wet and dry weights with eye volumes calculated from axial length. Linear allometric relationships were obtained for the comparison of ontogenic lens and bodyweight accumulation. The body mass exponent (BME) decreased with increasing animal size from around 1.0 in small rodents to 0.4 in large ungulates for both wet and dry weights. Compaction constants for the ontogenic growth ranged from 1.00 in birds and reptiles up to 1.30 in mammals. Allometric comparison of maximum lens wet and dry weights with maximum bodyweights also yielded linear plots, with a BME of 0.504 for all warm-blooded species except primates, which had a BME of 0.25. When lens volumes were compared with eye volumes, all species yielded a scaling constant of 0.75, but the proportionality constants for primates and birds were lower. Ontogenic lens growth is fastest, relative to body growth, in small animals and slowest in large animals. Fiber cell compaction takes place throughout life in most species, but not in birds and reptiles. Maximum adult lens size scales with eye size with the same exponent in all species, but birds and primates have smaller lenses relative to eye size than other species. Optical properties of the lens are generated through the combination of variations in the rate of growth, rate of compaction, shape and size.

  19. Estimation of suspended-sediment rating curves and mean suspended-sediment loads

    USGS Publications Warehouse

    Crawford, Charles G.

    1991-01-01

    A simulation study was done to evaluate: (1) the accuracy and precision of parameter estimates for the bias-corrected, transformed-linear and non-linear models obtained by the method of least squares; (2) the accuracy of mean suspended-sediment loads calculated by the flow-duration, rating-curve method using model parameters obtained by the alternative methods. Parameter estimates obtained by least squares for the bias-corrected, transformed-linear model were considerably more precise than those obtained for the non-linear or weighted non-linear model. The accuracy of parameter estimates obtained for the bias-corrected, transformed-linear and weighted non-linear model was similar and was much greater than the accuracy obtained by non-linear least squares. The improved parameter estimates obtained by the bias-corrected, transformed-linear or weighted non-linear model yield estimates of mean suspended-sediment load calculated by the flow-duration, rating-curve method that are more accurate and precise than those obtained for the non-linear model.
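    The bias correction for the transformed-linear (log-log) rating curve can be sketched as below; the exp(s²/2) back-transform factor shown here is one common choice and is an assumption, since the report may use a different correction.

```python
# Hedged sketch: fit log(C) = a + b*log(Q) and return a predictor with a
# log-normal back-transform bias correction. The correction factor here is
# the simple exp(s^2/2) form; the study's exact estimator may differ.
import numpy as np

def fit_rating_curve(Q, C):
    x, y = np.log(Q), np.log(C)
    b, a = np.polyfit(x, y, 1)            # slope b, intercept a in log space
    resid = y - (a + b * x)
    s2 = resid.var(ddof=2)                # residual variance (n - 2 dof)
    correction = np.exp(s2 / 2.0)         # bias-correction factor
    return lambda q: correction * np.exp(a) * q ** b
```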

  20. Estimating interaction on an additive scale between continuous determinants in a logistic regression model.

    PubMed

    Knol, Mirjam J; van der Tweel, Ingeborg; Grobbee, Diederick E; Numans, Mattijs E; Geerlings, Mirjam I

    2007-10-01

    To determine the presence of interaction in epidemiologic research, typically a product term is added to the regression model. In linear regression, the regression coefficient of the product term reflects interaction as departure from additivity. However, in logistic regression it refers to interaction as departure from multiplicativity. Rothman has argued that interaction estimated as departure from additivity better reflects biologic interaction. So far, the literature on estimating interaction on an additive scale using logistic regression has focused only on dichotomous determinants. The objective of the present study was to provide the methods to estimate interaction between continuous determinants and to illustrate these methods with a clinical example. From the existing literature we derived the formulas to quantify interaction as departure from additivity between one continuous and one dichotomous determinant and between two continuous determinants using logistic regression. Bootstrapping was used to calculate the corresponding confidence intervals. To illustrate the theory with an empirical example, data from the Utrecht Health Project were used, with age and body mass index as risk factors for elevated diastolic blood pressure. The methods and formulas presented in this article are intended to assist epidemiologists to calculate interaction on an additive scale between two variables on a certain outcome. The proposed methods are included in a spreadsheet which is freely available at: http://www.juliuscenter.nl/additive-interaction.xls.
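    A sketch of the kind of calculation the record describes (not the authors' spreadsheet): the relative excess risk due to interaction (RERI) between two continuous determinants, evaluated for assumed one-unit increments above a reference value of zero, with a bootstrap confidence interval. Column names and increments are placeholders.

```python
# Illustrative RERI for two continuous determinants from a logistic model
# with a product term, plus a simple bootstrap CI. Reference values and
# increments (dx1, dx2) are assumptions chosen for illustration.
import numpy as np
import statsmodels.api as sm

def reri_continuous(df, x1, x2, outcome, dx1=1.0, dx2=1.0):
    X = sm.add_constant(np.column_stack([df[x1], df[x2], df[x1] * df[x2]]))
    fit = sm.Logit(df[outcome], X).fit(disp=0)
    b1, b2, b3 = np.asarray(fit.params)[1:4]
    return (np.exp(b1 * dx1 + b2 * dx2 + b3 * dx1 * dx2)
            - np.exp(b1 * dx1) - np.exp(b2 * dx2) + 1.0)

def bootstrap_ci(df, x1, x2, outcome, n_boot=1000, alpha=0.05):
    stats = [reri_continuous(df.sample(frac=1, replace=True), x1, x2, outcome)
             for _ in range(n_boot)]
    return np.quantile(stats, [alpha / 2, 1 - alpha / 2])
```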

  1. Bias in the effective field theory of large scale structures

    DOE PAGES

    Senatore, Leonardo

    2015-11-05

    We study how to describe collapsed objects, such as galaxies, in the context of the Effective Field Theory of Large Scale Structures. The overdensity of galaxies at a given location and time is determined by the initial tidal tensor, velocity gradients and spatial derivatives of the regions of dark matter that, during the evolution of the universe, ended up at that given location. Similarly to what was recently done for dark matter, we show how this Lagrangian space description can be recovered by upgrading simpler Eulerian calculations. We describe the Eulerian theory. We show that it is perturbatively local in space, but non-local in time, and we explain the observational consequences of this fact. We give an argument for why, to a certain degree of accuracy, the theory can be considered as quasi time-local and explain what the operator structure is in this case. Furthermore, we describe renormalization of the bias coefficients so that, after this and after upgrading the Eulerian calculation to a Lagrangian one, the perturbative series for galaxy correlation functions results in a manifestly convergent expansion in powers of k/k_NL and k/k_M, where k is the wavenumber of interest, k_NL is the wavenumber associated with the non-linear scale, and k_M is the comoving wavenumber enclosing the mass of a galaxy.

  2. Large-Scale Linear Optimization through Machine Learning: From Theory to Practical System Design and Implementation

    DTIC Science & Technology

    2016-08-10

    AFRL-AFOSR-JP-TR-2016-0073 (2016). Only a fragment of the report abstract is recoverable: "...performances on various machine learning tasks and it naturally lends itself to fast parallel implementations. Despite this, very little work has been..."

  3. An efficient flexible-order model for 3D nonlinear water waves

    NASA Astrophysics Data System (ADS)

    Engsig-Karup, A. P.; Bingham, H. B.; Lindberg, O.

    2009-04-01

    The flexible-order, finite difference based fully nonlinear potential flow model described in [H.B. Bingham, H. Zhang, On the accuracy of finite difference solutions for nonlinear water waves, J. Eng. Math. 58 (2007) 211-228] is extended to three dimensions (3D). In order to obtain an optimal scaling of the solution effort, multigrid is employed to precondition a GMRES iterative solution of the discretized Laplace problem. A robust multigrid method based on Gauss-Seidel smoothing is found to require special treatment of the boundary conditions along solid boundaries, and in particular on the sea bottom. A new discretization scheme using one layer of grid points outside the fluid domain is presented and shown to provide convergent solutions over the full physical and discrete parameter space of interest. A linear analysis of the fundamental properties of the scheme with respect to accuracy, robustness and energy conservation is presented, together with demonstrations of grid-independent iteration counts and optimal scaling of the solution effort. Calculations are made for 3D nonlinear wave problems involving steep nonlinear waves and a shoaling problem, which show good agreement with experimental measurements and other calculations from the literature.
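    The solver strategy summarized here, multigrid-preconditioned GMRES on a discretized Laplace problem, can be sketched in a few lines; the example below uses algebraic multigrid from the pyamg package on a 1D Poisson matrix purely as a stand-in for the paper's geometric multigrid and finite-difference discretization.

```python
# Stand-in example: GMRES on a toy discrete Laplacian, preconditioned by an
# algebraic multigrid hierarchy (pyamg). Not the authors' solver; it only
# illustrates the multigrid-preconditioned-Krylov structure.
import numpy as np
import scipy.sparse.linalg as spla
from scipy.sparse import diags
import pyamg

n = 200
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")  # 1D Poisson
b = np.ones(n)

ml = pyamg.ruge_stuben_solver(A)           # build the multigrid hierarchy
M = ml.aspreconditioner(cycle="V")         # expose one V-cycle as a preconditioner
x, info = spla.gmres(A, b, M=M)
assert info == 0                           # converged
```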

  4. Performance of fully-coupled algebraic multigrid preconditioners for large-scale VMS resistive MHD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, P. T.; Shadid, J. N.; Hu, J. J.

    Here, we explore the current performance and scaling of a fully-implicit stabilized unstructured finite element (FE) variational multiscale (VMS) capability for large-scale simulations of 3D incompressible resistive magnetohydrodynamics (MHD). The large-scale linear systems that are generated by a Newton nonlinear solver approach are iteratively solved by preconditioned Krylov subspace methods. The efficiency of this approach is critically dependent on the scalability and performance of the algebraic multigrid preconditioner. Our study considers the performance of the numerical methods as recently implemented in the second-generation Trilinos implementation that is 64-bit compliant and is not limited by the 32-bit global identifiers of the original Epetra-based Trilinos. The study presents representative results for a Poisson problem on 1.6 million cores of an IBM Blue Gene/Q platform to demonstrate very large-scale parallel execution. Additionally, results for a more challenging steady-state MHD generator and a transient solution of a benchmark MHD turbulence calculation for the full resistive MHD system are also presented. These results are obtained on up to 131,000 cores of a Cray XC40 and one million cores of a BG/Q system.

  5. Performance of fully-coupled algebraic multigrid preconditioners for large-scale VMS resistive MHD

    DOE PAGES

    Lin, P. T.; Shadid, J. N.; Hu, J. J.; ...

    2017-11-06

    Here, we explore the current performance and scaling of a fully-implicit stabilized unstructured finite element (FE) variational multiscale (VMS) capability for large-scale simulations of 3D incompressible resistive magnetohydrodynamics (MHD). The large-scale linear systems that are generated by a Newton nonlinear solver approach are iteratively solved by preconditioned Krylov subspace methods. The efficiency of this approach is critically dependent on the scalability and performance of the algebraic multigrid preconditioner. Our study considers the performance of the numerical methods as recently implemented in the second-generation Trilinos implementation that is 64-bit compliant and is not limited by the 32-bit global identifiers of the original Epetra-based Trilinos. The study presents representative results for a Poisson problem on 1.6 million cores of an IBM Blue Gene/Q platform to demonstrate very large-scale parallel execution. Additionally, results for a more challenging steady-state MHD generator and a transient solution of a benchmark MHD turbulence calculation for the full resistive MHD system are also presented. These results are obtained on up to 131,000 cores of a Cray XC40 and one million cores of a BG/Q system.

  6. Gravity waves produced by the total solar eclipse of 1 August 2008

    NASA Astrophysics Data System (ADS)

    Marty, Julien; Francis, Dalaudier; Damien, Ponceau; Elisabeth, Blanc; Ulziibat, Munkhuu

    2010-05-01

    Gravity waves are a major component of atmospheric small-scale dynamics because of their ability to transport energy and momentum over considerable distances and of their interactions with the mean circulation or other waves. They produce pressure variations which can be detected at the ground by microbarographs. The solar intensity reduction which occurs in the atmosphere during solar eclipses is known to act as a temporary source of large-scale gravity waves. Despite decades of research, observational evidence for a characteristic bow-wave response of the atmosphere to eclipse passages remains elusive. A new versatile numerical model (Marty, J. and Dalaudier, F.: Linear spectral numerical model for internal gravity wave propagation. J. Atmos. Sci. (in press)) is presented and applied to the cooling of the atmosphere during a solar eclipse. Calculated solutions appear to be in good agreement with ground pressure fluctuations recorded during the total solar eclipse of 1 August 2008. To the knowledge of the authors, this is the first time that such a result is presented. A three-dimensional linear spectral numerical model is used to propagate internal gravity wave fluctuations in a stably stratified atmosphere. The model is developed to get first-order estimations of gravity wave fluctuations produced by identified sources. It is based on the solutions of the linearized fundamental fluid equations and uses the fully-compressible dispersion relation for inertia-gravity waves. The spectral implementation excludes situations involving spatial variations of buoyancy frequency or background wind. However, density stratification variations are taken into account in the calculation of fluctuation amplitudes. In addition to gravity wave packet free propagation, the model handles both impulsive and continuous sources. It can account for spatial and temporal variations of the sources, allowing it to cover a broad range of physical situations. It is applied to the case of solar eclipses, which are known to produce large-scale bow waves on the Earth's surface. The asymptotic response to a Gaussian thermal forcing travelling at constant velocity as well as the transient response to the 4 December 2002 eclipse are presented. They show good agreement with previous numerical simulations. The model is then applied to the case of the 1 August 2008 solar eclipse. Ground pressure variations produced by the response to the solar intensity reduction in both stratosphere and troposphere are calculated. These synthetic signals are then compared to pressure variations recorded by IMS (International Monitoring System) infrasound stations and a temporary network specifically set up in Western Mongolia for this occasion. The pressure fluctuations produced by the 1 August 2008 solar eclipse are in a frequency band highly disturbed by atmospheric tides. Pressure variations produced by atmospheric tides and synoptic disturbances are thus characterized and removed from the signal. A low-frequency wave starting just after the passage of the eclipse is finally identified at all stations. Its frequency and amplitude are close to those calculated with our model, which strongly suggests that this signal was produced by the total solar eclipse.

  7. Gorilla and Orangutan Brains Conform to the Primate Cellular Scaling Rules: Implications for Human Evolution

    PubMed Central

    Herculano-Houzel, Suzana; Kaas, Jon H.

    2011-01-01

    Gorillas and orangutans are primates at least as large as humans, but their brains amount to about one third of the size of the human brain. This discrepancy has been used as evidence that the human brain is about 3 times larger than it should be for a primate species of its body size. In contrast to the view that the human brain is special in its size, we have suggested that it is the great apes that might have evolved bodies that are unusually large, on the basis of our recent finding that the cellular composition of the human brain matches that expected for a primate brain of its size, making the human brain a linearly scaled-up primate brain in its number of cells. To investigate whether the brain of great apes also conforms to the primate cellular scaling rules identified previously, we determine the numbers of neuronal and other cells that compose the orangutan and gorilla cerebella, use these numbers to calculate the size of the brain and of the cerebral cortex expected for these species, and show that these match the sizes described in the literature. Our results suggest that the brains of great apes also scale linearly in their numbers of neurons like other primate brains, including humans. The conformity of great apes and humans to the linear cellular scaling rules that apply to other primates that diverged earlier in primate evolution indicates that prehistoric Homo species as well as other hominins must have had brains that conformed to the same scaling rules, irrespective of their body size. We then used those scaling rules and published estimated brain volumes for various hominin species to predict the numbers of neurons that composed their brains. We predict that Homo heidelbergensis and Homo neanderthalensis had brains with approximately 80 billion neurons, within the range of variation found in modern Homo sapiens. We propose that while the cellular scaling rules that apply to the primate brain have remained stable in hominin evolution (since they apply to simians, great apes and modern humans alike), the Colobinae and Pongidae lineages favored marked increases in body size rather than brain size from the common ancestor with the Homo lineage, while the Homo lineage seems to have favored a large brain instead of a large body, possibly due to the metabolic limitations to having both. PMID:21228547

  8. Gorilla and orangutan brains conform to the primate cellular scaling rules: implications for human evolution.

    PubMed

    Herculano-Houzel, Suzana; Kaas, Jon H

    2011-01-01

    Gorillas and orangutans are primates at least as large as humans, but their brains amount to about one third of the size of the human brain. This discrepancy has been used as evidence that the human brain is about 3 times larger than it should be for a primate species of its body size. In contrast to the view that the human brain is special in its size, we have suggested that it is the great apes that might have evolved bodies that are unusually large, on the basis of our recent finding that the cellular composition of the human brain matches that expected for a primate brain of its size, making the human brain a linearly scaled-up primate brain in its number of cells. To investigate whether the brain of great apes also conforms to the primate cellular scaling rules identified previously, we determine the numbers of neuronal and other cells that compose the orangutan and gorilla cerebella, use these numbers to calculate the size of the brain and of the cerebral cortex expected for these species, and show that these match the sizes described in the literature. Our results suggest that the brains of great apes also scale linearly in their numbers of neurons like other primate brains, including humans. The conformity of great apes and humans to the linear cellular scaling rules that apply to other primates that diverged earlier in primate evolution indicates that prehistoric Homo species as well as other hominins must have had brains that conformed to the same scaling rules, irrespective of their body size. We then used those scaling rules and published estimated brain volumes for various hominin species to predict the numbers of neurons that composed their brains. We predict that Homo heidelbergensis and Homo neanderthalensis had brains with approximately 80 billion neurons, within the range of variation found in modern Homo sapiens. We propose that while the cellular scaling rules that apply to the primate brain have remained stable in hominin evolution (since they apply to simians, great apes and modern humans alike), the Colobinae and Pongidae lineages favored marked increases in body size rather than brain size from the common ancestor with the Homo lineage, while the Homo lineage seems to have favored a large brain instead of a large body, possibly due to the metabolic limitations to having both. Copyright © 2011 S. Karger AG, Basel.

  9. A Fast Method to Calculate the Spatial Impulse Response for 1-D Linear Ultrasonic Phased Array Transducers

    PubMed Central

    Zou, Cheng; Sun, Zhenguo; Cai, Dong; Muhammad, Salman; Zhang, Wenzeng; Chen, Qiang

    2016-01-01

    A method is developed to determine, accurately and efficiently, the spatial impulse response at specifically discretized observation points in the radiated field of 1-D linear ultrasonic phased array transducers. In contrast, previously adopted solutions only optimized the calculation procedure for a single rectangular transducer and required approximations or nonlinear calculation. In this research, an algorithm that follows an alternative approach to expedite the calculation of the spatial impulse response of a rectangular linear array is presented. The key assumption of the algorithm is that the transducer apertures are identical and linearly distributed, with a uniform pitch, on an infinite rigid baffle. Two points in the observation field that have the same position relative to two different transducer apertures share the same spatial impulse response contribution from the corresponding aperture. The observation field is discretized specifically to satisfy this equality. The analytical expressions of the proposed algorithm, based on this specific selection of observation points, are derived to remove redundant calculations. To evaluate the proposed methodology, simulation results obtained from the proposed method and the classical summation method are compared. The outcomes demonstrate that the proposed strategy speeds up the calculation, with a speed-up ratio that depends on the number of discrete points and the number of array transducers. This development will be valuable for advanced and faster linear ultrasonic phased array systems. PMID:27834799
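    The reuse idea described in this record can be caricatured as a cache keyed on the relative offset between an observation point and an array element, so each distinct offset is computed only once; the names and the scalar offset used below are assumptions made only to show the bookkeeping.

```python
# Conceptual sketch (assumed interface): when field points are laid out with
# the array pitch, the impulse response depends only on the point's offset
# relative to an element, so responses are cached by offset and reused.
import numpy as np

def array_response_table(rel_response, n_elements, n_points, pitch_steps=1):
    """rel_response(offset) -> impulse response for a lateral offset in grid
    steps; returns an (n_points, n_elements) table built from a small cache."""
    cache = {}
    table = np.empty((n_points, n_elements), dtype=object)
    for p in range(n_points):
        for j in range(n_elements):
            offset = p - j * pitch_steps          # relative position (grid units)
            if offset not in cache:               # compute each offset only once
                cache[offset] = rel_response(offset)
            table[p, j] = cache[offset]
    return table
```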

  10. The Uncertainty of Long-term Linear Trend in Global SST Due to Internal Variation

    NASA Astrophysics Data System (ADS)

    Lian, Tao

    2016-04-01

    In most parts of the global ocean, the magnitude of the long-term linear trend in sea surface temperature (SST) is much smaller than the amplitude of local multi-scale internal variation. One can thus use the record of a specified period to arbitrarily determine the value and the sign of the long-term linear trend in regional SST, leading to controversial conclusions on how global SST has responded to global warming in recent history. Analyzing the linear trend coefficient estimated by the ordinary least-squares method indicates that the linear trend consists of two parts: one related to the long-term change, and the other related to the multi-scale internal variation. The sign of the long-term change can be correctly reproduced only when the magnitude of the linear trend coefficient is greater than a theoretical threshold which scales the influence from the multi-scale internal variation. Otherwise, the sign of the linear trend coefficient will depend on the phase of the internal variation or, in other words, the period being used. An improved least-squares method is then proposed to reduce the theoretical threshold. When applying the new method to a global SST reconstruction from 1881 to 2013, we find that in a large part of the Pacific, the southern Indian Ocean and the North Atlantic, the influence of the multi-scale internal variation on the sign of the linear trend coefficient cannot be excluded. Therefore, the resulting warming and/or cooling linear trends in these regions cannot be fully attributed to global warming.
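    For reference, the ordinary least-squares trend and a crude noise-based check of its sign can be sketched as follows; the 2-sigma standard-error test used here is only a stand-in for the paper's theoretical threshold, and the synthetic SST series is invented.

```python
# Illustrative OLS trend of a synthetic SST record plus a naive check of
# whether the trend magnitude exceeds twice its standard error. This is a
# stand-in for the paper's theoretical threshold, not a reproduction of it.
import numpy as np

def ols_trend(t, y):
    slope, intercept = np.polyfit(t, y, 1)
    resid = y - (intercept + slope * t)
    se = np.sqrt(resid.var(ddof=2) / np.sum((t - t.mean()) ** 2))
    return slope, se

t = np.arange(1881, 2014, dtype=float)
rng = np.random.default_rng(0)
sst = (0.005 * (t - t[0])                                  # weak long-term trend
       + 0.3 * np.sin(2 * np.pi * (t - t[0]) / 60.0)       # multi-decadal variation
       + 0.1 * rng.standard_normal(t.size))                # noise
slope, se = ols_trend(t, sst)
print(f"trend = {slope:.4f} per yr, sign robust to noise: {abs(slope) > 2 * se}")
```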

  11. Large-scale dynamo action precedes turbulence in shearing box simulations of the magnetorotational instability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhat, Pallavi; Ebrahimi, Fatima; Blackman, Eric G.

    Here, we study the dynamo generation (exponential growth) of large-scale (planar averaged) fields in unstratified shearing box simulations of the magnetorotational instability (MRI). In contrast to previous studies restricted to horizontal (x–y) averaging, we also demonstrate the presence of large-scale fields when vertical (y–z) averaging is employed instead. By computing space–time planar averaged fields and power spectra, we find large-scale dynamo action in the early MRI growth phase – a previously unidentified feature. Non-axisymmetric linear MRI modes with low horizontal wavenumbers and vertical wavenumbers near that of expected maximal growth amplify the large-scale fields exponentially before turbulence and high-wavenumber fluctuations arise. Thus the large-scale dynamo requires only linear fluctuations but not non-linear turbulence (as defined by mode–mode coupling). Vertical averaging also allows for monitoring the evolution of the large-scale vertical field, and we find that a feedback from horizontal low-wavenumber MRI modes provides a clue as to why the large-scale vertical field sustains against turbulent diffusion in the non-linear saturation regime. We compute the terms in the mean field equations to identify the individual contributions to large-scale field growth for both types of averaging. The large-scale fields obtained from vertical averaging are found to compare well with global simulations and quasi-linear analytical analysis from a previous study by Ebrahimi & Blackman. We discuss the potential implications of these new results for understanding the large-scale MRI dynamo saturation and turbulence.

  12. Large-scale dynamo action precedes turbulence in shearing box simulations of the magnetorotational instability

    DOE PAGES

    Bhat, Pallavi; Ebrahimi, Fatima; Blackman, Eric G.

    2016-07-06

    Here, we study the dynamo generation (exponential growth) of large-scale (planar averaged) fields in unstratified shearing box simulations of the magnetorotational instability (MRI). In contrast to previous studies restricted to horizontal (x–y) averaging, we also demonstrate the presence of large-scale fields when vertical (y–z) averaging is employed instead. By computing space–time planar averaged fields and power spectra, we find large-scale dynamo action in the early MRI growth phase – a previously unidentified feature. Non-axisymmetric linear MRI modes with low horizontal wavenumbers and vertical wavenumbers near that of expected maximal growth amplify the large-scale fields exponentially before turbulence and high-wavenumber fluctuations arise. Thus the large-scale dynamo requires only linear fluctuations but not non-linear turbulence (as defined by mode–mode coupling). Vertical averaging also allows for monitoring the evolution of the large-scale vertical field, and we find that a feedback from horizontal low-wavenumber MRI modes provides a clue as to why the large-scale vertical field sustains against turbulent diffusion in the non-linear saturation regime. We compute the terms in the mean field equations to identify the individual contributions to large-scale field growth for both types of averaging. The large-scale fields obtained from vertical averaging are found to compare well with global simulations and quasi-linear analytical analysis from a previous study by Ebrahimi & Blackman. We discuss the potential implications of these new results for understanding the large-scale MRI dynamo saturation and turbulence.

  13. Efficient calculation of cosmological neutrino clustering in the non-linear regime

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Archidiacono, Maria; Hannestad, Steen, E-mail: archi@phys.au.dk, E-mail: sth@phys.au.dk

    2016-06-01

    We study in detail how neutrino perturbations can be followed in linear theory by using only terms up to l = 2 in the Boltzmann hierarchy. We provide a new approximation to the third moment and demonstrate that the neutrino power spectrum can be calculated to a precision of better than ∼ 5% for masses up to ∼ 1 eV and k ≲ 10 h/Mpc. The matter power spectrum can be calculated far more precisely and typically at least a factor of a few better than with existing approximations. We then proceed to study how the neutrino power spectrum can be reliably calculated even in the non-linear regime by using the non-linear gravitational potential, sourced by dark matter overdensities, as it is derived from semi-analytic methods based on N-body simulations in the Boltzmann evolution hierarchy. Our results agree extremely well with results derived from N-body simulations that include cold dark matter and neutrinos as independent particles with different properties.

  14. How Darcy's equation is linked to the linear reservoir at catchment scale

    NASA Astrophysics Data System (ADS)

    Savenije, Hubert H. G.

    2017-04-01

    In groundwater hydrology two simple linear equations exist that describe the relation between groundwater flow and the gradient that drives it: Darcy's equation and the linear reservoir. Both equations are empirical at heart: Darcy's equation at the laboratory scale and the linear reservoir at the watershed scale. Although at first sight they show similarity, without having detailed knowledge of the structure of the underlying aquifers it is not trivial to upscale Darcy's equation to the watershed scale. In this paper, a relatively simple connection is provided between the two, based on the assumption that the groundwater system is organized by an efficient drainage network, a mostly invisible pattern that has evolved over geological time scales. This drainage network provides equally distributed resistance to flow along the streamlines that connect the active groundwater body to the stream, much like a leaf is organized to provide all stomata access to moisture at equal resistance.
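    For reference, the two empirical relations contrasted in this record can be written as follows; the symbols are chosen here for illustration and may differ from the author's notation.

```latex
% Darcy's law at the laboratory scale (q: specific discharge, K: hydraulic
% conductivity, h: hydraulic head) and the linear reservoir at catchment
% scale (Q: discharge, S: storage, k: reservoir time scale, R: recharge).
q = -K \, \frac{\mathrm{d}h}{\mathrm{d}x},
\qquad
Q = \frac{S}{k}, \qquad \frac{\mathrm{d}S}{\mathrm{d}t} = R - Q .
```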

  15. Groundwater recharge assessment at local and episodic scale in a soil mantled perched karst aquifer in southern Italy

    USGS Publications Warehouse

    Allocca, V.; De Vita, P.; Manna, F.; Nimmo, John R.

    2015-01-01

    Depending on the seasonally varying air temperature, evapotranspiration, and precipitation patterns, calculated values of RPR varied between 35% and 97% among the individual episodes. A multiple linear correlation of the RPR with both the average intensity of recharging rainfall events and the antecedent soil water content was calculated. Given the relatively easy measurability of precipitation and soil water content, such an empirical model would have great hydrogeological and practical utility. It would facilitate short-term forecasting of recharge in karst aquifers of the Mediterranean region and other aquifers with similar hydrogeological characteristics. By establishing relationships between the RPR and climate-dependent variables such as average storm intensity, it would facilitate prediction of climate-change effects on groundwater recharge. The EMR methodology could further be applied to other aquifers for evaluating the relationship of recharge to various hydrometeorological and hydrogeological processes.

  16. On the calculation of turbulent heat transport downstream from an abrupt pipe expansion

    NASA Technical Reports Server (NTRS)

    Chieng, C. C.; Launder, B. E.

    1980-01-01

    A numerical study is reported of flow and heat transfer in the separated flow region created by an abrupt pipe expansion. Computations employed an adaptation of the TEACH-2E computer program with the standard model of turbulence. Emphasis is given to the simulation, from both a physical and numerical viewpoint, of the region in the immediate vicinity of the wall where turbulent transport gives way to molecular conduction and diffusion. Wall resistance laws or wall functions used to bridge this near-wall region are based on the idea that, beyond the viscous sublayer, the turbulent length scale is universal, increasing linearly with distance from the wall. Predictions of experimental data for a diameter ratio of 0.54 show generally encouraging agreement with experiment. At a diameter ratio of 0.43, different trends are discernible between measurement and calculation, though this appears to be due to effects unconnected with the wall region studied.

  17. Framework for scalable adsorbate–adsorbate interaction models

    DOE PAGES

    Hoffmann, Max J.; Medford, Andrew J.; Bligaard, Thomas

    2016-06-02

    Here, we present a framework for physically motivated models of adsorbate–adsorbate interaction between small molecules on transition and coinage metals based on modifications to the substrate electronic structure due to adsorption. We use this framework to develop one model for transition and one for coinage metal surfaces. The models for transition metals are based on the d-band center position, and the models for coinage metals are based on partial charges. The models require no empirical parameters, only two first-principles calculations per adsorbate as input, and therefore scale linearly with the number of reaction intermediates. By theory-to-theory comparison with explicit density functional theory calculations over a wide range of adsorbates and surfaces, we show that the root-mean-squared error for differential adsorption energies is less than 0.2 eV for up to 1 ML coverage.

  18. Natural bond orbital analysis, electronic structure, non-linear properties and vibrational spectral analysis of L-histidinium bromide monohydrate: a density functional theory.

    PubMed

    Sajan, D; Joseph, Lynnette; Vijayan, N; Karabacak, M

    2011-10-15

    The spectroscopic properties of the crystallized nonlinear optical molecule L-histidinium bromide monohydrate (abbreviated as L-HBr-mh) have been recorded and analyzed by FT-IR, FT-Raman and UV techniques. The equilibrium geometry, vibrational wavenumbers and the first order hyperpolarizability of the crystal were calculated with the help of density functional theory computations. The optimized geometric bond lengths and bond angles obtained by using DFT (B3LYP/6-311++G(d,p)) show good agreement with the experimental data. The complete assignments of fundamental vibrations were performed on the basis of the total energy distribution (TED) of the vibrational modes, calculated with the scaled quantum mechanics (SQM) method. The natural bond orbital (NBO) analysis confirms the occurrence of strong intra- and intermolecular N-H⋯O hydrogen bonding. Copyright © 2011 Elsevier B.V. All rights reserved.

  19. Low-memory iterative density fitting.

    PubMed

    Grajciar, Lukáš

    2015-07-30

    A new low-memory modification of the density fitting approximation based on a combination of a continuous fast multipole method (CFMM) and a preconditioned conjugate gradient solver is presented. The iterative conjugate gradient solver uses preconditioners formed from blocks of the Coulomb metric matrix that decrease the number of iterations needed for convergence by up to one order of magnitude. The matrix-vector products needed within the iterative algorithm are calculated using CFMM, which evaluates them with only linear-scaling memory requirements. Compared with the standard density fitting implementation, up to a 15-fold reduction of the memory requirements is achieved for the most efficient preconditioner, at the cost of only a 25% increase in computational time. The potential of the method is demonstrated by performing density functional theory calculations for a zeolite fragment with 2592 atoms and 121,248 auxiliary basis functions on a single 12-core CPU workstation. © 2015 Wiley Periodicals, Inc.
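    The linear-algebra core of this record, a conjugate-gradient solve of the Coulomb-metric equations with a block preconditioner, can be sketched as below. The dense stand-in matrix, block partitioning and sizes are invented; in the actual method the matrix-vector products would come from CFMM rather than from an explicit matrix.

```python
# Hedged sketch: solve J c = gamma with preconditioned CG, where the
# preconditioner is assembled from inverted diagonal blocks of the metric J.
# The SPD matrix below is a random stand-in for the Coulomb metric.
import numpy as np
from scipy.sparse.linalg import cg, LinearOperator
from scipy.linalg import block_diag, inv

rng = np.random.default_rng(0)
n, nblk = 120, 6
A = rng.standard_normal((n, n))
J = A @ A.T + n * np.eye(n)                 # symmetric positive definite stand-in
gamma = rng.standard_normal(n)

size = n // nblk
blocks = [inv(J[i * size:(i + 1) * size, i * size:(i + 1) * size]) for i in range(nblk)]
P = block_diag(*blocks)                     # block-diagonal approximate inverse
M = LinearOperator((n, n), matvec=lambda v: P @ v)

c, info = cg(J, gamma, M=M)
assert info == 0                            # converged
```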

  20. Nonlinear evolution of baryon acoustic oscillations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crocce, Martin; Institut de Ciencies de l'Espai, IEEC-CSIC, Campus UAB, Facultat de Ciencies, Torre C5 par-2, Barcelona 08193; Scoccimarro, Roman

    2008-01-15

    We study the nonlinear evolution of baryon acoustic oscillations in the dark matter power spectrum and the correlation function using renormalized perturbation theory. In a previous paper we showed that renormalized perturbation theory successfully predicts the damping of acoustic oscillations; here we extend our calculation to the enhancement of power due to mode coupling. We show that mode coupling generates additional oscillations that are out of phase with those in the linear spectrum, leading to shifts in the scales of oscillation nodes defined with respect to a smooth spectrum. When Fourier transformed, these out-of-phase oscillations induce percent-level shifts in the acoustic peak of the two-point correlation function. We present predictions for these shifts as a function of redshift; these should be considered as a robust lower limit to the more realistic case that includes, in addition, redshift distortions and galaxy bias. We show that these nonlinear effects occur at very large scales, leading to a breakdown of linear theory at scales much larger than commonly thought. We discuss why virialized halo profiles are not responsible for these effects, which can be understood from basic physics of gravitational instability. Our results are in excellent agreement with numerical simulations, and can be used as a starting point for modeling baryon acoustic oscillations in future observations. To meet this end, we suggest a simple physically motivated model to correct for the shifts caused by mode coupling.

  1. Superfluidity in Strongly Interacting Fermi Systems with Applications to Neutron Stars

    NASA Astrophysics Data System (ADS)

    Khodel, Vladimir

    The rotational dynamics and cooling history of neutron stars are influenced by the superfluid properties of nucleonic matter. In this thesis a novel separation technique is applied to the analysis of the gap equation for neutron matter. It is shown that the problem can be recast into two tasks: solving a simple system of linear integral equations for the shape functions of various components of the gap function and solving a system of non-linear algebraic equations for their scale factors. Important simplifications result from the fact that the ratio of the gap amplitude to the Fermi energy provides a small parameter in this problem. The relationship between the analytic structure of the shape functions and the density interval for the existence of a superfluid gap is discussed. It is shown that in the 1S0 channel the position of the first zero of the shape function gives an estimate of the upper critical density. The relation between the resonant behavior of the two-neutron interaction in this channel and the density dependence of the gap is established. The behavior of the gap in the limits of low and high densities is analyzed. Various approaches to the calculation of the scale factors are considered: model cases, angular averaging, and perturbation theory. An optimization-based approach is proposed. The shape functions and scale factors for the Argonne v14 and v18 potentials are determined in singlet and triplet channels. Dependence of the solution on the value of the effective mass and on medium polarization is studied.

  2. On the bispectra of very massive tracers in the Effective Field Theory of Large-Scale Structure

    DOE PAGES

    Nadler, Ethan O.; Perko, Ashley; Senatore, Leonardo

    2018-02-01

    The Effective Field Theory of Large-Scale Structure (EFTofLSS) provides a consistent perturbative framework for describing the statistical distribution of cosmological large-scale structure. In a previous EFTofLSS calculation that involved the one-loop power spectra and tree-level bispectra, it was shown that the k-reach of the prediction for biased tracers is comparable for all investigated masses if suitable higher-derivative biases, which are less suppressed for more massive tracers, are added. However, it is possible that the non-linear biases grow faster with tracer mass than the linear bias, implying that loop contributions could be the leading correction to the bispectra. To check this, we include the one-loop contributions in a fit to numerical data in the limit of strongly enhanced higher-order biases. Here, we show that the resulting one-loop power spectra and higher-derivative plus leading one-loop bispectra fit the two- and three-point functions respectively up to k ≃ 0.19 h Mpc^-1 and k ≃ 0.14 h Mpc^-1 at the percent level. We find that the higher-order bias coefficients are not strongly enhanced, and we argue that the gain in perturbative reach due to the leading one-loop contributions to the bispectra is relatively small. Thus, we conclude that higher-derivative biases provide the leading correction to the bispectra for tracers of a very wide range of masses.

  3. On the bispectra of very massive tracers in the Effective Field Theory of Large-Scale Structure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nadler, Ethan O.; Perko, Ashley; Senatore, Leonardo

    The Effective Field Theory of Large-Scale Structure (EFTofLSS) provides a consistent perturbative framework for describing the statistical distribution of cosmological large-scale structure. In a previous EFTofLSS calculation that involved the one-loop power spectra and tree-level bispectra, it was shown that the k-reach of the prediction for biased tracers is comparable for all investigated masses if suitable higher-derivative biases, which are less suppressed for more massive tracers, are added. However, it is possible that the non-linear biases grow faster with tracer mass than the linear bias, implying that loop contributions could be the leading correction to the bispectra. To check this, we include the one-loop contributions in a fit to numerical data in the limit of strongly enhanced higher-order biases. Here, we show that the resulting one-loop power spectra and higher-derivative plus leading one-loop bispectra fit the two- and three-point functions respectively up to k ≃ 0.19 h Mpc^-1 and k ≃ 0.14 h Mpc^-1 at the percent level. We find that the higher-order bias coefficients are not strongly enhanced, and we argue that the gain in perturbative reach due to the leading one-loop contributions to the bispectra is relatively small. Thus, we conclude that higher-derivative biases provide the leading correction to the bispectra for tracers of a very wide range of masses.

  4. [Evaluation of pendulum testing of spasticity].

    PubMed

    Le Cavorzin, P; Hernot, X; Bartier, O; Carrault, G; Chagneau, F; Gallien, P; Allain, H; Rochcongar, P

    2002-11-01

    To identify valid measurements of spasticity derived from the pendulum test of the leg in a representative population of spastic patients. Pendulum testing was performed in 15 spastic and 10 matched healthy subjects. The reflex-mediated torque evoked in quadriceps femoris, as well as muscle mechanical parameters (viscosity and elasticity), were calculated using mathematical modelling. Correlation with the two main measures derived from the pendulum test reported in the literature (the Relaxation Index and the area under the curve) was calculated in order to select the most valid. Among mechanical parameters, only viscosity was found to be significantly higher in the spastic group. As expected, the computed integral of the reflex-mediated torque was found to be larger in spastics than in healthy subjects. A significant non-linear (logarithmic) correlation was found between the clinically-assessed muscle spasticity (Ashworth grading) and the computed reflex-mediated torque, emphasising the non-linear behaviour of this scale. Among measurements derived from the pendulum test which are proposed in the literature for routine estimation of spasticity, the Relaxation Index exhibited an unsuitable U-shaped pattern of variation with increasing reflex-mediated torque. By contrast, the area under the curve varied linearly with the reflex-mediated torque, which is more convenient for routine estimation of spasticity. The pendulum test of the leg is a simple technique for the assessment of spastic hypertonia. However, the measurement generally used in the literature (the Relaxation Index) exhibits serious limitations, and would benefit from being replaced by more valid measures, such as the area under the goniometric curve, especially for the assessment of therapeutics.

  5. An Inverse Modeling Approach to Estimating Phytoplankton Pigment Concentrations from Phytoplankton Absorption Spectra

    NASA Technical Reports Server (NTRS)

    Moisan, John R.; Moisan, Tiffany A. H.; Linkswiler, Matthew A.

    2011-01-01

    Phytoplankton absorption spectra and High-Performance Liquid Chromatography (HPLC) pigment observations from the Eastern U.S. and global observations from NASA's SeaBASS archive are used in a linear inverse calculation to extract pigment-specific absorption spectra. Using these pigment-specific absorption spectra to reconstruct the phytoplankton absorption spectra results in high correlations at all visible wavelengths (r(sup 2) from 0.83 to 0.98), and linear regressions (slopes ranging from 0.8 to 1.1). Higher correlations (r(sup 2) from 0.75 to 1.00) are obtained in the visible portion of the spectra when the total phytoplankton absorption spectra are unpackaged by multiplying the entire spectra by a factor that sets the total absorption at 675 nm to that expected from absorption spectra reconstruction using measured pigment concentrations and laboratory-derived pigment-specific absorption spectra. The derived pigment-specific absorption spectra were further used with the total phytoplankton absorption spectra in a second linear inverse calculation to estimate the various phytoplankton HPLC pigments. A comparison between the estimated and measured pigment concentrations for the 18 pigment fields showed good correlations (r(sup 2) greater than 0.5) for 7 pigments and very good correlations (r(sup 2) greater than 0.7) for chlorophyll a and fucoxanthin. Higher correlations result when the analysis is carried out at more local geographic scales. The ability to estimate phytoplankton pigments using pigment-specific absorption spectra is critical for using hyperspectral inverse models to retrieve phytoplankton pigment concentrations and other Inherent Optical Properties (IOPs) from passive remote sensing observations.
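    The second inversion described above (pigment concentrations from a measured absorption spectrum, given pigment-specific spectra) amounts to a constrained linear least-squares problem; a non-negative least-squares solver is one reasonable choice and is only an assumption here, since the abstract does not specify the study's exact inversion scheme.

```python
# Illustrative linear inverse step: estimate pigment concentrations from a
# measured phytoplankton absorption spectrum using non-negative least squares.
import numpy as np
from scipy.optimize import nnls

def estimate_pigments(a_ph, a_star):
    """a_ph: (n_wavelengths,) measured absorption spectrum;
       a_star: (n_wavelengths, n_pigments) pigment-specific absorption spectra."""
    conc, residual_norm = nnls(a_star, a_ph)   # concentrations constrained >= 0
    return conc
```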

  6. Elastic interaction of hydrogen atoms on graphene: A multiscale approach from first principles to continuum elasticity

    NASA Astrophysics Data System (ADS)

    Branicio, Paulo S.; Vastola, Guglielmo; Jhon, Mark H.; Sullivan, Michael B.; Shenoy, Vivek B.; Srolovitz, David J.

    2016-10-01

    The deformation of graphene due to the chemisorption of hydrogen atoms on its surface and the long-range elastic interaction between hydrogen atoms induced by these deformations are investigated using a multiscale approach based on first principles, empirical interactions, and continuum modeling. Focus is given to the intrinsic low-temperature structure and interactions. Therefore, all calculations are performed at T =0 , neglecting possible temperature or thermal fluctuation effects. Results from different methods agree well and consistently describe the local deformation of graphene on multiple length scales reaching 500 Å. The results indicate that the elastic interaction mediated by this deformation is significant and depends on the deformation of the graphene sheet both in and out of plane. Surprisingly, despite the isotropic elasticity of graphene, within the linear elastic regime, atoms elastically attract or repel each other depending on (i) the specific site at which they are chemisorbed; (ii) the relative position of the sites; and (iii) whether they are on the same or on opposite sides of the sheet. The interaction energy sign and power-law decay calculated from molecular statics agree well with theoretical predictions from linear elasticity theory, considering in-plane or out-of-plane deformations as a superposition or in a coupled nonlinear approach. Deviations in the exact power law between molecular statics and the linear elastic analysis are evidence of the importance of nonlinear effects on the elasticity of monolayer graphene. These results have implications for the understanding of the generation of clusters and regular formations of hydrogen and other chemisorbed atoms on graphene.

  7. Energy Level Alignment at the Interface between Linear-Structured Benzenediamine Molecules and Au(111) Surface

    NASA Astrophysics Data System (ADS)

    Li, Guo; Rangel, Tonatiuh; Liu, Zhenfei; Cooper, Valentino; Neaton, Jeffrey

    Using density functional theory with model self-energy corrections, we calculate the adsorption energetics and geometry, and the energy level alignment of benzenediamine (BDA) molecules adsorbed on Au(111) surfaces. Our calculations show that linear structures of BDA, stabilized via hydrogen bonds between amine groups, are energetically more favorable than monomeric phases. Moreover, our self-energy-corrected calculations of energy level alignment show that the highest occupied molecular orbital energy of the BDA linear structure lies deeper relative to the Fermi level than that of the isolated monomer, and agrees well with the values measured with photoemission spectroscopy. This work was supported by DOE.

  8. Quantitative genetic properties of four measures of deformity in yellowtail kingfish Seriola lalandi Valenciennes, 1833.

    PubMed

    Nguyen, N H; Whatmore, P; Miller, A; Knibb, W

    2016-02-01

    The main aim of this study was to estimate the heritability for four measures of deformity and their genetic associations with growth (body weight and length), carcass (fillet weight and yield) and flesh-quality (fillet fat content) traits in yellowtail kingfish Seriola lalandi. The observed major deformities included lower jaw, nasal erosion, deformed operculum and skinny fish in 480 individuals from 22 families at Clean Seas Tuna Ltd. They were typically recorded as binary traits (presence or absence) and were analysed separately by both threshold generalized models and standard animal mixed models. Consistency of the models was evaluated by calculating the simple Pearson correlation of breeding values of full-sib families for jaw deformity. Genetic and phenotypic correlations among traits were estimated using a multitrait linear mixed model in ASReml. Both threshold and linear mixed model analysis showed that there is additive genetic variation in the four measures of deformity, with the estimates of heritability obtained from the former (threshold) models on the liability scale ranging from 0.14 to 0.66 (SE 0.32-0.56) and from the latter (linear animal and sire) models on the original (observed) scale, 0.01-0.23 (SE 0.03-0.16). When the estimates on the underlying liability were transformed to the observed scale (0, 1), they were generally consistent between threshold and linear mixed models. Phenotypic correlations among deformity traits were weak (close to zero). The genetic correlations among deformity traits were not significantly different from zero. Body weight and fillet carcass showed significant positive genetic correlations with jaw deformity (0.75 and 0.95, respectively). Genetic correlation between body weight and operculum was negative (-0.51, P < 0.05). The estimated genetic correlations of body and carcass traits with the other deformity measures were not significant owing to their relatively high standard errors. Our results showed that there are prospects for genetic selection to reduce deformity in yellowtail kingfish and that measures of deformity should be included in the recording scheme, breeding objectives and selection index in practical selective breeding programmes due to the antagonistic genetic correlations of deformed jaws with body and carcass performance. © 2015 John Wiley & Sons Ltd.

  9. Computation of shear-induced collective-diffusivity in emulsions

    NASA Astrophysics Data System (ADS)

    Malipeddi, Abhilash Reddy; Sarkar, Kausik

    2017-11-01

    The shear-induced collective-diffusivity of drops in an emulsion is calculated through simulation. A front-tracking finite difference method is used to integrate the Navier-Stokes equations. When a cloud of drops is subjected to shear flow, after a certain time, the width of the cloud increases with the 1/3 power of time. This scaling of drop-cloud width with time is characteristic of (sub-)diffusion that arises from irreversible two-drop interactions. The collective diffusivity is calculated from this relationship. A feature of the procedure adopted here is the modest computational requirement, wherein a few drops (~70) sheared for a short time (~70 strain) is found to be sufficient to obtain a good estimate. As far as we know, collective diffusivity has not been calculated for drops through simulation until now. The computed values match experimental measurements reported in the literature. The diffusivity in emulsions is calculated for a range of Capillary (Ca) and Reynolds (Re) numbers. It is found to be a unimodal function of Ca, similar to self-diffusivity. A sub-linear increase of the diffusivity with Re is seen for Re < 5. This work has been limited to a viscosity-matched case.
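    The diffusivity extraction hinges on the t^{1/3} spreading law quoted above; a minimal sketch of recovering the exponent and prefactor from simulated cloud-width data is given below (the mapping from the prefactor to the collective diffusivity itself is not reproduced here).

```python
# Minimal sketch: fit log(width) against log(time) to recover the spreading
# exponent (expected ~1/3) and the prefactor of the power law.
import numpy as np

def fit_spreading_law(t, width):
    slope, intercept = np.polyfit(np.log(t), np.log(width), 1)
    return slope, np.exp(intercept)   # exponent and prefactor of width ~ A * t^slope
```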

  10. Validation of Normalizations, Scaling, and Photofading Corrections for FRAP Data Analysis

    PubMed Central

    Kang, Minchul; Andreani, Manuel; Kenworthy, Anne K.

    2015-01-01

    Fluorescence Recovery After Photobleaching (FRAP) has been a versatile tool to study transport and reaction kinetics in live cells. Since the fluorescence data generated by fluorescence microscopy are in a relative scale, a wide variety of scalings and normalizations are used in quantitative FRAP analysis. Scaling and normalization are often required to account for inherent properties of diffusing biomolecules of interest or photochemical properties of the fluorescent tag such as mobile fraction or photofading during image acquisition. In some cases, scaling and normalization are also used for computational simplicity. However, to our best knowledge, the validity of those various forms of scaling and normalization has not been studied in a rigorous manner. In this study, we investigate the validity of various scalings and normalizations that have appeared in the literature to calculate mobile fractions and correct for photofading and assess their consistency with FRAP equations. As a test case, we consider linear or affine scaling of normal or anomalous diffusion FRAP equations in combination with scaling for immobile fractions. We also consider exponential scaling of either FRAP equations or FRAP data to correct for photofading. Using a combination of theoretical and experimental approaches, we show that compatible scaling schemes should be applied in the correct sequential order; otherwise, erroneous results may be obtained. We propose a hierarchical workflow to carry out FRAP data analysis and discuss the broader implications of our findings for FRAP data analysis using a variety of kinetic models. PMID:26017223
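    One of the normalization schemes whose validity the study examines is the common "double normalization" that corrects for photofading by dividing the bleach-region signal by a reference-region signal and rescaling by the prebleach ratio; the sketch below shows that scheme only as an example, since whether it is compatible with a given FRAP model is exactly the question the paper addresses.

```python
# Example double normalization for FRAP data (photofading correction).
# Whether this scheme is valid for a particular FRAP model is the kind of
# question the cited study investigates; this is illustrative only.
import numpy as np

def double_normalize(frap, ref, n_prebleach):
    frap = np.asarray(frap, dtype=float)   # bleach-ROI intensity time series
    ref = np.asarray(ref, dtype=float)     # unbleached reference-ROI series
    pre_frap = frap[:n_prebleach].mean()
    pre_ref = ref[:n_prebleach].mean()
    return (frap / ref) * (pre_ref / pre_frap)
```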

  11. Barrier island morphodynamic classification based on lidar metrics for north Assateague Island, Maryland

    USGS Publications Warehouse

    Brock, John C.; Krabill, William; Sallenger, Asbury H.

    2004-01-01

    In order to reap the potential of airborne lidar surveys to provide geological information useful in understanding coastal sedimentary processes acting on various time scales, a new set of analysis methods is needed. This paper presents a multi-temporal lidar analysis of north Assateague Island, Maryland, and demonstrates the calculation of lidar metrics that condense barrier island morphology and morphological change into attributed linear features that may be used to analyze trends in coastal evolution. The new methods proposed in this paper are also of significant practical value, because lidar metric analysis reduces large volumes of point elevations into linear features attributed with essential morphological variables that are ideally suited for inclusion in Geographic Information Systems. A morphodynamic classification of north Assateague Island for a recent 10-month period, based on the recognition of simple patterns described by lidar change metrics, is presented. Such a morphodynamic classification reveals the relative magnitude and the fine-scale alongshore variation in the importance of coastal changes over the study area during a defined time period. More generally, through the presentation of this morphodynamic classification of north Assateague Island, the value of lidar metrics both in examining large lidar data sets for coherent trends and in building hypotheses regarding processes driving barrier evolution is demonstrated.

  12. The linear tearing instability in three dimensional, toroidal gyro-kinetic simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hornsby, W. A., E-mail: william.hornsby@ipp.mpg.de; Migliano, P.; Buchholz, R.

    2015-02-15

    Linear gyro-kinetic simulations of the classical tearing mode in three-dimensional toroidal geometry were performed using the global gyro-kinetic turbulence code, GKW. The results were benchmarked against cylindrical ideal MHD and analytical theory calculations. The stability, growth rate, and frequency of the mode were investigated by varying the current profile, collisionality, and the pressure gradients. Both collisionless and semi-collisional tearing modes were found with a smooth transition between the two. A residual, finite rotation frequency of the mode even in the absence of a pressure gradient is observed, which is attributed to toroidal finite Larmor-radius effects. When a pressure gradient is present at low collisionality, the mode rotates at the expected electron diamagnetic frequency. However, the island rotation reverses direction at high collisionality. The growth rate is found to follow an η^(1/7) scaling with collisional resistivity in the semi-collisional regime, closely following the semi-collisional scaling found by Fitzpatrick. The stability of the mode closely follows the stability analysis performed by Hastie et al. using the same current and safety factor profiles but for cylindrical geometry; however, here a modification due to toroidal coupling and pressure effects is seen.

  13. Design of a quasi-flat linear permanent magnet generator for pico-scale wave energy converter in south coast of Yogyakarta, Indonesia

    NASA Astrophysics Data System (ADS)

    Azhari, Budi; Prawinnetou, Wassy; Hutama, Dewangga Adhyaksa

    2017-03-01

    Indonesia has several potential ocean energy resources to utilize. One of them is tidal wave energy, with a potential of about 49 GW. To convert tidal wave energy to electricity, a linear permanent magnet generator (LPMG) is considered the most suitable device. In this paper, a pico-scale tidal wave power converter was designed using a quasi-flat LPMG. The generator was intended for deployment on the southern coast of Yogyakarta, Indonesia and was expected to generate 1 kW of output. First, a quasi-flat LPMG was designed based on the expected output power and the wave characteristics at the deployment site. The design was then simulated using the finite element software FEMM. Finally, the output values were calculated and the output characteristics were analyzed. The results showed that the designed power plant was able to produce an output power of 725.78 Wp per phase, with an electrical efficiency of 64.5%. The output characteristics of the LPMG show that the output power increases as the average wave height or wave period increases, and the efficiency increases with the external load resistance, while the output power is maximized at a load resistance of 11 Ω.
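
    The reported behaviour (output power peaking at a particular load resistance while efficiency keeps rising with load) is what a simple per-phase equivalent circuit predicts; the toy sketch below uses made-up EMF and internal-impedance values, not the paper's machine parameters:

        import numpy as np

        E = 110.0                   # per-phase EMF amplitude (V), illustrative value
        R_int, X_int = 4.0, 10.0    # assumed internal resistance and reactance (ohm)

        R_load = np.linspace(1.0, 40.0, 400)
        I = E / np.sqrt((R_int + R_load) ** 2 + X_int ** 2)   # load current
        P_out = I ** 2 * R_load                               # power delivered to the load
        eff = P_out / (P_out + I ** 2 * R_int)                # copper-loss-only efficiency

        k = np.argmax(P_out)
        print(f"output power peaks at R_load = {R_load[k]:.1f} ohm "
              f"({P_out[k]:.0f} W per phase, efficiency {eff[k]:.2f})")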

  14. Estimation of median human lethal radiation dose computed from data on occupants of reinforced concrete structures in Nagasaki, Japan.

    PubMed

    Levin, S G; Young, R W; Stohler, R L

    1992-11-01

    This paper presents an estimate of the median lethal dose for humans exposed to total-body irradiation and not subsequently treated for radiation sickness. The median lethal dose was estimated from calculated doses to young adults who were inside two reinforced concrete buildings that remained standing in Nagasaki after the atomic detonation. The individuals in this study, none of whom had previously had calculated doses, were identified from a detailed survey done previously. The radiation dose to the bone marrow, which was taken as the critical radiation site, was calculated for each individual by the Engineering Physics and Mathematics Division of the Oak Ridge National Laboratory using a new three-dimensional discrete-ordinates radiation transport code that was developed and validated for this study using the latest site geometry, radiation yield, and spectra data. The study cohort consisted of 75 individuals who either survived > 60 d or died between the second and 60th d postirradiation due to radiation injury, without burns or other serious injury. Median lethal dose estimates were calculated using both logarithmic (2.9 Gy) and linear (3.4 Gy) dose scales. Both calculations, which met statistical validity tests, support previous estimates of the median lethal dose based solely on human data, which cluster around 3 Gy.
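
    A median lethal dose on either a linear or a logarithmic dose scale is typically obtained from binary mortality data by probit regression; the sketch below uses synthetic data and a simple maximum-likelihood fit, and is not the dosimetry or statistical procedure of the study:

        import numpy as np
        from scipy.optimize import minimize
        from scipy.stats import norm

        # synthetic cohort: dose in Gy, outcome 1 = death within 60 d, 0 = survival
        rng = np.random.default_rng(1)
        dose = rng.uniform(0.5, 6.0, 75)
        death = (rng.random(75) < norm.cdf((dose - 3.0) / 0.8)).astype(float)

        def neg_loglik(params, x, y):
            mu, log_sigma = params
            p = norm.cdf((x - mu) / np.exp(log_sigma)).clip(1e-9, 1 - 1e-9)
            return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

        fit_lin = minimize(neg_loglik, x0=[3.0, 0.0], args=(dose, death))           # linear dose scale
        fit_log = minimize(neg_loglik, x0=[1.0, -1.0], args=(np.log(dose), death))  # log dose scale

        print(f"LD50, linear scale: {fit_lin.x[0]:.2f} Gy")
        print(f"LD50, log scale:    {np.exp(fit_log.x[0]):.2f} Gy")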

  15. Using scaling to compute moments of inertia of symmetric objects

    NASA Astrophysics Data System (ADS)

    Ricardo, Bernard

    2015-09-01

    Moment of inertia is a very important property in the study of rotational mechanics. The concept of moment of inertia is analogous to mass in linear motion, and its calculation is routinely done through integration. This paper provides an alternative way to compute moments of inertia of rigid bodies of regular shape using their symmetry properties. This approach is well suited to teaching rotational mechanics at the undergraduate level, as it does not require knowledge or application of calculus. The seven examples provided in this paper will help readers understand clearly how to use the method.
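
    As an illustration of the scaling-and-symmetry approach (a generic textbook example in the same spirit, not necessarily one of the paper's seven), consider a uniform rod of mass M and length L rotating about its centre. Dimensional analysis fixes the form I = c M L^2, and cutting the rod into two halves (each of mass M/2 and length L/2, with centres a distance L/4 from the axis) together with the parallel-axis theorem determines c without any integration:

        c M L^2 \;=\; 2\left[\, c\,\frac{M}{2}\left(\frac{L}{2}\right)^{2} + \frac{M}{2}\left(\frac{L}{4}\right)^{2} \right]
                 \;=\; \frac{c M L^2}{4} + \frac{M L^2}{16}
        \;\;\Longrightarrow\;\; c = \frac{1}{12}, \qquad I = \frac{1}{12} M L^2.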

  16. Nonlinear calculations of the time evolution of black hole accretion disks

    NASA Technical Reports Server (NTRS)

    Luo, C.

    1994-01-01

    Based on previous work on black hole accretion disks, I continue to explore disk dynamics using a finite difference method to solve the highly nonlinear, time-dependent alpha-disk equations. Here a radially zoned model is used to develop a computational scheme that accommodates the functional dependence of the viscosity parameter alpha on the disk scale height and/or surface density. This work builds on the author's previous studies of steady disk structure and the linear analysis of disk dynamics, with the aim of applying the results to X-ray emission from black hole candidates (e.g., multiple-state spectra, instabilities, QPOs).

  17. An assessment of an F2 or N2O4 atmospheric injection from an aborted space shuttle mission

    NASA Technical Reports Server (NTRS)

    Watson, R. T.; Smokler, P. E.; Demore, W. B.

    1978-01-01

    Assuming a linear relationship between the stratospheric loading of NOx and the magnitude of the ozone perturbation, the change in ozone expected to result from space shuttle ejection of N2O4 was calculated based on the ozone change that is predicted for the (much greater) NOx input that would accompany large-scale operations of SSTs. Stratospheric fluorine reactions were critically reviewed to evaluate the magnitude of fluorine-induced ozone destruction relative to the reduction that would be caused by addition of an equal amount of chlorine. The predicted effect on stratospheric ozone is vanishingly small.

  18. Finite Larmor radius effects on the (m = 2, n = 1) cylindrical tearing mode

    NASA Astrophysics Data System (ADS)

    Chen, Y.; Chowdhury, J.; Parker, S. E.; Wan, W.

    2015-04-01

    New field solvers are developed in the gyrokinetic code GEM [Chen and Parker, J. Comput. Phys. 220, 839 (2007)] to simulate low-n modes. A novel discretization is developed for the ion polarization term in the gyrokinetic vorticity equation. An eigenmode analysis with finite Larmor radius effects is developed to study the linear resistive tearing mode. The mode growth rate is shown to scale with resistivity as γ ∼ η^(1/3), the same as the semi-collisional regime in previous kinetic treatments [Drake and Lee, Phys. Fluids 20, 1341 (1977)]. Tearing mode simulations with gyrokinetic ions are verified with the eigenmode calculation.

  19. Graphene nanoFlakes with large spin.

    PubMed

    Wang, Wei L; Meng, Sheng; Kaxiras, Efthimios

    2008-01-01

    We investigate, using benzenoid graph theory and first-principles calculations, the magnetic properties of arbitrarily shaped finite graphene fragments to which we refer as graphene nanoflakes (GNFs). We demonstrate that the spin of a GNF depends on its shape due to topological frustration of the pi-bonds. For example, a zigzag-edged triangular GNF has a nonzero net spin, resembling an artificial ferrimagnetic atom, with the spin value scaling with its linear size. In general, the principle of topological frustration can be used to introduce large net spin and interesting spin distributions in graphene. These results suggest an avenue to nanoscale spintronics through the sculpting of graphene fragments.
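
    The linear scaling of the net spin with flake size quoted above can be connected to the standard sublattice-counting rule for bipartite lattices (Lieb's theorem); this is a general statement, not a formula taken from the paper:

        S \;=\; \frac{|N_A - N_B|}{2},

    where N_A and N_B are the numbers of carbon atoms on the two sublattices. For a zigzag-edged triangular flake the sublattice imbalance grows in proportion to the number of edge atoms, so the total spin grows with the linear size of the flake, as stated in the abstract.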

  20. Calculation of the distributed loads on the blades of individual multiblade propellers in axial flow using linear and nonlinear lifting surface theories

    NASA Technical Reports Server (NTRS)

    Pesetskaya, N. N.; Timofeev, I. YA.; Shipilov, S. D.

    1988-01-01

    In recent years much attention has been given to the development of methods and programs for the calculation of the aerodynamic characteristics of multiblade, saber-shaped air propellers. Most existing methods are based on the theory of lifting lines. Elsewhere, the theory of a lifting surface is used to calculate screw and lifting propellers. In this work, methods of discrete eddies are described for the calculation of the aerodynamic characteristics of propellers using the linear and nonlinear theories of lifting surfaces.

  1. Equilibrium Phase Behavior of the Square-Well Linear Microphase-Forming Model.

    PubMed

    Zhuang, Yuan; Charbonneau, Patrick

    2016-07-07

    We have recently developed a simulation approach to calculate the equilibrium phase diagram of particle-based microphase formers. Here, this approach is used to calculate the phase behavior of the square-well linear model for different strengths and ranges of the linear long-range repulsive component. The results are compared with various theoretical predictions for microphase formation. The analysis further allows us to better understand the mechanism for microphase formation in colloidal suspensions.

  2. UCODE_2005 and six other computer codes for universal sensitivity analysis, calibration, and uncertainty evaluation constructed using the JUPITER API

    USGS Publications Warehouse

    Poeter, Eileen E.; Hill, Mary C.; Banta, Edward R.; Mehl, Steffen; Christensen, Steen

    2006-01-01

    This report documents the computer codes UCODE_2005 and six post-processors. Together the codes can be used with existing process models to perform sensitivity analysis, data needs assessment, calibration, prediction, and uncertainty analysis. Any process model or set of models can be used; the only requirements are that models have numerical (ASCII or text only) input and output files, that the numbers in these files have sufficient significant digits, that all required models can be run from a single batch file or script, and that simulated values are continuous functions of the parameter values. Process models can include pre-processors and post-processors as well as one or more models related to the processes of interest (physical, chemical, and so on), making UCODE_2005 extremely powerful. An estimated parameter can be a quantity that appears in the input files of the process model(s), or a quantity used in an equation that produces a value that appears in the input files. In the latter situation, the equation is user-defined. UCODE_2005 can compare observations and simulated equivalents. The simulated equivalents can be any simulated value written in the process-model output files or can be calculated from simulated values with user-defined equations. The quantities can be model results, or dependent variables. For example, for ground-water models they can be heads, flows, concentrations, and so on. Prior, or direct, information on estimated parameters also can be considered. Statistics are calculated to quantify the comparison of observations and simulated equivalents, including a weighted least-squares objective function. In addition, data-exchange files are produced that facilitate graphical analysis. UCODE_2005 can be used fruitfully in model calibration through its sensitivity analysis capabilities and its ability to estimate parameter values that result in the best possible fit to the observations. Parameters are estimated using nonlinear regression: a weighted least-squares objective function is minimized with respect to the parameter values using a modified Gauss-Newton method or a double-dogleg technique. Sensitivities needed for the method can be read from files produced by process models that can calculate sensitivities, such as MODFLOW-2000, or can be calculated by UCODE_2005 using a more general, but less accurate, forward- or central-difference perturbation technique. Problems resulting from inaccurate sensitivities and solutions related to the perturbation techniques are discussed in the report. Statistics are calculated and printed for use in (1) diagnosing inadequate data and identifying parameters that probably cannot be estimated; (2) evaluating estimated parameter values; and (3) evaluating how well the model represents the simulated processes. Results from UCODE_2005 and codes RESIDUAL_ANALYSIS and RESIDUAL_ANALYSIS_ADV can be used to evaluate how accurately the model represents the processes it simulates. Results from LINEAR_UNCERTAINTY can be used to quantify the uncertainty of model simulated values if the model is sufficiently linear. Results from MODEL_LINEARITY and MODEL_LINEARITY_ADV can be used to evaluate model linearity and, thereby, the accuracy of the LINEAR_UNCERTAINTY results. UCODE_2005 can also be used to calculate nonlinear confidence and predictions intervals, which quantify the uncertainty of model simulated values when the model is not linear. 
CORFAC_PLUS can be used to produce factors that allow intervals to account for model intrinsic nonlinearity and small-scale variations in system characteristics that are not explicitly accounted for in the model or the observation weighting. The six post-processing programs are independent of UCODE_2005 and can use the results of other programs that produce the required data-exchange files. UCODE_2005 and the other six codes are intended for use on any computer operating system. The programs con
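
    As a compact illustration of the weighted least-squares Gauss-Newton iteration with perturbation (forward-difference) sensitivities described above, here is a generic sketch; the process model is a placeholder and this is not UCODE_2005's implementation, which adds damping, dog-leg logic and many safeguards:

        import numpy as np

        def simulate(params):
            """Placeholder process model: returns simulated equivalents of the observations."""
            a, b = params
            x = np.linspace(0.0, 5.0, 20)
            return a * np.exp(-b * x)

        def forward_diff_jacobian(f, p, rel_step=1e-5):
            """Forward-difference sensitivities d(simulated value)/d(parameter)."""
            f0 = f(p)
            J = np.zeros((f0.size, p.size))
            for j in range(p.size):
                dp = np.zeros_like(p)
                dp[j] = rel_step * max(abs(p[j]), 1.0)
                J[:, j] = (f(p + dp) - f0) / dp[j]
            return J

        def gauss_newton_wls(f, obs, weights, p0, n_iter=20, tol=1e-8):
            """Minimize sum_i w_i * (obs_i - f_i(p))^2 with plain Gauss-Newton steps."""
            p = np.array(p0, dtype=float)
            W = np.diag(weights)
            for _ in range(n_iter):
                r = obs - f(p)
                J = forward_diff_jacobian(f, p)
                step = np.linalg.solve(J.T @ W @ J, J.T @ W @ r)
                p = p + step
                if np.max(np.abs(step)) < tol:
                    break
            return p

        rng = np.random.default_rng(2)
        obs = simulate(np.array([2.0, 0.7])) + 0.01 * rng.standard_normal(20)
        p_hat = gauss_newton_wls(simulate, obs, weights=np.full(20, 1.0 / 0.01**2), p0=[1.0, 1.0])
        print("estimated parameters:", p_hat)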

  3. A Comparison of Curing Process-Induced Residual Stresses and Cure Shrinkage in Micro-Scale Composite Structures with Different Constitutive Laws

    NASA Astrophysics Data System (ADS)

    Li, Dongna; Li, Xudong; Dai, Jianfeng; Xi, Shangbin

    2018-02-01

    In this paper, three kinds of constitutive laws, elastic, "cure hardening instantaneously linear elastic (CHILE)" and viscoelastic, are used to predict curing-process-induced residual stress in thermoset polymer composites. A multi-physics coupled finite element analysis (FEA) model implementing the three approaches is established in COMSOL Multiphysics (Version 4.3b). The evolution of thermo-physical properties with temperature and degree of cure (DOC), together with cure shrinkage, is taken into account in all three models, which improves the accuracy of the numerical simulations. The three constitutive models are then implemented in a 3D micro-scale composite laminate structure. Comparison of the three sets of numerical results indicates that the elastic model produces large errors in residual stress and cure shrinkage, whereas the results calculated with the modified CHILE model are in excellent agreement with those estimated by the viscoelastic model.

  4. Isolating relativistic effects in large-scale structure

    NASA Astrophysics Data System (ADS)

    Bonvin, Camille

    2014-12-01

    We present a fully relativistic calculation of the observed galaxy number counts in the linear regime. We show that besides the density fluctuations and redshift-space distortions, various relativistic effects contribute to observations at large scales. These effects all have the same physical origin: they result from the fact that our coordinate system, namely the galaxy redshift and the incoming photons’ direction, is distorted by inhomogeneities in our Universe. We then discuss the impact of the relativistic effects on the angular power spectrum and on the two-point correlation function in configuration space. We show that the latter is very well adapted to isolate the relativistic effects since it naturally makes use of the symmetries of the different contributions. In particular, we discuss how the Doppler effect and the gravitational redshift distortions can be isolated by looking for a dipole in the cross-correlation function between a bright and a faint population of galaxies.

  5. Development of building energy asset rating using stock modelling in the USA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Na; Goel, Supriya; Makhmalbaf, Atefe

    2016-01-29

    The US Building Energy Asset Score helps building stakeholders quickly gain insight into the efficiency of building systems (envelope, electrical and mechanical systems). A robust, easy-to-understand 10-point scoring system was developed to facilitate an unbiased comparison of similar building types across the country. The Asset Score does not rely on a database or specific building baselines to establish a rating. Rather, distributions of energy use intensity (EUI) for various building use types were constructed using Latin hypercube sampling and converted to a series of stepped linear scales to score buildings. A score is calculated based on the modelled source EUI after adjusting for climate. A web-based scoring tool, which incorporates an analytical engine and a simulation engine, was developed to standardize energy modelling and reduce implementation cost. This paper discusses the methodology used to perform several hundred thousand building simulation runs and develop the scoring scales.
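
    A toy version of mapping a simulated EUI distribution onto a stepped linear 10-point scale might look as follows; the breakpoints (population deciles) and the synthetic EUI distribution are illustrative choices, not the published Asset Score scales:

        import numpy as np

        rng = np.random.default_rng(42)
        eui_population = rng.lognormal(mean=4.5, sigma=0.35, size=100_000)   # synthetic source EUI values

        # stepped scale: 10 linear segments between the deciles of the population EUI distribution
        edges = np.quantile(eui_population, np.linspace(0.0, 1.0, 11))

        def asset_score(eui):
            """Map an EUI to a 0-10 score: lower EUI (more efficient) gives a higher score."""
            eui = float(np.clip(eui, edges[0], edges[-1]))
            i = min(int(np.searchsorted(edges, eui, side="right")) - 1, 9)
            frac = (eui - edges[i]) / (edges[i + 1] - edges[i])   # position within the step
            return 10.0 - (i + frac)                              # linear within each step

        print(round(asset_score(np.median(eui_population)), 2))   # ~5 for a median building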

  6. Vessel Segmentation in Retinal Images Using Multi-scale Line Operator and K-Means Clustering.

    PubMed

    Saffarzadeh, Vahid Mohammadi; Osareh, Alireza; Shadgar, Bita

    2014-04-01

    Detecting blood vessels is a vital task in retinal image analysis. The task is more challenging with the presence of bright and dark lesions in retinal images. Here, a method is proposed to detect vessels in both normal and abnormal retinal fundus images based on their linear features. First, the negative impact of bright lesions is reduced by using K-means segmentation in a perceptive space. Then, a multi-scale line operator is utilized to detect vessels while ignoring some of the dark lesions, which have intensity structures different from the line-shaped vessels in the retina. The proposed algorithm is tested on two publicly available STARE and DRIVE databases. The performance of the method is measured by calculating the area under the receiver operating characteristic curve and the segmentation accuracy. The proposed method achieves 0.9483 and 0.9387 localization accuracy against STARE and DRIVE respectively.
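
    A basic line operator of the kind referred to above responds to the difference between the mean intensity along an oriented line and the mean over a surrounding window; the sketch below is a simplified single-scale version for illustration, not the authors' multi-scale implementation:

        import numpy as np
        from scipy import ndimage

        def line_strength(img, length=15, n_angles=12):
            """Single-scale line operator: for each pixel, the maximum over orientations of
            (mean intensity along an oriented line) minus (mean over a square window).
            Applied to the inverted green channel, vessels give strong positive responses."""
            img = np.asarray(img, dtype=float)
            win_mean = ndimage.uniform_filter(img, size=length)
            c = length // 2
            responses = []
            for k in range(n_angles):
                theta = np.pi * k / n_angles
                kern = np.zeros((length, length))
                for t in np.linspace(-c, c, length):              # 1-pixel-wide oriented line kernel
                    r = int(round(c + t * np.sin(theta)))
                    s = int(round(c + t * np.cos(theta)))
                    kern[r, s] = 1.0
                kern /= kern.sum()
                line_mean = ndimage.convolve(img, kern, mode="reflect")
                responses.append(line_mean - win_mean)
            return np.max(responses, axis=0)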

  7. Strength and scales of itinerant spin fluctuations in 3 d paramagnetic metals

    DOE PAGES

    Wysocki, Aleksander L.; Kutepov, Andrey; Antropov, Vladimir P.

    2016-10-10

    The full spin density fluctuations (SDF) spectra in 3d paramagnetic metals are analyzed from first principles using the linear response technique. Using the calculated complete wave vector and energy dependence of the dynamic spin susceptibility, we obtain the most important, but elusive, characteristic of SDF in solids: on-site spin correlator (SC). We demonstrate that the SDF have a mixed character consisting of interacting collective and single-particle excitations of similar strength spreading continuously over the entire Brillouin zone and a wide energy range up to femtosecond time scales. These excitations cannot be adiabatically separated and their intrinsically multiscale nature should always be taken into account for a proper description of metallic systems. Altogether, in all studied systems, despite the lack of local moment, we found a very large SC resulting in an effective fluctuating moment of the order of several Bohr magnetons.

  8. Strength and scales of itinerant spin fluctuations in 3 d paramagnetic metals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wysocki, Aleksander L.; Kutepov, Andrey; Antropov, Vladimir P.

    The full spin density fluctuations (SDF) spectra in 3d paramagnetic metals are analyzed from first principles using the linear response technique. Using the calculated complete wave vector and energy dependence of the dynamic spin susceptibility, we obtain the most important, but elusive, characteristic of SDF in solids: on-site spin correlator (SC). We demonstrate that the SDF have a mixed character consisting of interacting collective and single-particle excitations of similar strength spreading continuously over the entire Brillouin zone and a wide energy range up to femtosecond time scales. These excitations cannot be adiabatically separated and their intrinsically multiscale nature should always be taken into account for a proper description of metallic systems. Altogether, in all studied systems, despite the lack of local moment, we found a very large SC resulting in an effective fluctuating moment of the order of several Bohr magnetons.

  9. The Challenge of Electrochemical Ammonia Synthesis: A New Perspective on the Role of Nitrogen Scaling Relations.

    PubMed

    Montoya, Joseph H; Tsai, Charlie; Vojvodic, Aleksandra; Nørskov, Jens K

    2015-07-08

    The electrochemical production of NH3 under ambient conditions represents an attractive prospect for sustainable agriculture, but electrocatalysts that selectively reduce N2 to NH3 remain elusive. In this work, we present insights from DFT calculations that describe limitations on the low-temperature electrocatalytic production of NH3 from N2. In particular, we highlight the linear scaling relations of the adsorption energies of intermediates that can be used to model the overpotential requirements in this process. By using a two-variable description of the theoretical overpotential, we identify fundamental limitations on N2 reduction analogous to those present in processes such as oxygen evolution. Using these trends, we propose new strategies for catalyst design that may help guide the search for an electrocatalyst that can achieve selective N2 reduction. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  10. Generation of Magnetohydrodynamic Waves in Low Solar Atmospheric Flux Tubes by Photospheric Motions

    NASA Astrophysics Data System (ADS)

    Mumford, S. J.; Fedun, V.; Erdélyi, R.

    2015-01-01

    Recent ground- and space-based observations reveal the presence of small-scale motions between convection cells in the solar photosphere. In these regions, small-scale magnetic flux tubes are generated via the interaction of granulation motion and the background magnetic field. This paper studies the effects of these motions on magnetohydrodynamic (MHD) wave excitation from broadband photospheric drivers. Numerical experiments of linear MHD wave propagation in a magnetic flux tube embedded in a realistic gravitationally stratified solar atmosphere between the photosphere and the low chromosphere (above β = 1) are performed. Horizontal and vertical velocity field drivers mimic granular buffeting and solar global oscillations. A uniform torsional driver as well as Archimedean and logarithmic spiral drivers mimic observed torsional motions in the solar photosphere. The results are analyzed using a novel method for extracting the parallel, perpendicular, and azimuthal components of the perturbations, which caters to both the linear and non-linear cases. Employing this method yields the identification of the wave modes excited in the numerical simulations and enables a comparison of excited modes via velocity perturbations and wave energy flux. The wave energy flux distribution is calculated to enable the quantification of the relative strengths of excited modes. The torsional drivers primarily excite Alfvén modes (≈60% of the total flux) with small contributions from the slow kink mode, and, for the logarithmic spiral driver, small amounts of slow sausage mode. The horizontal and vertical drivers primarily excite slow kink or fast sausage modes, respectively, with small variations dependent upon flux surface radius.

  11. Adsorption of Poly(methyl methacrylate) on Concave Al2O3 Surfaces in Nanoporous Membranes

    PubMed Central

    Nunnery, Grady; Hershkovits, Eli; Tannenbaum, Allen; Tannenbaum, Rina

    2009-01-01

    The objective of this study was to determine the influence of polymer molecular weight and surface curvature on the adsorption of polymers onto concave surfaces. Poly(methyl methacrylate) (PMMA) of various molecular weights was adsorbed onto porous aluminum oxide membranes having various pore sizes, ranging from 32 to 220 nm. The surface coverage, expressed as repeat units per unit surface area, was observed to vary linearly with molecular weight for molecular weights below ~120 000 g/mol. The coverage was independent of molecular weight above this critical molar mass, as was previously reported for the adsorption of PMMA on convex surfaces. Furthermore, the coverage varied linearly with pore size. A theoretical model was developed to describe curvature-dependent adsorption by considering the density gradient that exists between the surface and the edge of the adsorption layer. According to this model, the density gradient of the adsorbed polymer segments scales inversely with particle size, while the total coverage scales linearly with particle size, in good agreement with experiment. These results show that the details of the adsorption of polymers onto concave surfaces with cylindrical geometries can be used to calculate molecular weight (below a critical molecular weight) if pore size is known. Conversely, pore size can also be determined with similar adsorption experiments. Most significantly, for polymers above a critical molecular weight, the precise molecular weight need not be known in order to determine pore size. Moreover, the adsorption model developed and validated in this work can also be used to predict coverage on surfaces with different geometries. PMID:19415910

  12. A scaling theory for linear systems

    NASA Technical Reports Server (NTRS)

    Brockett, R. W.; Krishnaprasad, P. S.

    1980-01-01

    A theory of scaling for rational (transfer) functions in terms of transformation groups is developed. Two different four-parameter scaling groups which play natural roles in studying linear systems are identified and the effect of scaling on Fisher information and related statistical measures in system identification are studied. The scalings considered include change of time scale, feedback, exponential scaling, magnitude scaling, etc. The scaling action of the groups studied is tied to the geometry of transfer functions in a rather strong way as becomes apparent in the examination of the invariants of scaling. As a result, the scaling process also provides new insight into the parameterization question for rational functions.

  13. Assessment of multi-frequency electromagnetic induction for determining soil moisture patterns at the hillslope scale

    NASA Astrophysics Data System (ADS)

    Tromp-van Meerveld, H. J.; McDonnell, J. J.

    2009-04-01

    Hillslopes are fundamental landscape units, yet represent a difficult scale for measurement as they are well beyond our traditional point-scale techniques. Here we present an assessment of electromagnetic induction (EM) as a potential rapid and non-invasive method to map soil moisture patterns at the hillslope scale. We test the new multi-frequency GEM-300 for spatially distributed soil moisture measurements at the well-instrumented Panola hillslope. EM-based apparent conductivity measurements were linearly related to soil moisture measured with the Aqua-pro capacitance sensor below a threshold conductivity and represented the temporal patterns in soil moisture well. During spring rainfall events that wetted only the surface soil layers, the apparent conductivity measurements explained the soil moisture dynamics at depth better than the surface soil moisture dynamics. All four EM frequencies (7.290, 9.090, 11.250, and 14.010 kHz) were highly correlated and linearly related to each other and could be used to predict soil moisture. This limited our ability to use the four different EM frequencies to obtain a soil moisture profile with depth. The apparent conductivity patterns represented the observed spatial soil moisture patterns well when individually fitted relationships between measured soil moisture and apparent conductivity were used for each measurement point. However, when the same (master) relationship was used for all measurement locations, the soil moisture patterns were smoothed and did not resemble the observed soil moisture patterns very well. In addition, the range in calculated soil moisture values was reduced compared to the observed soil moisture. Part of the smoothing was likely due to the much larger measurement area of the GEM-300 compared to the soil moisture measurements.

  14. Efficient mixing scheme for self-consistent all-electron charge density

    NASA Astrophysics Data System (ADS)

    Shishidou, Tatsuya; Weinert, Michael

    2015-03-01

    In standard ab initio density-functional theory calculations, the charge density ρ is gradually updated using the ``input'' and ``output'' densities of the current and previous iteration steps. To accelerate the convergence, Pulay mixing has been widely used with great success. It expresses an ``optimal'' input density ρopt and its ``residual'' Ropt by a linear combination of the densities of the iteration sequences. In large-scale metallic systems, however, the long range nature of Coulomb interaction often causes the ``charge sloshing'' phenomenon and significantly impacts the convergence. Two treatments, represented in reciprocal space, are known to suppress the sloshing: (i) the inverse Kerker metric for Pulay optimization and (ii) Kerker-type preconditioning in mixing Ropt. In all-electron methods, where the charge density does not have a converging Fourier representation, treatments equivalent or similar to (i) and (ii) have not been described so far. In this work, we show that, by going through the calculation of Hartree potential, one can accomplish the procedures (i) and (ii) without entering the reciprocal space. Test calculations are done with a FLAPW method.
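
    For reference, the reciprocal-space Kerker-preconditioned mixing that the real-space (Hartree-potential-based) procedure above is designed to reproduce can be written in a few lines; the 1D periodic toy example and the values of alpha and q0 below are illustrative:

        import numpy as np

        def kerker_mix(rho_in, rho_out, L=20.0, alpha=0.4, q0=1.0):
            """Linear mixing with Kerker preconditioning of the residual (1D periodic toy):
            rho_new(G) = rho_in(G) + alpha * G^2 / (G^2 + q0^2) * R(G).
            The small-G (long-wavelength) components that drive charge sloshing are damped;
            the G = 0 component (total charge) automatically receives zero weight."""
            n = rho_in.size
            G = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)
            R = np.fft.fft(rho_out - rho_in)
            precond = G**2 / (G**2 + q0**2)
            return rho_in + alpha * np.real(np.fft.ifft(precond * R))

        # one self-consistency step with a long-wavelength residual
        x = np.linspace(0.0, 20.0, 256, endpoint=False)
        rho_in = 1.0 + 0.10 * np.sin(2.0 * np.pi * x / 20.0)
        rho_out = 1.0 + 0.30 * np.sin(2.0 * np.pi * x / 20.0)
        rho_new = kerker_mix(rho_in, rho_out)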

  15. Computationally Efficient Multiconfigurational Reactive Molecular Dynamics

    PubMed Central

    Yamashita, Takefumi; Peng, Yuxing; Knight, Chris; Voth, Gregory A.

    2012-01-01

    It is a computationally demanding task to explicitly simulate the electronic degrees of freedom in a system to observe the chemical transformations of interest, while at the same time sampling the time and length scales required to converge statistical properties and thus reduce artifacts due to initial conditions, finite-size effects, and limited sampling. One solution that significantly reduces the computational expense consists of molecular models in which effective interactions between particles govern the dynamics of the system. If the interaction potentials in these models are developed to reproduce calculated properties from electronic structure calculations and/or ab initio molecular dynamics simulations, then one can calculate accurate properties at a fraction of the computational cost. Multiconfigurational algorithms model the system as a linear combination of several chemical bonding topologies to simulate chemical reactions, also sometimes referred to as “multistate”. These algorithms typically utilize energy and force calculations already found in popular molecular dynamics software packages, thus facilitating their implementation without significant changes to the structure of the code. However, the evaluation of energies and forces for several bonding topologies per simulation step can lead to poor computational efficiency if redundancy is not efficiently removed, particularly with respect to the calculation of long-ranged Coulombic interactions. This paper presents accurate approximations (effective long-range interaction and resulting hybrid methods) and multiple-program parallelization strategies for the efficient calculation of electrostatic interactions in reactive molecular simulations. PMID:25100924
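
    A stripped-down two-state illustration of the "linear combination of bonding topologies" idea is given below; real multistate reactive MD codes handle many states and the long-range electrostatics discussed above, so this is only a conceptual sketch:

        import numpy as np

        def two_state_evb(E1, E2, V12):
            """Diagonalize a 2x2 EVB Hamiltonian built from two bonding topologies.

            E1, E2 : diabatic (topology) energies
            V12    : coupling between the two topologies
            Returns the ground-state energy and expansion coefficients; forces follow
            from the Hellmann-Feynman theorem as c_i c_j dH_ij/dR combinations."""
            H = np.array([[E1, V12],
                          [V12, E2]])
            evals, evecs = np.linalg.eigh(H)
            return evals[0], evecs[:, 0]

        E0, c = two_state_evb(E1=-10.0, E2=-9.0, V12=-1.5)
        print(f"ground-state energy {E0:.3f}, topology weights {np.round(c**2, 3)}")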

  16. Scalability improvements to NRLMOL for DFT calculations of large molecules

    NASA Astrophysics Data System (ADS)

    Diaz, Carlos Manuel

    Advances in high performance computing (HPC) have provided a way to treat large, computationally demanding tasks using thousands of processors. With the development of more powerful HPC architectures, the need to create efficient and scalable code has grown more important. Electronic structure calculations are valuable in understanding experimental observations and are routinely used for new materials predictions. For electronic structure calculations, the memory and computation time grow with system size; memory requirements scale as N^2, where N is the number of atoms. While recent advances in HPC offer platforms with large numbers of cores, the limited amount of memory available on a given node and the poor scalability of the electronic structure code hinder efficient usage of these platforms. This thesis presents some developments to overcome these bottlenecks in order to study large systems. These developments, which are implemented in the NRLMOL electronic structure code, involve the use of sparse matrix storage formats and linear algebra with sparse and distributed matrices. These developments, along with other related work, now allow ground-state density functional calculations using up to 25,000 basis functions and excited-state calculations using up to 17,000 basis functions while utilizing all cores on a node. An example on a light-harvesting triad molecule is described. Finally, future plans to further improve the scalability are presented.

  17. Quantifying Melt Ponds in the Beaufort MIZ using Linear Support Vector Machines from High Resolution Panchromatic Images

    NASA Astrophysics Data System (ADS)

    Ortiz, M.; Graber, H. C.; Wilkinson, J.; Nyman, L. M.; Lund, B.

    2017-12-01

    Much work has been done on determining changes in summer ice albedo and the morphological properties of melt ponds, such as depth, shape and distribution, using in-situ measurements and satellite-based sensors. Although these studies represent much pioneering work in this area, they still lack sufficient spatial and temporal coverage. We present a prototype algorithm using Linear Support Vector Machines (LSVMs) designed to quantify the evolution of melt pond fraction from a recently government-declassified high-resolution panchromatic optical dataset. The study area of interest lies within the Beaufort marginal ice zone (MIZ), where several in-situ instruments were deployed by the British Antarctic Survey jointly with the MIZ Program from April to September 2014. The LSVM uses four-dimensional feature data from the intensity image itself and from various textures calculated with a modified first-order histogram technique using the probability density of occurrences. We explore both the temporal evolution of melt ponds and spatial statistics such as pond fraction, pond area, and pond number density, among others. We also introduce a linear regression model that can potentially be used to estimate average pond area by ingesting several melt pond statistics and shape parameters.

  18. Explicit crystal host effects on excited state properties of linear polyacenes: towards a room-temperature maser

    NASA Astrophysics Data System (ADS)

    Charlton, Robert; Bogatko, Stuart; Zuehlsdorff, Tim; Hine, Nicholas; Horsfield, Andrew; Haynes, Peter

    Maser technology has been held back for decades by the impracticality of the operating conditions of traditional masing devices, such as cryogenic freezing and strong magnetic fields. Recently it has been experimentally demonstrated that pentacene in p-terphenyl can act as a viable solid-state room-temperature maser by exploiting the alignment of the low-lying singlet and triplet excited states of pentacene. To understand the operation of this device from first principles, an ab initio study of the excitonic properties of pentacene in p-terphenyl has been carried out using time-dependent density functional theory (TDDFT), implemented in the linear-scaling ONETEP software (www.onetep.org). In particular, we focus on the impact that the wider crystal has on the localised pentacene excitations by performing an explicit DFT treatment of the p-terphenyl environment. We demonstrate the importance of explicit crystal host effects in calculating the excitation energies of pentacene in p-terphenyl, providing important information for the operation of the maser. We then use this same approach to test the viability of other linear polyacenes as maser candidates as a screening step before experimental testing.

  19. Toward Optimal Manifold Hashing via Discrete Locally Linear Embedding.

    PubMed

    Rongrong Ji; Hong Liu; Liujuan Cao; Di Liu; Yongjian Wu; Feiyue Huang

    2017-11-01

    Binary code learning, also known as hashing, has received increasing attention in large-scale visual search. By transforming high-dimensional features to binary codes, the original Euclidean distance is approximated via Hamming distance. More recently, it has been advocated that it is the manifold distance, rather than the Euclidean distance, that should be preserved in the Hamming space. However, directly preserving the manifold structure by hashing remains an open problem. In particular, it first requires building the local linear embedding in the original feature space and then quantizing that embedding to binary codes. Such two-step coding is problematic and suboptimal. Moreover, the off-line learning is extremely time- and memory-consuming, since it requires calculating the similarity matrix of the original data. In this paper, we propose a novel hashing algorithm, termed discrete locally linear embedding hashing (DLLH), which addresses the above challenges. DLLH directly reconstructs the manifold structure in the Hamming space, learning optimal hash codes that maintain the local linear relationship of data points. To learn discrete locally linear embedding codes, we further propose a discrete optimization algorithm with an iterative parameter-updating scheme. Moreover, an anchor-based acceleration scheme, termed Anchor-DLLH, is introduced, which approximates the large similarity matrix by the product of two low-rank matrices. Experimental results on three widely used benchmark data sets, i.e., CIFAR10, NUS-WIDE, and YouTube Face, show the superior performance of the proposed DLLH over state-of-the-art approaches.

  20. An O(√nL) primal-dual affine scaling algorithm for linear programming

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Siming

    1994-12-31

    We present a new primal-dual affine scaling algorithm for linear programming. The search direction of the algorithm is a combination of the classical affine scaling direction of Dikin and a more recent affine scaling direction of Jansen, Roos and Terlaky. The algorithm has an iteration complexity of O(√n L), compared with the O(nL) complexity of the Jansen, Roos and Terlaky algorithm.
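
    For context, the classical primal affine-scaling (Dikin) direction that enters the combined search direction can be sketched in a few lines of linear algebra; this is the textbook primal variant on a toy problem, not the primal-dual algorithm of the paper:

        import numpy as np

        def dikin_step(A, c, x, step_frac=0.9):
            """One primal affine-scaling (Dikin) iteration for  min c.x  s.t.  Ax = b, x > 0.
            x must be a strictly feasible interior point."""
            X2 = np.diag(x**2)
            w = np.linalg.solve(A @ X2 @ A.T, A @ X2 @ c)   # dual estimate
            d = -X2 @ (c - A.T @ w)                         # feasible descent direction (A d = 0)
            neg = d < 0
            alpha = step_frac * np.min(-x[neg] / d[neg]) if np.any(neg) else 1.0
            return x + alpha * d                            # stay strictly inside x > 0

        # toy LP: min -x1 - 2*x2  subject to  x1 + x2 + s = 4,  all variables > 0
        A = np.array([[1.0, 1.0, 1.0]])
        c = np.array([-1.0, -2.0, 0.0])
        x = np.array([1.0, 1.0, 2.0])                       # strictly feasible start
        for _ in range(30):
            x = dikin_step(A, c, x)
        print("approximate optimum:", np.round(x, 4))       # expect x2 -> 4, others -> 0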

  1. An efficient and near linear scaling pair natural orbital based local coupled cluster method.

    PubMed

    Riplinger, Christoph; Neese, Frank

    2013-01-21

    In previous publications, it was shown that an efficient local coupled cluster method with single and double excitations can be based on the concept of pair natural orbitals (PNOs) [F. Neese, A. Hansen, and D. G. Liakos, J. Chem. Phys. 131, 064103 (2009)]. The resulting local pair natural orbital coupled-cluster singles and doubles (LPNO-CCSD) method has since been proven to be highly reliable and efficient. For large molecules, the number of amplitudes to be determined is reduced by a factor of 10^5-10^6 relative to a canonical CCSD calculation on the same system with the same basis set. In the original method, the PNOs were expanded in the set of canonical virtual orbitals and single excitations were not truncated. This led to a number of fifth-order scaling steps that eventually rendered the method computationally expensive for large molecules (e.g., >100 atoms). In the present work, these limitations are overcome by a complete redesign of the LPNO-CCSD method. The new method is based on the combination of the concepts of PNOs and projected atomic orbitals (PAOs). Thus, each PNO is expanded in a set of PAOs that in turn belong to a given electron-pair-specific domain. In this way, it is possible to fully exploit locality while maintaining the extremely high compactness of the original LPNO-CCSD wavefunction. No terms are dropped from the CCSD equations and domains are chosen conservatively. The correlation energy loss due to the domains remains below 0.05%, which typically implies 15-20, and occasionally up to 30, atoms per domain on average. The new method has been given the acronym DLPNO-CCSD ("domain based LPNO-CCSD"). The method is nearly linear scaling with respect to system size. The original LPNO-CCSD method had three adjustable truncation thresholds that were chosen conservatively and do not need to be changed for actual applications. In the present treatment, no additional truncation parameters have been introduced. Any additional truncation is performed on the basis of the three original thresholds. There are no real-space cutoffs. Single excitations are truncated using singles-specific natural orbitals. Pairs are prescreened according to a multipole expansion of a pair correlation energy estimate based on local orbital specific virtual orbitals (LOSVs). Like its LPNO-CCSD predecessor, the method is completely of black-box character and does not require any user adjustments. It is shown here that DLPNO-CCSD is as accurate as LPNO-CCSD while leading to computational savings exceeding one order of magnitude for larger systems. The largest calculations reported here featured >8800 basis functions and >450 atoms. In all larger test calculations done so far, the LPNO-CCSD step took less time than the preceding Hartree-Fock calculation, provided no approximations have been introduced in the latter. Thus, based on the present development, reliable CCSD calculations on large molecules with unprecedented efficiency and accuracy are realized.

  2. Computer-aided mass detection in mammography: False positive reduction via gray-scale invariant ranklet texture features

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Masotti, Matteo; Lanconelli, Nico; Campanini, Renato

    In this work, gray-scale invariant ranklet texture features are proposed for false positive reduction (FPR) in computer-aided detection (CAD) of breast masses. Two main considerations are at the basis of this proposal. First, false positive (FP) marks surviving our previous CAD system seem to be characterized by specific texture properties that can be used to discriminate them from masses. Second, our previous CAD system achieves invariance to linear/nonlinear monotonic gray-scale transformations by encoding regions of interest into ranklet images through the ranklet transform, an image transformation similar to the wavelet transform, yet dealing with pixels' ranks rather than with their gray-scale values. Therefore, the new FPR approach proposed herein defines a set of texture features which are calculated directly from the ranklet images corresponding to the regions of interest surviving our previous CAD system, hence, ranklet texture features; then, a support vector machine (SVM) classifier is used for discrimination. As a result of this approach, texture-based information is used to discriminate FP marks surviving our previous CAD system; at the same time, invariance to linear/nonlinear monotonic gray-scale transformations of the new CAD system is guaranteed, as ranklet texture features are calculated from ranklet images that have this property themselves by construction. To emphasize the gray-scale invariance of both the previous and new CAD systems, training and testing are carried out without any in-between parameters' adjustment on mammograms having different gray-scale dynamics; in particular, training is carried out on analog digitized mammograms taken from a publicly available digital database, whereas testing is performed on full-field digital mammograms taken from an in-house database. Free-response receiver operating characteristic (FROC) curve analysis of the two CAD systems demonstrates that the new approach achieves a higher reduction of FP marks when compared to the previous one. Specifically, at 60%, 65%, and 70% per-mammogram sensitivity, the new CAD system achieves 0.50, 0.68, and 0.92 FP marks per mammogram, whereas at 70%, 75%, and 80% per-case sensitivity it achieves 0.37, 0.48, and 0.71 FP marks per mammogram, respectively. Conversely, at the same sensitivities, the previous CAD system reached 0.71, 0.87, and 1.15 FP marks per mammogram, and 0.57, 0.73, and 0.92 FPs per mammogram. Also, statistical significance of the difference between the two per-mammogram and per-case FROC curves is demonstrated by the p-value<0.001 returned by jackknife FROC analysis performed on the two CAD systems.

  3. Growth rate of the linear Richtmyer-Meshkov instability when a shock is reflected

    NASA Astrophysics Data System (ADS)

    Wouchuk, J. G.

    2001-05-01

    An analytic model is presented to calculate the growth rate of the linear Richtmyer-Meshkov instability in the shock-reflected case. The model allows us to calculate the asymptotic contact surface perturbation velocity for any value of the incident shock intensity, arbitrary fluids compressibilities, and for any density ratio at the interface. The growth rate comes out as the solution of a system of two coupled functional equations and is expressed formally as an infinite series. The distinguishing feature of the procedure shown here is the high speed of convergence of the intermediate calculations. There is excellent agreement with previous linear simulations and experiments done in shock tubes.

  4. Hele-Shaw scaling properties of low-contrast Saffman-Taylor flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DiFrancesco, M. W.; Maher, J. V.

    1989-07-01

    We have measured variations of Saffman-Taylor flows by changing dimensionless surface tension B alone and by changing B in conjunction with changes in dimensionless viscosity contrast A. Our low-aspect-ratio cell permits close study of the linear- and early nonlinear-flow regimes. Our critical binary-liquid sample allows study of very low values of A. The predictions of linear stability analysis work well for predicting which length scales are important, but discrepancies are observed for growth rates. We observe an empirical scaling law for growth of the Fourier modes of the patterns in the linear regime. The observed front propagation velocity for side-wall disturbances is constantly 2±1 in dimensionless units, a value consistent with the predictions of Langer and of van Saarloos. Patterns in both the linear and nonlinear regimes collapse impressively under the scaling suggested by the Hele-Shaw equations. Violations of scaling due to wetting phenomena are not evident here, presumably because the wetting properties of the two phases of the critical binary liquid are so similar; thus direct comparison with large-scale Hele-Shaw simulations should be meaningful.

  5. Influence of landscape-scale factors in limiting brook trout populations in Pennsylvania streams

    USGS Publications Warehouse

    Kocovsky, P.M.; Carline, R.F.

    2006-01-01

    Landscapes influence the capacity of streams to produce trout through their effect on water chemistry and other factors at the reach scale. Trout abundance also fluctuates over time; thus, to thoroughly understand how spatial factors at landscape scales affect trout populations, one must assess the changes in populations over time to provide a context for interpreting the importance of spatial factors. We used data from the Pennsylvania Fish and Boat Commission's fisheries management database to investigate spatial factors that affect the capacity of streams to support brook trout Salvelinus fontinalis and to provide models useful for their management. We assessed the relative importance of spatial and temporal variation by calculating variance components and comparing relative standard errors for spatial and temporal variation. We used binary logistic regression to predict the presence of harvestable-length brook trout and multiple linear regression to assess the mechanistic links between landscapes and trout populations and to predict population density. The variance in trout density among streams was equal to or greater than the temporal variation for several streams, indicating that differences among sites affect population density. Logistic regression models correctly predicted the absence of harvestable-length brook trout in 60% of validation samples. The r²-value for the linear regression model predicting density was 0.3, indicating low predictive ability. Both logistic and linear regression models supported buffering capacity against acid episodes as an important mechanistic link between landscapes and trout populations. Although our models fail to predict trout densities precisely, their success at elucidating the mechanistic links between landscapes and trout populations, in concert with the importance of spatial variation, increases our understanding of factors affecting brook trout abundance and will help managers and private groups to protect and enhance populations of wild brook trout. © Copyright by the American Fisheries Society 2006.

  6. Gel electrophoresis of linear and star-branched DNA

    NASA Astrophysics Data System (ADS)

    Lau, Henry W.; Archer, Lynden A.

    2011-12-01

    The electrophoretic mobility of double-stranded DNA in polyacrylamide gel is investigated using an activated hopping model for the transport of a charged object within a heterogeneous medium. The model is premised upon a representation of the DNA path through the gel matrix as a series of traps with alternating large and small cross sections. Calculations of the trap dimensions from gel data show that the path imposes varying degrees of confinement upon migrating analytes, which retard their forward motion in a size-dependent manner. An expression derived for DNA mobility is shown to provide accurate predictions for the dynamics of linear DNA (67-622 bp) in gels of multiple concentrations. For star-branched DNA, the incorporation within the model of a length scale previously proposed to account for analyte architecture [Yuan et al., Anal. Chem. 78, 6179 (2006), doi:10.1021/ac060414w] leads to mobility predictions that compare well with experimental results for a wide range of DNA shapes and molecular weights.

  7. Reinvestigating the surface and bulk electronic properties of Cd3As2

    NASA Astrophysics Data System (ADS)

    Roth, S.; Lee, H.; Sterzi, A.; Zacchigna, M.; Politano, A.; Sankar, R.; Chou, F. C.; Di Santo, G.; Petaccia, L.; Yazyev, O. V.; Crepaldi, A.

    2018-04-01

    Cd3As2 is widely considered among the few materials realizing the three-dimensional (3D) Dirac semimetal phase. Linearly dispersing states, responsible for the ultrahigh charge mobility, have been reported by several angle-resolved photoelectron spectroscopy (ARPES) investigations. However, in spite of the general agreement between these studies, some details are at odds. From scanning tunneling microscopy and optical experiments under magnetic field, a puzzling scenario emerges in which multiple states show linear dispersion at different energy scales. Here, we solve this apparent controversy by reinvestigating the electronic properties of the (112) surface of Cd3As2 by combining ARPES and theoretical calculations. We disentangle the presence of massive and massless metallic bulk and surface states, characterized by different symmetries. Our systematic experimental and theoretical study clarifies the complex band dispersion of Cd3As2 by extending the simplistic 3D Dirac semimetal model to account for multiple bulk and surface states crossing the Fermi level, and thus contributing to the unique material transport properties.

  8. First-principles study on stability of transition metal solutes in aluminum by analyzing the underlying forces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Wei; Xu, Yichun; Li, Xiangyan

    2015-05-07

    Although there have been some investigations of the behavior of solutes in metals under strain, the underlying mechanism of how strain changes the stability of a solute is still unknown. To gain such knowledge, first-principles calculations are performed on the substitution energy of transition metal solutes in an fcc Al host under rhombohedral strain (RS). Our results show that under RS, the substitution energy decreases linearly with increasing outermost d radius r_d of the solute due to Pauli repulsion. The screened Coulomb interaction increases or decreases the substitution energy of a solute depending on whether its Pauling electronegativity φ_P is smaller or larger than that of Al under RS. This paper verifies a linear relation for the change in substitution energy versus r_d and φ_P under RS, which might be instructive for the composition design of long-life alloys serving under high-stress conditions.

  9. A DERATING METHOD FOR THERAPEUTIC APPLICATIONS OF HIGH INTENSITY FOCUSED ULTRASOUND

    PubMed Central

    Bessonova, O.V.; Khokhlova, V.A.; Canney, M.S.; Bailey, M.R.; Crum, L.A.

    2010-01-01

    Current methods of determining high intensity focused ultrasound (HIFU) fields in tissue rely on extrapolation of measurements in water assuming linear wave propagation both in water and in tissue. Neglecting nonlinear propagation effects in the derating process can result in significant errors. In this work, a new method based on scaling the source amplitude is introduced to estimate focal parameters of nonlinear HIFU fields in tissue. Focal values of acoustic field parameters in absorptive tissue are obtained from a numerical solution to a KZK-type equation and are compared to those simulated for propagation in water. Focal waveforms, peak pressures, and intensities are calculated over a wide range of source outputs and linear focusing gains. Our modeling indicates, that for the high gain sources which are typically used in therapeutic medical applications, the focal field parameters derated with our method agree well with numerical simulation in tissue. The feasibility of the derating method is demonstrated experimentally in excised bovine liver tissue. PMID:20582159

  10. A derating method for therapeutic applications of high intensity focused ultrasound

    NASA Astrophysics Data System (ADS)

    Bessonova, O. V.; Khokhlova, V. A.; Canney, M. S.; Bailey, M. R.; Crum, L. A.

    2010-05-01

    Current methods of determining high intensity focused ultrasound (HIFU) fields in tissue rely on extrapolation of measurements in water assuming linear wave propagation both in water and in tissue. Neglecting nonlinear propagation effects in the derating process can result in significant errors. A new method based on scaling the source amplitude is introduced to estimate focal parameters of nonlinear HIFU fields in tissue. Focal values of acoustic field parameters in absorptive tissue are obtained from a numerical solution to a KZK-type equation and are compared to those simulated for propagation in water. Focal waveforms, peak pressures, and intensities are calculated over a wide range of source outputs and linear focusing gains. Our modeling indicates that for the high gain sources which are typically used in therapeutic medical applications, the focal field parameters derated with our method agree well with numerical simulation in tissue. The feasibility of the derating method is demonstrated experimentally in excised bovine liver tissue.

  11. A DERATING METHOD FOR THERAPEUTIC APPLICATIONS OF HIGH INTENSITY FOCUSED ULTRASOUND.

    PubMed

    Bessonova, O V; Khokhlova, V A; Canney, M S; Bailey, M R; Crum, L A

    2010-01-01

    Current methods of determining high intensity focused ultrasound (HIFU) fields in tissue rely on extrapolation of measurements in water assuming linear wave propagation both in water and in tissue. Neglecting nonlinear propagation effects in the derating process can result in significant errors. In this work, a new method based on scaling the source amplitude is introduced to estimate focal parameters of nonlinear HIFU fields in tissue. Focal values of acoustic field parameters in absorptive tissue are obtained from a numerical solution to a KZK-type equation and are compared to those simulated for propagation in water. Focal waveforms, peak pressures, and intensities are calculated over a wide range of source outputs and linear focusing gains. Our modeling indicates that for the high gain sources which are typically used in therapeutic medical applications, the focal field parameters derated with our method agree well with numerical simulation in tissue. The feasibility of the derating method is demonstrated experimentally in excised bovine liver tissue.

  12. Use of infrared spectroscopy for the determination of electronegativity of rare earth elements.

    PubMed

    Frost, Ray L; Erickson, Kristy L; Weier, Matt L; McKinnon, Adam R; Williams, Peter A; Leverett, Peter

    2004-07-01

    Infrared spectroscopy has been used to study a series of synthetic agardite minerals. Four OH stretching bands are observed at around 3568, 3482, 3362, and 3296 cm(-1). The first band is assigned to zeolitic, non-hydrogen-bonded water. The band at 3296 cm(-1) is assigned to strongly hydrogen-bonded water with an H bond distance of 2.72 Å. The water in agardites is better described as structured water and not as zeolitic water. Two bands at around 999 and 975 cm(-1) are assigned to OH deformation modes. Two sets of AsO symmetric stretching vibrations were found and assigned to the vibrational modes of AsO(4) and HAsO(4) units. Linear relationships between positions of infrared bands associated with bonding to the OH units and the electronegativity of the rare earth elements were derived, with correlation coefficients >0.92. These linear functions were then used to calculate the electronegativity of Eu, for which a value of 1.1808 on the Pauling scale was found.
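
    The electronegativity estimate described above follows from inverting a linear calibration of band position against known Pauling values. A minimal sketch of that kind of fit, using hypothetical band positions and electronegativities (the numbers below are placeholders, not the paper's data):

```python
import numpy as np

# Hypothetical calibration data: OH-related band position (cm^-1) for agardites
# of rare earths with known Pauling electronegativities (placeholder values).
electronegativity = np.array([1.10, 1.12, 1.14, 1.17, 1.20])
band_position = np.array([3570.1, 3569.2, 3568.4, 3567.1, 3565.9])  # cm^-1

# Linear fit: band position as a function of electronegativity.
slope, intercept = np.polyfit(electronegativity, band_position, 1)
r = np.corrcoef(electronegativity, band_position)[0, 1]
print(f"correlation coefficient = {r:.3f}")

# Invert the calibration for a measured Eu band position (placeholder value).
eu_band = 3568.0
eu_chi = (eu_band - intercept) / slope
print(f"estimated Pauling electronegativity of Eu ~ {eu_chi:.3f}")
```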

  13. Communication: Modeling charge-sign asymmetric solvation free energies with nonlinear boundary conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bardhan, Jaydeep P.; Knepley, Matthew G.

    2014-10-07

    We show that charge-sign-dependent asymmetric hydration can be modeled accurately using linear Poisson theory after replacing the standard electric-displacement boundary condition with a simple nonlinear boundary condition. Using a single multiplicative scaling factor to determine atomic radii from molecular dynamics Lennard-Jones parameters, the new model accurately reproduces MD free-energy calculations of hydration asymmetries for: (i) monatomic ions, (ii) titratable amino acids in both their protonated and unprotonated states, and (iii) the Mobley “bracelet” and “rod” test problems [D. L. Mobley, A. E. Barber II, C. J. Fennell, and K. A. Dill, “Charge asymmetries in hydration of polar solutes,” J. Phys. Chem. B 112, 2405–2414 (2008)]. Remarkably, the model also justifies the use of linear response expressions for charging free energies. Our boundary-element method implementation demonstrates the ease with which other continuum-electrostatic solvers can be extended to include asymmetry.

  14. Skewness in large-scale structure and non-Gaussian initial conditions

    NASA Technical Reports Server (NTRS)

    Fry, J. N.; Scherrer, Robert J.

    1994-01-01

    We compute the skewness of the galaxy distribution arising from the nonlinear evolution of arbitrary non-Gaussian initial conditions to second order in perturbation theory including the effects of nonlinear biasing. The result contains a term identical to that for a Gaussian initial distribution plus terms which depend on the skewness and kurtosis of the initial conditions. The results are model dependent; we present calculations for several toy models. At late times, the leading contribution from the initial skewness decays away relative to the other terms and becomes increasingly unimportant, but the contribution from initial kurtosis, previously overlooked, has the same time dependence as the Gaussian terms. Observations of a linear dependence of the normalized skewness on the rms density fluctuation therefore do not necessarily rule out initially non-Gaussian models. We also show that with non-Gaussian initial conditions the first correction to linear theory for the mean square density fluctuation is larger than for Gaussian models.

  15. Experimental and numerical simulation of passive decay heat removal by sump cooling after core melt down

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Knebel, J.U.; Kuhn, D.; Mueller, U.

    1997-12-01

    This article presents the basic physical phenomena and scaling criteria of passive decay heat removal from a large coolant pool by single-phase and two-phase natural circulation. The physical significance of the dimensionless similarity groups derived is evaluated. The above results are applied to the SUCO program that is performed at the Forschungszentrum Karlsruhe. The SUCO program is a three-step series of scaled model experiments investigating the possibility of a sump cooling concept for future light water reactors. The sump cooling concept is based on passive safety features within the containment. The work is supported by the German utilities and the Siemens AG. The article gives results of temperature and velocity measurements in the 1:20 linearly scaled SUCOS-2D test facility. The experiments are backed up by numerical calculations using the commercial software package Fluent. Finally, using the similarity analysis from above, the experimental results of the model geometry are scaled-up to the conditions in the prototype, allowing a first statement with regard to the feasibility of the sump cooling concept. 11 refs., 9 figs., 3 tabs.

  16. Scaling Optimization of the SIESTA MHD Code

    NASA Astrophysics Data System (ADS)

    Seal, Sudip; Hirshman, Steven; Perumalla, Kalyan

    2013-10-01

    SIESTA is a parallel three-dimensional plasma equilibrium code capable of resolving magnetic islands at high spatial resolutions for toroidal plasmas. Originally designed to exploit small-scale parallelism, SIESTA has now been scaled to execute efficiently over several thousands of processors P. This scaling improvement was accomplished with minimal intrusion to the execution flow of the original version. First, the efficiency of the iterative solutions was improved by integrating the parallel tridiagonal block solver code BCYCLIC. Krylov-space generation in GMRES was then accelerated using a customized parallel matrix-vector multiplication algorithm. Novel parallel Hessian generation algorithms were integrated and memory access latencies were dramatically reduced through loop nest optimizations and data layout rearrangement. These optimizations sped up equilibria calculations by factors of 30-50. It is possible to compute solutions with granularity N/P near unity on extremely fine radial meshes (N > 1024 points). Grid separation in SIESTA, which manifests itself primarily in the resonant components of the pressure far from rational surfaces, is strongly suppressed by finer meshes. Large problem sizes of up to 300 K simultaneous non-linear coupled equations have been solved on the NERSC supercomputers. Work supported by U.S. DOE under Contract DE-AC05-00OR22725 with UT-Battelle, LLC.

  17. Two-Time Scale Virtual Sensor Design for Vibration Observation of a Translational Flexible-Link Manipulator Based on Singular Perturbation and Differential Games

    PubMed Central

    Ju, Jinyong; Li, Wei; Wang, Yuqiao; Fan, Mengbao; Yang, Xuefeng

    2016-01-01

    Effective feedback control requires all state variable information of the system. However, in the translational flexible-link manipulator (TFM) system, it is unrealistic to measure the vibration signals and their time derivatives at every point of the TFM with an unlimited number of sensors. Taking into account the rigid-flexible coupling between the global motion of the rigid base and the elastic vibration of the flexible-link manipulator, a two-time scale virtual sensor, comprising a speed observer and a vibration observer, is designed to estimate the vibration signals of the TFM and their time derivatives; the speed observer and the vibration observer are designed separately for the slow and fast subsystems, which are decomposed from the dynamic model of the TFM by singular perturbation. Additionally, based on linear-quadratic differential games, the observer gains of the two-time scale virtual sensor are optimized with the aim of minimizing the estimation error while keeping the observer stable. Finally, numerical calculation and experiment verify the efficiency of the designed two-time scale virtual sensor. PMID:27801840

  18. Implicitly restarted Arnoldi/Lanczos methods for large scale eigenvalue calculations

    NASA Technical Reports Server (NTRS)

    Sorensen, Danny C.

    1996-01-01

    Eigenvalues and eigenfunctions of linear operators are important to many areas of applied mathematics. The ability to approximate these quantities numerically is becoming increasingly important in a wide variety of applications. This increasing demand has fueled interest in the development of new methods and software for the numerical solution of large-scale algebraic eigenvalue problems. In turn, the existence of these new methods and software, along with the dramatically increased computational capabilities now available, has enabled the solution of problems that would not even have been posed five or ten years ago. Until very recently, software for large-scale nonsymmetric problems was virtually non-existent. Fortunately, the situation is improving rapidly. The purpose of this article is to provide an overview of the numerical solution of large-scale algebraic eigenvalue problems. The focus will be on a class of methods called Krylov subspace projection methods. The well-known Lanczos method is the premier member of this class. The Arnoldi method generalizes the Lanczos method to the nonsymmetric case. A recently developed variant of the Arnoldi/Lanczos scheme called the Implicitly Restarted Arnoldi Method is presented here in some depth. This method is highlighted because of its suitability as a basis for software development.
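
    For orientation, the Implicitly Restarted Arnoldi Method described above is the algorithm behind the ARPACK library, which SciPy exposes through scipy.sparse.linalg.eigs. A minimal usage sketch on a stand-in sparse test matrix (not a problem from the article):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigs

# Stand-in test problem: a large sparse matrix with a known dominant spectrum
# (diagonal 1..n plus a weak random off-diagonal perturbation).
n = 5000
A = sp.diags(np.arange(1.0, n + 1.0)) + 1e-3 * sp.random(n, n, density=1e-4, random_state=0)

# Implicitly Restarted Arnoldi (ARPACK, via SciPy): a few eigenvalues of largest
# magnitude, using only matrix-vector products with A.
vals, vecs = eigs(A.tocsc(), k=6, which="LM")
print(np.sort(vals.real)[::-1])   # close to 5000, 4999, ... for this test matrix
```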

  19. Investigation of the role of plasma wave cascading processes in the formation of midlatitude irregularities utilizing GPS and radar observations

    NASA Astrophysics Data System (ADS)

    Eltrass, A.; Scales, W. A.; Erickson, P. J.; Ruohoniemi, J. M.; Baker, J. B. H.

    2016-06-01

    Recent studies reveal that midlatitude ionospheric irregularities are less understood due to lack of models and observations that can explain the characteristics of the observed wave structures. In this paper, the cascading processes of both the temperature gradient instability (TGI) and the gradient drift instability (GDI) are investigated as the cause of these irregularities. Based on observations obtained during a coordinated experiment between the Millstone Hill incoherent scatter radar and the Blackstone Super Dual Auroral Radar Network radar, a time series for the growth rate of both TGI and GDI is calculated for observations in the subauroral ionosphere under both quiet and disturbed geomagnetic conditions. Recorded GPS scintillation data are analyzed to monitor the amplitude scintillations and to obtain the spectral characteristics of irregularities producing ionospheric scintillations. Spatial power spectra of the density fluctuations associated with the TGI from nonlinear plasma simulations are compared with both the GPS scintillation spectral characteristics and previous in situ satellite spectral measurements. The spectral comparisons suggest that initially, TGI or/and GDI irregularities are generated at large-scale size (kilometer scale), and the dissipation of the energy associated with these irregularities occurs by generating smaller and smaller (decameter scale) irregularities. The alignment between experimental, theoretical, and computational results of this study suggests that in spite of expectations from linear growth rate calculations, cascading processes involving TGI and GDI are likely responsible for the midlatitude ionospheric irregularities associated with GPS scintillations during disturbed times.

  20. Investigating the mechanism of aggregation of colloidal particles during electrophoretic deposition

    NASA Astrophysics Data System (ADS)

    Guelcher, Scott Arthur

    Charged particles deposited near an electrode aggregate to form ordered clusters in the presence of both dc and ac applied electric fields. The aggregation process could have important applications in areas such as coatings technology and ceramics processing. This thesis has sought to identify the phenomena driving the aggregation process. According to the electroosmotic flow model developed by Solomentsev et al. (1997), aggregation in dc electric fields is caused by convection in the electroosmotic flow about deposited particles, and it is therefore an electrokinetic phenomenon which scales linearly with the electric field and the zeta-potential of the particles. Trajectories of pairs of particles aggregating to form doublets have been shown to scale linearly with the electric field and the zeta-potential of the particles, as predicted by the electroosmotic flow model. Furthermore, quantitative agreement has been demonstrated between the experimental and calculated trajectories for surface-to-surface separation distances between the particles ranging from one to two radii. The trajectories were calculated from the electroosmotic flow model with no fitting parameters; the only inputs to the model were the mobility of the deposited particles, the zeta-potential of the particles, and the applied electric field, all of which were measured independently. Clustering of colloidal particles deposited near an electrode in ac fields has also been observed, but a suitable model for the aggregation process has not been proposed and quantitative data in the literature are scarce. Trajectories of pairs of particles aggregating to form doublets in an ac field have been shown to scale with the root-mean-square (rms) electric field raised to the power 1.4 over the range of electric fields 10-35 V/cm (100-Hz sine and square waves). The aggregation is also frequency dependent; the doublets aggregate fastest at 30 Hz (square wave) and slowest at 500 Hz (square wave), while the interaction is repulsive at 1 kHz (square wave). The advantage of ac fields is that the process can be operated at frequencies sufficiently high to avoid the negative effects of electrochemical reactions.
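
    The exponents quoted above (linear in the dc case, roughly 1.4 in the rms field for the ac case) are the kind of result obtained from a log-log fit of an aggregation-rate measure against field strength. A minimal sketch with synthetic data (the rate model and noise level are placeholders):

```python
import numpy as np

# Hypothetical aggregation-rate data versus rms field strength (V/cm).
E_rms = np.array([10.0, 15.0, 20.0, 25.0, 30.0, 35.0])
rng = np.random.default_rng(1)
rate = 0.02 * E_rms**1.4 * (1.0 + 0.03 * rng.normal(size=E_rms.size))

# Fit the power-law exponent on log-log axes: log(rate) = n * log(E) + const.
n_exp, const = np.polyfit(np.log(E_rms), np.log(rate), 1)
print(f"fitted exponent ~ {n_exp:.2f}")   # close to 1.4 for this synthetic data
```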

  1. Large-scale linear programs in planning and prediction.

    DOT National Transportation Integrated Search

    2017-06-01

    Large-scale linear programs are at the core of many traffic-related optimization problems in both planning and prediction. Moreover, many of these involve significant uncertainty, and hence are modeled using either chance constraints, or robust optim...
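
    As a concrete, if toy-sized, illustration of the class of problems referred to above, the sketch below solves a small planning-style linear program with SciPy's HiGHS-based solver; the costs, capacities, and demand are hypothetical:

```python
import numpy as np
from scipy.optimize import linprog

# Toy planning LP: route flows x minimize total cost c.x subject to
# capacity limits (A_ub x <= b_ub) and demand conservation (A_eq x = b_eq).
c = np.array([4.0, 6.0, 3.0])              # per-unit route costs (hypothetical)
A_ub = np.array([[1.0, 0.0, 0.0],          # capacity of route 1
                 [0.0, 1.0, 1.0]])         # shared capacity of routes 2 and 3
b_ub = np.array([50.0, 80.0])
A_eq = np.array([[1.0, 1.0, 1.0]])         # total demand must be served
b_eq = np.array([100.0])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * 3, method="highs")
print(res.x, res.fun)   # optimal flows and total cost
```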

  2. Theoretical Infrared Spectra for Polycyclic Aromatic Hydrocarbon Neutrals, Cations and Anions

    NASA Technical Reports Server (NTRS)

    Langhoff, Stephen R.

    1995-01-01

    Calculations are carried out using density functional theory (DFT) to determine the harmonic frequencies and intensities of the neutrals and cations of thirteen polycyclic aromatic hydrocarbons (PAHs) up to the size of ovalene. Calculations are also carried out for a few PAH anions. The DFT harmonic frequencies, when uniformly scaled by the factor of 0.958 to account primarily for anharmonicity, agree with the matrix isolation fundamentals to within an average error of about 10 cm(-1). Electron correlation is found to significantly reduce the intensities of many of the cation harmonics, bringing them into much better agreement with the available experimental data. While the theoretical infrared spectra agree well with the experimental data for the neutral systems and for many of the cations, there are notable discrepancies with the experimental matrix isolation data for some PAH cations that are difficult to explain in terms of limitations in the calculations. In agreement with previous theoretical work, the present calculations show that the relative intensities for the astronomical unidentified infrared (UIR) bands agree reasonably well with those for a distribution of polycyclic aromatic hydrocarbon (PAH) cations, but not with a distribution of PAH neutrals. We also observe that the infrared spectra of highly symmetrical cations such as coronene agree much better with astronomical observations than do those of, for example, the polyacenes such as tetracene and pentacene. The total integrated intensities for the neutral species are found to increase linearly with size, while the total integrated intensities are much larger for the cations and scale more nearly quadratically with size. We conclude that emission from moderate-sized highly symmetric PAH cations such as coronene and larger could account for the UIR bands.
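
    The uniform scaling mentioned above is a one-line correction applied to every computed harmonic frequency. A minimal sketch with hypothetical frequencies:

```python
import numpy as np

SCALE = 0.958  # uniform scaling factor quoted above (accounts mainly for anharmonicity)

# Hypothetical DFT harmonic frequencies (cm^-1) for a PAH.
harmonic_cm1 = np.array([3062.0, 1604.5, 1498.2, 1178.9, 887.3])

# Scaled frequencies, to be compared with matrix-isolation fundamentals.
scaled_cm1 = SCALE * harmonic_cm1
print(scaled_cm1)
```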

  3. Polychaete functional diversity in shallow habitats: Shelter from the storm

    NASA Astrophysics Data System (ADS)

    Wouters, Julia M.; Gusmao, Joao B.; Mattos, Gustavo; Lana, Paulo

    2018-05-01

    Innovative approaches are needed to help understand how species diversity is related to the latitudinal gradient at large or small scales. We have applied a novel approach, by combining morphological and biological traits, to assess the relative importance of the large scale latitudinal gradient and regional morphodynamic drivers in shaping the functional diversity of polychaete assemblages in shallow water habitats, from exposed to estuarine sandy beaches. We used literature data on polychaetes from beaches along the southern and southeastern Brazilian coast together with data on beach types, slope, grain size, temperature, salinity, and chlorophyll a concentration. Generalized linear models on the FDis index for functional diversity calculated for each site and a combined RLQ and fourth-corner analysis were used to investigate relationships between functional traits and environmental variables. Functional diversity was not related to the latitudinal gradient but negatively correlated with grain size and beach slope. Functional diversity was highest in flat beaches with small grain size, little wave exposure and enhanced primary production, indicating that small scale morphodynamic conditions are the primary drivers of polychaete functional diversity.

  4. An extended harmonic balance method based on incremental nonlinear control parameters

    NASA Astrophysics Data System (ADS)

    Khodaparast, Hamed Haddad; Madinei, Hadi; Friswell, Michael I.; Adhikari, Sondipon; Coggon, Simon; Cooper, Jonathan E.

    2017-02-01

    A new formulation for calculating the steady-state responses of multiple-degree-of-freedom (MDOF) non-linear dynamic systems due to harmonic excitation is developed. This is aimed at solving multi-dimensional nonlinear systems using linear equations. Nonlinearity is parameterised by a set of 'non-linear control parameters' such that the dynamic system is effectively linear for zero values of these parameters and nonlinearity increases with increasing values of these parameters. Two sets of linear equations which are formed from a first-order truncated Taylor series expansion are developed. The first set of linear equations provides the summation of sensitivities of linear system responses with respect to non-linear control parameters and the second set are recursive equations that use the previous responses to update the sensitivities. The obtained sensitivities of steady-state responses are then used to calculate the steady state responses of non-linear dynamic systems in an iterative process. The application and verification of the method are illustrated using a non-linear Micro-Electro-Mechanical System (MEMS) subject to a base harmonic excitation. The non-linear control parameters in these examples are the DC voltages that are applied to the electrodes of the MEMS devices.

  5. Study of optical nonlinearities in Se-Te-Bi thin films

    NASA Astrophysics Data System (ADS)

    Sharma, Ambika; Yadav, Preeti; Kumari, Anshu

    2014-04-01

    The present work reports the nonlinear refractive index of Se85-xTe15Bix thin films calculated using the Ticha and Tichy relation. The nonlinear refractive index of these chalcogenide amorphous semiconductors is well correlated with the linear refractive index and the WDD parameters, which in turn depend on the density and molar volume of the system. The density of the system is calculated theoretically as well as experimentally using Archimedes' principle. The linear refractive index and WDD parameters are calculated from single transmission spectra in the spectral range 400-1500 nm. It is observed that both the linear and the nonlinear refractive index increase with Bi content. The results are explained on the basis of the increasing polarizability due to the larger atomic radius of Bi.

  6. The non-linear power spectrum of the Lyman alpha forest

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arinyo-i-Prats, Andreu; Miralda-Escudé, Jordi; Viel, Matteo

    2015-12-01

    The Lyman alpha forest power spectrum has been measured on large scales by the BOSS survey in SDSS-III at z∼ 2.3, has been shown to agree well with linear theory predictions, and has provided the first measurement of Baryon Acoustic Oscillations at this redshift. However, the power at small scales, affected by non-linearities, has not been well examined so far. We present results from a variety of hydrodynamic simulations to predict the redshift space non-linear power spectrum of the Lyα transmission for several models, testing the dependence on resolution and box size. A new fitting formula is introduced to facilitate the comparison of our simulation results with observations and other simulations. The non-linear power spectrum has a generic shape determined by a transition scale from linear to non-linear anisotropy, and a Jeans scale below which the power drops rapidly. In addition, we predict the two linear bias factors of the Lyα forest and provide a better physical interpretation of their values and redshift evolution. The dependence of these bias factors and the non-linear power on the amplitude and slope of the primordial fluctuations power spectrum, the temperature-density relation of the intergalactic medium, and the mean Lyα transmission, as well as the redshift evolution, is investigated and discussed in detail. A preliminary comparison to the observations shows that the predicted redshift distortion parameter is in good agreement with the recent determination of Blomqvist et al., but the density bias factor is lower than observed. We make all our results publicly available in the form of tables of the non-linear power spectrum that is directly obtained from all our simulations, and parameters of our fitting formula.
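
    The abstract describes, without giving it explicitly, a fitting formula built from a Kaiser-like linear prefactor, a transition to non-linear anisotropy, and a Jeans-scale cutoff. The sketch below encodes a generic formula of that type purely for illustration; neither the functional form of the correction nor the parameter values are taken from the paper:

```python
import numpy as np

def lya_forest_power(k, mu, P_lin, b_delta=-0.12, beta=1.5, k_nl=3.0, k_J=10.0):
    """Illustrative redshift-space Lya-forest power spectrum (k in h/Mpc).

    Linear Kaiser-like prefactor b^2 (1 + beta*mu^2)^2 times a nonlinear
    correction with a transition scale k_nl and a Jeans-like cutoff k_J.
    Generic form for illustration only; parameters are placeholders, not fits.
    """
    linear_part = b_delta**2 * (1.0 + beta * mu**2) ** 2 * P_lin
    correction = np.exp(-((k / k_nl) ** 2) * mu**2 - (k / k_J) ** 2)
    return linear_part * correction

k = np.logspace(-2, 1, 40)                        # h/Mpc
P_lin = 2e4 * k / (1.0 + (k / 0.02) ** 2) ** 1.3  # toy linear spectrum, not a real P(k)
print(lya_forest_power(k, mu=0.5, P_lin=P_lin)[:3])
```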

  7. A computer program for the simulation of folds of different sizes under the influence of gravity

    NASA Astrophysics Data System (ADS)

    Vacas Peña, José M.; Martínez Catalán, José R.

    2004-02-01

    Folding&g is a computer program, based on the finite element method, developed to simulate the process of natural folding from small to large scales in two dimensions. Written in Pascal code and compiled with Borland Delphi 3.0, the program has a friendly interactive user interface and can be used for research as well as educational purposes. Four main menu options allow the user to import or to build and to save a model data file, select the type of graphic output, introduce and modify several physical parameters and enter the calculation routines. The program employs isoparametric, initially rectangular elements with eight nodes, which can sustain large deformations. The mathematical procedure is based on the elasticity equations, but has been modified to simulate a viscous rheology, either linear or of power-law type. The parameters to be introduced include either the linear viscosity, or, when the viscosity is non-linear, the material constant, activation energy, temperature and power of the differential stress. All the parameters can be set by rows, which simulate layers. A toggle permits gravity to be introduced into the calculations. In this case, the density of the different rows must be specified, and the sizes of the finite elements and of the whole model become meaningful. Viscosity values can also be assigned to blocks of several rows and columns, which permits the modelling of heterogeneities such as rectangular areas of high strength, which can be used to simulate shearing components interfering with the buckling process. The program is applied to several cases of folding, including a single competent bed and multilayers, and its results are compared with analytical and experimental results. The influence of gravity is illustrated by the modelling of diapiric structures and of a large recumbent fold.

  8. Discovering charge density functionals and structure-property relationships with PROPhet: A general framework for coupling machine learning and first-principles methods

    DOE PAGES

    Kolb, Brian; Lentz, Levi C.; Kolpak, Alexie M.

    2017-04-26

    Modern ab initio methods have rapidly increased our understanding of solid state materials properties, chemical reactions, and the quantum interactions between atoms. However, poor scaling often renders direct ab initio calculations intractable for large or complex systems. There are two obvious avenues through which to remedy this problem: (i) develop new, less expensive methods to calculate system properties, or (ii) make existing methods faster. This paper describes an open source framework designed to pursue both of these avenues. PROPhet (short for PROPerty Prophet) utilizes machine learning techniques to find complex, non-linear mappings between sets of material or system properties. The result is a single code capable of learning analytical potentials, non-linear density functionals, and other structure-property or property-property relationships. These capabilities enable highly accurate mesoscopic simulations, facilitate computation of expensive properties, and enable the development of predictive models for systematic materials design and optimization. Here, this work explores the coupling of machine learning to ab initio methods through means both familiar (e.g., the creation of various potentials and energy functionals) and less familiar (e.g., the creation of density functionals for arbitrary properties), serving both to demonstrate PROPhet’s ability to create exciting post-processing analysis tools and to open the door to improving ab initio methods themselves with these powerful machine learning techniques.

  9. Discovering charge density functionals and structure-property relationships with PROPhet: A general framework for coupling machine learning and first-principles methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kolb, Brian; Lentz, Levi C.; Kolpak, Alexie M.

    Modern ab initio methods have rapidly increased our understanding of solid state materials properties, chemical reactions, and the quantum interactions between atoms. However, poor scaling often renders direct ab initio calculations intractable for large or complex systems. There are two obvious avenues through which to remedy this problem: (i) develop new, less expensive methods to calculate system properties, or (ii) make existing methods faster. This paper describes an open source framework designed to pursue both of these avenues. PROPhet (short for PROPerty Prophet) utilizes machine learning techniques to find complex, non-linear mappings between sets of material or system properties. The result is a single code capable of learning analytical potentials, non-linear density functionals, and other structure-property or property-property relationships. These capabilities enable highly accurate mesoscopic simulations, facilitate computation of expensive properties, and enable the development of predictive models for systematic materials design and optimization. Here, this work explores the coupling of machine learning to ab initio methods through means both familiar (e.g., the creation of various potentials and energy functionals) and less familiar (e.g., the creation of density functionals for arbitrary properties), serving both to demonstrate PROPhet’s ability to create exciting post-processing analysis tools and to open the door to improving ab initio methods themselves with these powerful machine learning techniques.

  10. Hurst Estimation of Scale Invariant Processes with Stationary Increments and Piecewise Linear Drift

    NASA Astrophysics Data System (ADS)

    Modarresi, N.; Rezakhah, S.

    The characteristic feature of discrete scale invariant (DSI) processes is the invariance of their finite-dimensional distributions under dilation by a certain scaling factor. A DSI process with piecewise linear drift and stationary increments inside prescribed scale intervals is introduced and studied. To identify the structure of the process, we first determine the scale intervals and their linear drifts and eliminate them. Then, a new method for the estimation of the Hurst parameter of such DSI processes is presented and applied to some period of the Dow Jones indices. This method is based on a fixed number of equally spaced samples inside successive scale intervals. We also present an efficient method for estimating the Hurst parameter of self-similar processes with stationary increments. We compare the performance of this method with the celebrated FA, DFA and DMA methods on simulated data of fractional Brownian motion (fBm).
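
    A common baseline for Hurst estimation of a self-similar process with stationary increments (distinct from the fixed-sample DSI estimator introduced above) uses the scaling of the increment variance, Var[X(t+τ) − X(t)] ∝ τ^(2H). A minimal sketch:

```python
import numpy as np

def hurst_from_increments(x, lags=(1, 2, 4, 8, 16, 32)):
    """Estimate H from Var[x(t+lag) - x(t)] ~ lag**(2H)."""
    lags = np.asarray(lags)
    variances = np.array([np.var(x[lag:] - x[:-lag]) for lag in lags])
    slope, _ = np.polyfit(np.log(lags), np.log(variances), 1)
    return slope / 2.0

# Toy example: ordinary Brownian motion has H = 0.5.
rng = np.random.default_rng(0)
bm = np.cumsum(rng.normal(size=100_000))
print(hurst_from_increments(bm))   # close to 0.5
```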

  11. Is the pain visual analogue scale linear and responsive to change? An exploration using Rasch analysis.

    PubMed

    Kersten, Paula; White, Peter J; Tennant, Alan

    2014-01-01

    Pain visual analogue scales (VAS) are commonly used in clinical trials and are often treated as an interval level scale without evidence that this is appropriate. This paper examines the internal construct validity and responsiveness of the pain VAS using Rasch analysis. Patients (n = 221, mean age 67, 58% female) with chronic stable joint pain (hip 40% or knee 60%) of mechanical origin waiting for joint replacement were included. Pain was scored on seven daily VASs. Rasch analysis was used to examine fit to the Rasch model. Responsiveness (Standardized Response Means, SRM) was examined on the raw ordinal data and the interval data generated from the Rasch analysis. Baseline pain VAS scores fitted the Rasch model, although 15 aberrant cases impacted on unidimensionality. There was some local dependency between items but this did not significantly affect the person estimates of pain. Daily pain (item difficulty) was stable, suggesting that single measures can be used. Overall, the SRMs derived from ordinal data overestimated the true responsiveness by 59%. Changes over time at the lower and higher end of the scale were represented by large jumps in interval equivalent data points; in the middle of the scale the reverse was seen. The pain VAS is a valid tool for measuring pain at one point in time. However, the pain VAS does not behave linearly and SRMs vary along the trait of pain. Consequently, Minimum Clinically Important Differences using raw data, or change scores in general, are invalid as these will either under- or overestimate true change; raw pain VAS data should not be used as a primary outcome measure or to inform parametric-based Randomised Controlled Trial power calculations in research studies; and Rasch analysis should be used to convert ordinal data to interval data prior to data interpretation.
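
    The responsiveness measure used above, the standardized response mean, is simply the mean change divided by the standard deviation of the change scores. A minimal sketch on synthetic scores (the numbers are placeholders, not study data):

```python
import numpy as np

def srm(baseline, follow_up):
    """Standardized Response Mean: mean change / SD of change."""
    change = follow_up - baseline
    return change.mean() / change.std(ddof=1)

rng = np.random.default_rng(2)
vas_baseline = rng.uniform(40, 90, size=200)            # synthetic VAS scores (mm)
vas_followup = vas_baseline - rng.uniform(5, 25, 200)   # synthetic post-treatment scores

print(f"SRM on raw scores: {srm(vas_baseline, vas_followup):.2f}")
# In the study, the same quantity computed on Rasch-converted interval scores
# differed substantially from the raw-score SRM.
```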

  12. Energy expenditure and wing beat frequency in relation to body mass in free flying Barn Swallows (Hirundo rustica).

    PubMed

    Schmidt-Wellenburg, Carola A; Biebach, Herbert; Daan, Serge; Visser, G Henk

    2007-04-01

    Many bird species steeply increase their body mass prior to migration. These fuel stores are necessary for long flights and to overcome ecological barriers. The elevated body mass is generally thought to cause higher flight costs. The relationship between mass and costs has been investigated mostly by interspecific comparison and by aerodynamic modelling. Here, we directly measured the energy expenditure of Barn Swallows (Hirundo rustica) flying unrestrained and repeatedly for several hours in a wind tunnel with natural variations in body mass. Energy expenditure during flight (e_f, in W) was found to increase with body mass (m, in g) following the equation e_f = 0.38 × m^0.58. The scaling exponent (0.58) is smaller than assumed in aerodynamic calculations and than observed in most interspecific allometric comparisons. Wing beat frequency (WBF, in Hz) also scales with body mass (WBF = 2.4 × m^0.38), but at a smaller exponent. Hence there is no linear relationship between e_f and WBF. We propose that spontaneous changes in body mass during endurance flights are accompanied by physiological changes (such as enhanced oxygen and nutrient supply of the muscles) that are not taken into consideration in standard aerodynamic calculations, and also do not appear in interspecific comparison.
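
    The two allometric relations quoted above can be evaluated directly; a minimal sketch for a hypothetical body mass:

```python
def flight_power_W(mass_g: float) -> float:
    """Energy expenditure during flight, e_f = 0.38 * m**0.58 (W, m in g)."""
    return 0.38 * mass_g**0.58

def wing_beat_frequency_Hz(mass_g: float) -> float:
    """Wing beat frequency, WBF = 2.4 * m**0.38 (Hz, m in g)."""
    return 2.4 * mass_g**0.38

m = 20.0  # hypothetical Barn Swallow body mass in grams
print(flight_power_W(m), wing_beat_frequency_Hz(m))
```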

  13. Molcas 8: New capabilities for multiconfigurational quantum chemical calculations across the periodic table.

    PubMed

    Aquilante, Francesco; Autschbach, Jochen; Carlson, Rebecca K; Chibotaru, Liviu F; Delcey, Mickaël G; De Vico, Luca; Fdez Galván, Ignacio; Ferré, Nicolas; Frutos, Luis Manuel; Gagliardi, Laura; Garavelli, Marco; Giussani, Angelo; Hoyer, Chad E; Li Manni, Giovanni; Lischka, Hans; Ma, Dongxia; Malmqvist, Per Åke; Müller, Thomas; Nenov, Artur; Olivucci, Massimo; Pedersen, Thomas Bondo; Peng, Daoling; Plasser, Felix; Pritchard, Ben; Reiher, Markus; Rivalta, Ivan; Schapiro, Igor; Segarra-Martí, Javier; Stenrup, Michael; Truhlar, Donald G; Ungur, Liviu; Valentini, Alessio; Vancoillie, Steven; Veryazov, Valera; Vysotskiy, Victor P; Weingart, Oliver; Zapata, Felipe; Lindh, Roland

    2016-02-15

    In this report, we summarize and describe the recent unique updates and additions to the Molcas quantum chemistry program suite as contained in release version 8. These updates include natural and spin orbitals for studies of magnetic properties, local and linear scaling methods for the Douglas-Kroll-Hess transformation, the generalized active space concept in MCSCF methods, a combination of multiconfigurational wave functions with density functional theory in the MC-PDFT method, additional methods for computation of magnetic properties, methods for diabatization, analytical gradients of state average complete active space SCF in association with density fitting, methods for constrained fragment optimization, large-scale parallel multireference configuration interaction including analytic gradients via the interface to the Columbus package, and approximations of the CASPT2 method to be used for computations of large systems. In addition, the report includes the description of a computational machinery for nonlinear optical spectroscopy through an interface to the QM/MM package Cobramm. Further, a module to run molecular dynamics simulations is added, two surface hopping algorithms are included to enable nonadiabatic calculations, and the DQ method for diabatization is added. Finally, we report on improvements with respect to alternative file options and parallelization. © 2015 Wiley Periodicals, Inc.

  14. Low frequency acoustic waves from explosive sources in the atmosphere

    NASA Astrophysics Data System (ADS)

    Millet, Christophe; Robinet, Jean-Christophe; Roblin, Camille; Gloerfelt, Xavier

    2006-11-01

    In this study, a perturbative formulation of the nonlinear Euler equations is used to compute the pressure variation for low frequency acoustic waves from explosive sources in real atmospheres. Based on a Dispersion-Relation-Preserving (DRP) finite difference scheme, the discretization provides good properties for both sound generation and long range sound propagation over a variety of spatial atmospheric scales. It also assures that there is no wave mode coupling in the numerical simulation. The background flow is obtained by matching the comprehensive empirical global model of horizontal winds HWM-93 (and MSISE-90 for the temperature profile) with meteorological reanalysis of the lower atmosphere. Benchmark calculations representing cases where there is downward and upward refraction (including shadow zones), ducted propagation, and generation of acoustic waves from low speed shear layers are considered for validation. For all cases, results show a very good agreement with analytical solutions, when available, and with other standard approaches, such as ray tracing and the normal mode technique. Comparison of calculations with experimental data from the high explosive ``Misty Picture'' test, which provided the scaled equivalent airblast of an 8 kt nuclear device (on May 14, 1987), is also considered. It is found that instability waves develop less than one hour after the wavefront generated by the detonation passes.

  15. Clinical application of the pO(2)-pCO(2) diagram.

    PubMed

    Paulev, P-E; Siggaard-Andersen, O

    2004-10-01

    Based on the classic, linear blood gas diagram a logarithmic blood gas map was constructed. The scales were extended by the use of logarithmic axes in order to allow for high patient values. Patients with lung disorders often have high arterial carbon dioxide tensions, and patients on supplementary oxygen typically respond with high oxygen tensions off the scale of the classic diagram. Two case histories illustrate the clinical application of the logarithmic blood gas map. Variables from the two patients were measured by the use of blood gas analysis equipment. Measured and calculated values are tabulated. The calculations were performed using the oxygen status algorithm. When interpreting the graph for a given patient it is recommended first to observe the location of the marker for the partial pressure of oxygen in inspired, humidified air (I) to see whether the patient is breathing atmospheric air or air with supplementary oxygen. Then observe the location of the arterial point (a) to see whether hypoxemia or hypercapnia appears to be the primary disturbance. Finally observe the alveolo-arterial oxygen tension difference to estimate the degree of veno-arterial shunting. If the mixed venous point (v) is available, then observe the value of the mixed venous oxygen tension. This is the most important indicator of global tissue hypoxia.

  16. Impurities in a non-axisymmetric plasma. Transport and effect on bootstrap current

    DOE PAGES

    Mollén, A.; Landreman, M.; Smith, H. M.; ...

    2015-11-20

    Impurities cause radiation losses and plasma dilution, and in stellarator plasmas the neoclassical ambipolar radial electric field is often unfavorable for avoiding strong impurity peaking. In this work we use a new continuum drift-kinetic solver, the SFINCS code (the Stellarator Fokker-Planck Iterative Neoclassical Conservative Solver) [M. Landreman et al., Phys. Plasmas 21 (2014) 042503] which employs the full linearized Fokker-Planck-Landau operator, to calculate neoclassical impurity transport coefficients for a Wendelstein 7-X (W7-X) magnetic configuration. We compare SFINCS calculations with theoretical asymptotes in the high collisionality limit. We observe and explain a 1/nu-scaling of the inter-species radial transport coefficient at low collisionality, arising due to the field term in the inter-species collision operator, and which is not found with simplified collision models even when momentum correction is applied. However, this type of scaling disappears if a radial electric field is present. We use SFINCS to analyze how the impurity content affects the neoclassical impurity dynamics and the bootstrap current. We show that a change in plasma effective charge Z eff of order unity can affect the bootstrap current enough to cause a deviation in the divertor strike point locations.

  17. Algebraic approach to electronic spectroscopy and dynamics.

    PubMed

    Toutounji, Mohamad

    2008-04-28

    Lie algebra, Zassenhaus, and parameter differentiation techniques are utilized to break up the exponential of a bilinear Hamiltonian operator into a product of noncommuting exponential operators by the virtue of the theory of Wei and Norman [J. Math. Phys. 4, 575 (1963); Proc. Am. Math. Soc., 15, 327 (1964)]. There are about three different ways to find the Zassenhaus exponents, namely, binomial expansion, Suzuki formula, and q-exponential transformation. A fourth, and most reliable method, is provided. Since linearly displaced and distorted (curvature change upon excitation/emission) Hamiltonian and spin-boson Hamiltonian may be classified as bilinear Hamiltonians, the presented algebraic algorithm (exponential operator disentanglement exploiting six-dimensional Lie algebra case) should be useful in spin-boson problems. The linearly displaced and distorted Hamiltonian exponential is only treated here. While the spin-boson model is used here only as a demonstration of the idea, the herein approach is more general and powerful than the specific example treated. The optical linear dipole moment correlation function is algebraically derived using the above mentioned methods and coherent states. Coherent states are eigenvectors of the bosonic lowering operator a and not of the raising operator a(+). While exp(a(+)) translates coherent states, exp(a(+)a(+)) operation on coherent states has always been a challenge, as a(+) has no eigenvectors. Three approaches, and the results, of that operation are provided. Linear absorption spectra are derived, calculated, and discussed. The linear dipole moment correlation function for the pure quadratic coupling case is expressed in terms of Legendre polynomials to better show the even vibronic transitions in the absorption spectrum. Comparison of the present line shapes to those calculated by other methods is provided. Franck-Condon factors for both linear and quadratic couplings are exactly accounted for by the herein calculated linear absorption spectra. This new methodology should easily pave the way to calculating the four-point correlation function, F(tau(1),tau(2),tau(3),tau(4)), of which the optical nonlinear response function may be procured, as evaluating F(tau(1),tau(2),tau(3),tau(4)) is only evaluating the optical linear dipole moment correlation function iteratively over different time intervals, which should allow calculating various optical nonlinear temporal/spectral signals.

  18. Analytic prediction of baryonic effects from the EFT of large scale structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lewandowski, Matthew; Perko, Ashley; Senatore, Leonardo, E-mail: mattlew@stanford.edu, E-mail: perko@stanford.edu, E-mail: senatore@stanford.edu

    2015-05-01

    The large scale structures of the universe will likely be the next leading source of cosmological information. It is therefore crucial to understand their behavior. The Effective Field Theory of Large Scale Structures provides a consistent way to perturbatively predict the clustering of dark matter at large distances. The fact that baryons move distances comparable to dark matter allows us to infer that baryons at large distances can be described in a similar formalism: the backreaction of short-distance non-linearities and of star-formation physics at long distances can be encapsulated in an effective stress tensor, characterized by a few parameters. The functional form of baryonic effects can therefore be predicted. In the power spectrum the leading contribution goes as ∝ k{sup 2} P(k), with P(k) being the linear power spectrum and with the numerical prefactor depending on the details of the star-formation physics. We also perform the resummation of the contribution of the long-wavelength displacements, allowing us to consistently predict the effect of the relative motion of baryons and dark matter. We compare our predictions with simulations that contain several implementations of baryonic physics, finding percent agreement up to relatively high wavenumbers such as k ≅ 0.3 hMpc{sup −1} or k ≅ 0.6 hMpc{sup −1}, depending on the order of the calculation. Our results open a novel way to understand baryonic effects analytically, as well as to interface with simulations.
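
    The leading baryonic term quoted above enters the power spectrum as a correction proportional to k² times the linear spectrum. A minimal sketch with a toy linear spectrum and a placeholder amplitude (the prefactor must in practice be fitted to simulations or data):

```python
import numpy as np

def baryon_corrected_power(k, P_lin, c_b=1.0):
    """Leading-order baryonic correction: P(k) -> P(k) + c_b * k**2 * P_lin(k).

    c_b (in (Mpc/h)^2) encodes the star-formation/feedback physics and must be
    fitted to simulations or data; the value used below is a placeholder.
    """
    return P_lin + c_b * k**2 * P_lin

k = np.logspace(-2, np.log10(0.6), 40)            # h/Mpc
P_lin = 2e4 * k / (1.0 + (k / 0.02) ** 2) ** 1.3  # toy linear spectrum, not a real P(k)
print(baryon_corrected_power(k, P_lin, c_b=5.0)[:5])
```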

  19. Comparison between a Weibull proportional hazards model and a linear model for predicting the genetic merit of US Jersey sires for daughter longevity.

    PubMed

    Caraviello, D Z; Weigel, K A; Gianola, D

    2004-05-01

    Predicted transmitting abilities (PTA) of US Jersey sires for daughter longevity were calculated using a Weibull proportional hazards sire model and compared with predictions from a conventional linear animal model. Culling data from 268,008 Jersey cows with first calving from 1981 to 2000 were used. The proportional hazards model included time-dependent effects of herd-year-season contemporary group and parity by stage of lactation interaction, as well as time-independent effects of sire and age at first calving. Sire variances and parameters of the Weibull distribution were estimated, providing heritability estimates of 4.7% on the log scale and 18.0% on the original scale. The PTA of each sire was expressed as the expected risk of culling relative to daughters of an average sire. Risk ratios (RR) ranged from 0.7 to 1.3, indicating that the risk of culling for daughters of the best sires was 30% lower than for daughters of average sires and nearly 50% lower than for daughters of the poorest sires. Sire PTA from the proportional hazards model were compared with PTA from a linear model similar to that used for routine national genetic evaluation of length of productive life (PL) using cross-validation in independent samples of herds. Models were compared using logistic regression of daughters' stayability to second, third, fourth, or fifth lactation on their sires' PTA values, with alternative approaches for weighting the contribution of each sire. Models were also compared using logistic regression of daughters' stayability to 36, 48, 60, 72, and 84 mo of life. The proportional hazards model generally yielded more accurate predictions according to these criteria, but differences in predictive ability between methods were smaller when using a Kullback-Leibler distance than with other approaches. Results of this study suggest that survival analysis methodology may provide more accurate predictions of genetic merit for longevity than conventional linear models.

  20. Visual analog scale (VAS) for assessment of acute mountain sickness (AMS) on Aconcagua.

    PubMed

    Van Roo, Jon D; Lazio, Matthew P; Pesce, Carlos; Malik, Sanjeev; Courtney, D Mark

    2011-03-01

    The Lake Louise AMS Self-Report Score (LLSelf) is a commonly used, validated assessment of acute mountain sickness (AMS). We compared LLSelf and visual analog scales (VAS) to quantify AMS on Aconcagua (6962 m). Prospective observational cohort study at Plaza de Mulas base camp (4365 m), Aconcagua Provincial Park, Argentina. Volunteers climbing in January 2009 were enrolled at base camp and ascended at their own pace. They completed the LLSelf, an overall VAS [VAS(o)], and 5 individual VAS [VAS(i)] corresponding to the items of the LLSelf when symptoms were maximal. Composite VAS [VAS(c)] was calculated as the sum of the 5 VAS(i). A total of 127 volunteers consented to the study. Response rate was 52.0%. AMS occurred in 77.3% of volunteers, while 48.5% developed severe AMS. Median (interquartile range, IQR) LLSelf was 4 (3-7). Median (IQR) VAS(o) was 36 mm (23-59). VAS(o) was linear and correlated with LLSelf: slope = 6.7 (95% CI: 4.4-9.0), intercept = 3.0 (95% CI: -10.0-16.1), ρ = 0.71, τ = 0.55, R(2) = 0.45, p < 0.001. Median (IQR) VAS(c) was 29 (13-44). VAS(c) was also linear and correlated with LLSelf: slope = 5.9 (95% CI: 4.9-6.9), intercept = -0.6 (95% CI: -6.3-5.1), ρ = 0.83, τ = 0.68, R(2) = 0.73, p < 0.001. The relationship between the 5 VAS(i) and LLSelf(i) was less significant and less linear than that between VAS(o), VAS(c), and LLSelf. While both VAS(o) and VAS(c) for assessment of AMS appear to be linear with respect to LLSelf, the amount of scatter within the VAS is considerable. The LLSelf remains the gold standard for the diagnosis of AMS. Copyright © 2011 Wilderness Medical Society. Published by Elsevier Inc. All rights reserved.

  1. The symmetric quartic map for trajectories of magnetic field lines in elongated divertor tokamak plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, Morgin; Wadi, Hasina; Ali, Halima

    The coordinates of the area-preserving map equations for integration of magnetic field line trajectories in divertor tokamaks can be any coordinates for which a transformation to ({psi}{sub t},{theta},{phi}) coordinates exists [A. Punjabi, H. Ali, T. Evans, and A. Boozer, Phys. Lett. A 364, 140 (2007)]. {psi}{sub t} is toroidal magnetic flux, {theta} is poloidal angle, and {phi} is toroidal angle. This freedom is exploited to construct the symmetric quartic map such that the only parameter that determines magnetic geometry is the elongation of the separatrix surface. The poloidal flux inside the separatrix, the safety factor as a function of normalized minor radius, and the magnetic perturbation from the symplectic discretization are all held constant, and only the elongation {kappa} is varied. The width of the stochastic layer, the area and fractal dimension of the magnetic footprint, and the average radial diffusion coefficient of magnetic field lines from the stochastic layer are calculated, together with how these quantities scale with {kappa}. The symmetric quartic map gives the correct scalings, which are consistent with the scalings of coordinates with {kappa}. The effects of m=1, n={+-}1 internal perturbation with the amplitude that is expected to occur in tokamaks are calculated by adding a term [H. Ali, A. Punjabi, A. H. Boozer, and T. Evans, Phys. Plasmas 11, 1908 (2004)] to the symmetric quartic map. In this case, the width of the stochastic layer scales as the 0.35 power of {kappa}. The area of the footprint is roughly constant. The average radial diffusion coefficient of field lines near the X-point scales linearly with {kappa}. The low mn perturbation changes the quasisymmetric structure of the footprint and reorganizes it into a single, large scale, asymmetric structure. The symmetric quartic map is combined with the dipole map [A. Punjabi, H. Ali, and A. H. Boozer, Phys. Plasmas 10, 3992 (2003)] to calculate the effects of magnetic perturbation from a current carrying coil. The coil position and coil current are held constant. The dipole perturbation enhances the magnetic shear. The width of the stochastic layer scales exponentially with {kappa}. The area of the footprint decreases as {kappa} increases. The radial diffusion coefficient of field lines scales exponentially with {kappa}. The dipole perturbation changes the topology of the footprint. It breaks up the toroidally spiraling footprint into a number of separate asymmetric toroidal strips. Practical applications of the symmetric quartic map to elongated divertor tokamak plasmas are suggested.

  2. The symmetric quartic map for trajectories of magnetic field lines in elongated divertor tokamak plasmas

    NASA Astrophysics Data System (ADS)

    Jones, Morgin; Wadi, Hasina; Ali, Halima; Punjabi, Alkesh

    2009-04-01

    The coordinates of the area-preserving map equations for integration of magnetic field line trajectories in divertor tokamaks can be any coordinates for which a transformation to (ψt,θ,φ) coordinates exists [A. Punjabi, H. Ali, T. Evans, and A. Boozer, Phys. Lett. A 364, 140 (2007)]. ψt is toroidal magnetic flux, θ is poloidal angle, and φ is toroidal angle. This freedom is exploited to construct the symmetric quartic map such that the only parameter that determines magnetic geometry is the elongation of the separatrix surface. The poloidal flux inside the separatrix, the safety factor as a function of normalized minor radius, and the magnetic perturbation from the symplectic discretization are all held constant, and only the elongation κ is varied. The width of the stochastic layer, the area and fractal dimension of the magnetic footprint, and the average radial diffusion coefficient of magnetic field lines from the stochastic layer are calculated, together with how these quantities scale with κ. The symmetric quartic map gives the correct scalings, which are consistent with the scalings of coordinates with κ. The effects of m = 1, n = ±1 internal perturbation with the amplitude that is expected to occur in tokamaks are calculated by adding a term [H. Ali, A. Punjabi, A. H. Boozer, and T. Evans, Phys. Plasmas 11, 1908 (2004)] to the symmetric quartic map. In this case, the width of the stochastic layer scales as the 0.35 power of κ. The area of the footprint is roughly constant. The average radial diffusion coefficient of field lines near the X-point scales linearly with κ. The low mn perturbation changes the quasisymmetric structure of the footprint and reorganizes it into a single, large scale, asymmetric structure. The symmetric quartic map is combined with the dipole map [A. Punjabi, H. Ali, and A. H. Boozer, Phys. Plasmas 10, 3992 (2003)] to calculate the effects of magnetic perturbation from a current carrying coil. The coil position and coil current are held constant. The dipole perturbation enhances the magnetic shear. The width of the stochastic layer scales exponentially with κ. The area of the footprint decreases as κ increases. The radial diffusion coefficient of field lines scales exponentially with κ. The dipole perturbation changes the topology of the footprint. It breaks up the toroidally spiraling footprint into a number of separate asymmetric toroidal strips. Practical applications of the symmetric quartic map to elongated divertor tokamak plasmas are suggested.

  3. A versatile program for the calculation of linear accelerator room shielding.

    PubMed

    Hassan, Zeinab El-Taher; Farag, Nehad M; Elshemey, Wael M

    2018-03-22

    This work aims at designing a computer program to calculate the necessary amount of shielding for a given or proposed linear accelerator room design in radiotherapy. The program (Shield Calculation in Radiotherapy, SCR) has been developed using Microsoft Visual Basic. It applies the treatment room shielding calculations of NCRP report no. 151 to calculate proper shielding thicknesses for a given linear accelerator treatment room design. The program is composed of six main user-friendly interfaces. The first enables the user to upload their choice of treatment room design and to measure the distances required for shielding calculations. The second interface enables the user to calculate the primary barrier thickness in case of three-dimensional conventional radiotherapy (3D-CRT), intensity modulated radiotherapy (IMRT) and total body irradiation (TBI). The third interface calculates the required secondary barrier thickness due to both scattered and leakage radiation. The fourth and fifth interfaces provide a means to calculate the photon dose equivalent for low and high energy radiation, respectively, in door and maze areas. The sixth interface enables the user to calculate the skyshine radiation for photons and neutrons. The SCR program has been successfully validated, precisely reproducing all of the calculated examples presented in NCRP report no. 151 in a simple and fast manner. Moreover, it easily performed the same calculations for a test design that was also calculated manually, and produced the same results. The program includes a new and important feature that is the ability to calculate required treatment room thickness in case of IMRT and TBI. It is characterised by simplicity, precision, data saving, printing and retrieval, in addition to providing a means for uploading and testing any proposed treatment room shielding design. The SCR program provides comprehensive, simple, fast and accurate room shielding calculations in radiotherapy.
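
    For orientation, the primary-barrier part of the NCRP Report No. 151 calculation that the program automates reduces to a transmission factor B = P d²/(W U T) and a thickness built from tenth-value layers. A minimal sketch; the workload, use, occupancy, and TVL numbers below are placeholders, not tabulated NCRP data:

```python
import math

def primary_barrier_thickness(P, d, W, U, T, tvl1, tvl_e):
    """NCRP 151 primary barrier: B = P*d^2/(W*U*T); t = TVL1 + (n-1)*TVLe.

    P: shielding design goal (Sv/week), d: source-to-point distance (m),
    W: workload (Gy/week at 1 m), U: use factor, T: occupancy factor,
    tvl1 / tvl_e: first and equilibrium tenth-value layers (same length unit
    as the returned thickness); TVL values must come from NCRP 151 tables.
    """
    B = P * d**2 / (W * U * T)
    n_tvl = -math.log10(B)
    return tvl1 + (n_tvl - 1.0) * tvl_e

# Hypothetical 6 MV example with placeholder TVLs for concrete (cm).
t = primary_barrier_thickness(P=1.2e-4, d=6.0, W=450.0, U=0.25, T=1.0,
                              tvl1=37.0, tvl_e=33.0)
print(f"required primary barrier ~ {t:.0f} cm of concrete")
```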

  4. Linear Transformation Method for Multinuclide Decay Calculation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ding Yuan

    2010-12-29

    A linear transformation method for generic multinuclide decay calculations is presented together with its properties and implications. The method takes advantage of the linear form of the decay solution N(t) = F(t)N_0, where N(t) is a column vector that represents the numbers of atoms of the radioactive nuclides in the decay chain, N_0 is the initial value vector of N(t), and F(t) is a lower triangular matrix whose time-dependent elements are independent of the initial values of the system.
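
    A minimal sketch of the linear form N(t) = F(t)N_0 for a hypothetical three-member chain A → B → C. The half-lives are invented, and SciPy's matrix exponential stands in for whatever closed-form construction of the lower triangular F(t) the report uses.

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical decay constants (1/s) for a chain A -> B -> C (C stable).
lam = np.array([1e-3, 5e-4, 0.0])

# Decay matrix: dN/dt = A N, lower triangular for a straight chain.
A = np.array([[-lam[0],     0.0,     0.0],
              [ lam[0], -lam[1],     0.0],
              [    0.0,  lam[1], -lam[2]]])

N0 = np.array([1e20, 0.0, 0.0])   # initial numbers of atoms
t = 3600.0                        # elapsed time (s)

F = expm(A * t)                   # F(t) is lower triangular, independent of N0
print(F @ N0)                     # N(t)
```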

  5. Measuring Renyi entanglement entropy in quantum Monte Carlo simulations.

    PubMed

    Hastings, Matthew B; González, Iván; Kallin, Ann B; Melko, Roger G

    2010-04-16

    We develop a quantum Monte Carlo procedure, in the valence bond basis, to measure the Renyi entanglement entropy of a many-body ground state as the expectation value of a unitary Swap operator acting on two copies of the system. An improved estimator involving the ratio of Swap operators for different subregions enables convergence of the entropy in a simulation time polynomial in the system size. We demonstrate convergence of the Renyi entropy to exact results for a Heisenberg chain. Finally, we calculate the scaling of the Renyi entropy in the two-dimensional Heisenberg model and confirm that the Néel ground state obeys the expected area law for systems up to linear size L=32.
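
    For orientation, the estimator described here rests on the standard identity relating the second Renyi entropy of a subregion A to the expectation value of the Swap operator on two replicas, with the improved estimator built from ratios for nested subregions (a textbook relation, not a new result of the paper):

```latex
S_2(A) = -\ln \operatorname{Tr}\!\left[\rho_A^{2}\right]
       = -\ln \big\langle \mathrm{Swap}_A \big\rangle_{\text{two copies}},
\qquad
S_2(A) - S_2(B) = -\ln \frac{\langle \mathrm{Swap}_A \rangle}{\langle \mathrm{Swap}_B \rangle}.
```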

  6. A redshift survey of IRAS galaxies. V - The acceleration on the Local Group

    NASA Technical Reports Server (NTRS)

    Strauss, Michael A.; Yahil, Amos; Davis, Marc; Huchra, John P.; Fisher, Karl

    1992-01-01

    The acceleration on the Local Group is calculated based on a full-sky redshift survey of 5288 galaxies detected by IRAS. A formalism is developed to compute the distribution function of the IRAS acceleration for a given power spectrum of initial perturbations. The computed acceleration on the Local Group points 18-28 deg from the direction of the Local Group peculiar velocity vector. The data suggest that the CMB dipole is indeed due to the motion of the Local Group, that this motion is gravitationally induced, and that the distribution of IRAS galaxies on large scales is related to that of dark matter by a simple linear biasing model.

  7. Calculation of the coherent synchrotron radiation impedance from a wiggler

    NASA Astrophysics Data System (ADS)

    Wu, Juhao; Raubenheimer, Tor O.; Stupakov, Gennady V.

    2003-04-01

    Most studies of coherent synchrotron radiation (CSR) have considered only the radiation from independent dipole magnets. However, in the damping rings of future linear colliders, a large fraction of the radiation power will be emitted in damping wigglers. In this paper, the longitudinal wakefield and impedance due to CSR in a wiggler are derived in the limit of a large wiggler parameter K. After an appropriate scaling, the results can be expressed in terms of universal functions, which are independent of K. Analytical asymptotic results are obtained for the wakefield in the limit of large and small distances, and for the impedance in the limit of small and high frequencies.

  8. Relationship of ultrasound signal intensity with SonoVue concentration at body temperature in vitro

    NASA Astrophysics Data System (ADS)

    Yang, Xin; Li, Jing; He, Xiaoling; Wu, Kaizhi; Yuan, Yun; Ding, Mingyue

    2014-04-01

    In this paper, the relationship between image intensity and ultrasound contrast agent (UCA) concentration is investigated. Experiments are conducted in a water bath using a silicone tube filled with UCA (SonoVue) at different concentrations (100 μl/l to 6000 μl/l) at around 37 °C to simulate the temperature of the human body. The mean gray-scale intensity within the region of interest (ROI) is calculated to obtain the plot of signal intensity versus UCA concentration. The results show that the intensity first increases linearly, reaching a peak at approximately 1500 μl/l, and then trends downward owing to multiple scattering (MS) effects.

  9. Finite Larmor radius effects on the (m = 2, n = 1) cylindrical tearing mode

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Y.; Chowdhury, J.; Parker, S. E.

    2015-04-15

    New field solvers are developed in the gyrokinetic code GEM [Chen and Parker, J. Comput. Phys. 220, 839 (2007)] to simulate low-n modes. A novel discretization is developed for the ion polarization term in the gyrokinetic vorticity equation. An eigenmode analysis with finite Larmor radius effects is developed to study the linear resistive tearing mode. The mode growth rate is shown to scale with resistivity as γ ∼ η^(1/3), the same as the semi-collisional regime in previous kinetic treatments [Drake and Lee, Phys. Fluids 20, 1341 (1977)]. Tearing mode simulations with gyrokinetic ions are verified with the eigenmode calculation.

  10. Combined linear theory/impact theory method for analysis and design of high speed configurations

    NASA Technical Reports Server (NTRS)

    Brooke, D.; Vondrasek, D. V.

    1980-01-01

    Pressure distributions on a wing body at Mach 4.63 are calculated. The combined theory is shown to give improved predictions over either linear theory or impact theory alone. The combined theory is also applied in the inverse design mode to calculate optimum camber slopes at Mach 4.63. Comparisons with optimum camber slopes obtained from unmodified linear theory show large differences. Analysis of the results indicates that the combined theory correctly predicts the effect of thickness on the loading distributions at high Mach numbers, and that finite-thickness wings optimized at high Mach numbers using unmodified linear theory will not achieve the minimum drag characteristics for which they are designed.

  11. Allometry of animal-microbe interactions and global census of animal-associated microbes.

    PubMed

    Kieft, Thomas L; Simmons, Karen A

    2015-07-07

    Animals live in close association with microorganisms, mostly prokaryotes, living in or on them as commensals, mutualists or parasites, and profoundly affecting host fitness. Most animal-microbe studies focus on microbial community structure; for this project, allometry (scaling of animal attributes with animal size) was applied to animal-microbe relationships across a range of species spanning 12 orders of magnitude in animal mass, from nematodes to whales. Microbial abundances per individual animal were gleaned from published literature and also microscopically counted in three species. Abundance of prokaryotes/individual versus animal mass scales as a nearly linear power function (exponent = 1.07, R^2 = 0.94). Combining this power function with allometry of animal abundance indicates that macrofauna have an outsized share of animal-associated microorganisms. The total number of animal-associated prokaryotes in Earth's land animals was calculated to be 1.3-1.4 × 10^25 cells and the total of marine animal-associated microbes was calculated to be 8.6-9.0 × 10^24 cells. Animal-associated microbes thus total 2.1-2.3 × 10^25 of the approximately 10^30 prokaryotes on the Earth. Microbes associated with humans comprise 3.3-3.5% of Earth's animal-associated microbes, and domestic animals harbour 14-20% of all animal-associated microbes, adding a new dimension to the scale of human impact on the biosphere. This novel allometric power function may reflect underlying mechanisms involving the transfer of energy and materials between microorganisms and their animal hosts. Microbial diversity indices of animal gut communities and gut microbial species richness for 60 mammals did not indicate significant scaling relationships with animal body mass; however, further research in this area is warranted. © 2015 The Author(s) Published by the Royal Society. All rights reserved.
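
    A minimal sketch of how such an allometric exponent can be recovered: fit a straight line in log-log space. The data below are synthetic placeholders, not the values compiled in the paper.

```python
import numpy as np

# Synthetic stand-in data: animal mass (g) and associated prokaryote counts.
mass = np.array([1e-3, 1e0, 1e3, 1e6, 1e8])
noise = np.exp(0.2 * np.random.default_rng(1).standard_normal(5))
microbes = 1e10 * mass**1.07 * noise

# Allometric fit: log10(N) = log10(a) + b*log10(M)
b, log_a = np.polyfit(np.log10(mass), np.log10(microbes), 1)
print(f"fitted exponent b = {b:.2f}, prefactor a = {10**log_a:.2e}")
```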

  12. Resonance behaviour of whole-body averaged specific energy absorption rate (SAR) in the female voxel model, NAOMI

    NASA Astrophysics Data System (ADS)

    Dimbylow, Peter

    2005-09-01

    Finite-difference time-domain (FDTD) calculations have been performed of the whole-body averaged specific energy absorption rate (SAR) in a female voxel model, NAOMI, under isolated and grounded conditions from 10 MHz to 3 GHz. The 2 mm resolution voxel model, NAOMI, was scaled to a height of 1.63 m and a mass of 60 kg, the dimensions of the ICRP reference adult female. Comparison was made with SAR values from a reference male voxel model, NORMAN. A broad SAR resonance in the NAOMI values was found around 900 MHz and a resulting enhancement, up to 25%, over the values for the male voxel model, NORMAN. This latter result confirmed previously reported higher values in a female model. The effect of differences in anatomy was investigated by comparing values for 10-, 5- and 1-year-old phantoms rescaled to the ICRP reference values of height and mass which are the same for both sexes. The broad resonance in the NAOMI child values around 1 GHz is still a strong feature. A comparison has been made with ICNIRP guidelines. The ICNIRP occupational reference level provides a conservative estimate of the whole-body averaged SAR restriction. The linear scaling of the adult phantom using different factors in longitudinal and transverse directions, in order to match the ICRP stature and weight, does not exactly reproduce the anatomy of children. However, for public exposure the calculations with scaled child models indicate that the ICNIRP reference level may not provide a conservative estimate of the whole-body averaged SAR restriction, above 1.2 GHz for scaled 5- and 1-year-old female models, although any underestimate is by less than 20%.
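
    A minimal sketch of the two-factor linear scaling mentioned here, assuming a phantom is rescaled to a target height by a longitudinal factor and to a target mass by a common transverse factor at constant density. The ICRP reference numbers below are the adult-female values quoted in the abstract; the model dimensions are placeholders.

```python
def scale_factors(h_model, m_model, h_ref, m_ref):
    """Longitudinal and transverse scale factors for a voxel phantom.

    Assumes mass scales as s_long * s_trans**2 at constant tissue density.
    """
    s_long = h_ref / h_model
    s_trans = (m_ref / (m_model * s_long)) ** 0.5
    return s_long, s_trans

# Placeholder model dimensions rescaled to the ICRP reference female (1.63 m, 60 kg).
print(scale_factors(h_model=1.70, m_model=65.0, h_ref=1.63, m_ref=60.0))
```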

  13. Resonance behaviour of whole-body averaged specific energy absorption rate (SAR) in the female voxel model, NAOMI.

    PubMed

    Dimbylow, Peter

    2005-09-07

    Finite-difference time-domain (FDTD) calculations have been performed of the whole-body averaged specific energy absorption rate (SAR) in a female voxel model, NAOMI, under isolated and grounded conditions from 10 MHz to 3 GHz. The 2 mm resolution voxel model, NAOMI, was scaled to a height of 1.63 m and a mass of 60 kg, the dimensions of the ICRP reference adult female. Comparison was made with SAR values from a reference male voxel model, NORMAN. A broad SAR resonance in the NAOMI values was found around 900 MHz and a resulting enhancement, up to 25%, over the values for the male voxel model, NORMAN. This latter result confirmed previously reported higher values in a female model. The effect of differences in anatomy was investigated by comparing values for 10-, 5- and 1-year-old phantoms rescaled to the ICRP reference values of height and mass which are the same for both sexes. The broad resonance in the NAOMI child values around 1 GHz is still a strong feature. A comparison has been made with ICNIRP guidelines. The ICNIRP occupational reference level provides a conservative estimate of the whole-body averaged SAR restriction. The linear scaling of the adult phantom using different factors in longitudinal and transverse directions, in order to match the ICRP stature and weight, does not exactly reproduce the anatomy of children. However, for public exposure the calculations with scaled child models indicate that the ICNIRP reference level may not provide a conservative estimate of the whole-body averaged SAR restriction, above 1.2 GHz for scaled 5- and 1-year-old female models, although any underestimate is by less than 20%.

  14. Localized-overlap approach to calculations of intermolecular interactions

    NASA Astrophysics Data System (ADS)

    Rob, Fazle

    Symmetry-adapted perturbation theory (SAPT) based on the density functional theory (DFT) description of the monomers [SAPT(DFT)] is one of the most robust tools for computing intermolecular interaction energies. Currently, one can use the SAPT(DFT) method to calculate interaction energies of dimers consisting of about a hundred atoms. To remove the methodological and technical limits and extend the size of the systems that can be calculated with the method, a novel approach has been proposed that redefines the electron densities and polarizabilities in a localized way. In the new method, accurate but computationally expensive quantum-chemical calculations are only applied for the regions where it is necessary and for other regions, where overlap effects of the wave functions are negligible, inexpensive asymptotic techniques are used. Unlike other hybrid methods, this new approach is mathematically rigorous. The main benefit of this method is that with the increasing size of the system the calculation scales linearly and, therefore, this approach will be denoted as local-overlap SAPT(DFT) or LSAPT(DFT). As a byproduct of developing LSAPT(DFT), some important problems concerning distributed molecular response, in particular, the unphysical charge-flow terms were eliminated. Additionally, to illustrate the capabilities of SAPT(DFT), a potential energy function has been developed for an energetic molecular crystal of 1,1-diamino-2,2-dinitroethylene (FOX-7), where an excellent agreement with the experimental data has been found.

  15. Assessment of Health-Cost Externalities of Air Pollution at the National Level using the EVA Model System

    NASA Astrophysics Data System (ADS)

    Brandt, Jørgen; Silver, Jeremy David; Heile Christensen, Jesper; Skou Andersen, Mikael; Geels, Camilla; Gross, Allan; Buus Hansen, Ayoe; Mantzius Hansen, Kaj; Brandt Hedegaard, Gitte; Ambelas Skjøth, Carsten

    2010-05-01

    Air pollution has significant negative impacts on human health and well-being, which entail substantial economic consequences. We have developed an integrated model system, EVA (External Valuation of Air pollution), to assess health-related economic externalities of air pollution resulting from specific emission sources/sectors. The EVA system was initially developed to assess externalities from power production, but in this study it is extended to evaluate costs at the national level. The EVA system integrates a regional-scale atmospheric chemistry transport model (DEHM), address-level population data, exposure-response functions and monetary values applicable for Danish/European conditions. Traditionally, systems that assess economic costs of health impacts from air pollution assume linear approximations in the source-receptor relationships. However, atmospheric chemistry is non-linear and therefore the uncertainty involved in the linear assumption can be large. The EVA system has been developed to take into account the non-linear processes by using a comprehensive, state-of-the-art chemical transport model when calculating how specific changes to emissions affect air pollution levels and the subsequent impacts on human health and cost. Furthermore, we present a new "tagging" method, developed to examine how specific emission sources influence air pollution levels without assuming linearity of the non-linear behaviour of atmospheric chemistry. This method is more precise than the traditional approach based on taking the difference between two concentration fields. Using the EVA system, we have estimated the total external costs from the main emission sectors in Denmark, representing the ten major SNAP codes. Finally, we assess the impacts and external costs of emissions from international ship traffic around Denmark, since there is a high volume of ship traffic in the region.

  16. A Monte Carlo calculation model of electronic portal imaging device for transit dosimetry through heterogeneous media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoon, Jihyung; Jung, Jae Won, E-mail: jungj@ecu.edu; Kim, Jong Oh

    2016-05-15

    Purpose: To develop and evaluate a fast Monte Carlo (MC) dose calculation model of electronic portal imaging device (EPID) based on its effective atomic number modeling in the XVMC code. Methods: A previously developed EPID model, based on the XVMC code by density scaling of EPID structures, was modified by additionally considering effective atomic number (Z_eff) of each structure and adopting a phase space file from the EGSnrc code. The model was tested under various homogeneous and heterogeneous phantoms and field sizes by comparing the calculations in the model with measurements in EPID. In order to better evaluate the model, the performance of the XVMC code was separately tested by comparing calculated dose to water with ion chamber (IC) array measurement in the plane of EPID. Results: In the EPID plane, calculated dose to water by the code showed agreement with IC measurements within 1.8%. The difference was averaged across the in-field regions of the acquired profiles for all field sizes and phantoms. The maximum point difference was 2.8%, affected by proximity of the maximum points to penumbra and MC noise. The EPID model showed agreement with measured EPID images within 1.3%. The maximum point difference was 1.9%. The difference dropped from the higher value of the code by employing the calibration that is dependent on field sizes and thicknesses for the conversion of calculated images to measured images. Thanks to the Z_eff correction, the EPID model showed a linear trend of the calibration factors unlike those of the density-only-scaled model. The phase space file from the EGSnrc code sharpened penumbra profiles significantly, improving agreement of calculated profiles with measured profiles. Conclusions: Demonstrating high accuracy, the EPID model with the associated calibration system may be used for in vivo dosimetry of radiation therapy. Through this study, a MC model of EPID has been developed, and its performance has been rigorously investigated for transit dosimetry.

  17. The cross-over to magnetostrophic convection in planetary dynamo systems

    PubMed Central

    King, E. M.

    2017-01-01

    Global scale magnetostrophic balance, in which Lorentz and Coriolis forces comprise the leading-order force balance, has long been thought to describe the natural state of planetary dynamo systems. This argument arises from consideration of the linear theory of rotating magnetoconvection. Here we test this long-held tenet by directly comparing linear predictions against dynamo modelling results. This comparison shows that dynamo modelling results are not typically in the global magnetostrophic state predicted by linear theory. Then, in order to estimate at what scale (if any) magnetostrophic balance will arise in nonlinear dynamo systems, we carry out a simple scaling analysis of the Elsasser number Λ, yielding an improved estimate of the ratio of Lorentz and Coriolis forces. From this, we deduce that there is a magnetostrophic cross-over length scale, L_X ≈ (Λ_o^2/Rm_o)D, where Λ_o is the linear (or traditional) Elsasser number, Rm_o is the system scale magnetic Reynolds number and D is the length scale of the system. On scales well above L_X, magnetostrophic convection dynamics should not be possible. Only on scales smaller than L_X should it be possible for the convective behaviours to follow the predictions for the magnetostrophic branch of convection. Because L_X is significantly smaller than the system scale in most dynamo models, their large-scale flows should be quasi-geostrophic, as is confirmed in many dynamo simulations. Estimating Λ_o ≃ 1 and Rm_o ≃ 10^3 in Earth’s core, the cross-over scale is approximately 1/1000 that of the system scale, suggesting that magnetostrophic convection dynamics exists in the core only on small scales below those that can be characterized by geomagnetic observations. PMID:28413338
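
    The quoted factor of roughly 1/1000 follows directly from the cross-over estimate with the stated core values:

```latex
L_X \;\approx\; \frac{\Lambda_o^{2}}{Rm_o}\,D
    \;\approx\; \frac{1^{2}}{10^{3}}\,D
    \;=\; 10^{-3}\,D .
```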

  18. The cross-over to magnetostrophic convection in planetary dynamo systems.

    PubMed

    Aurnou, J M; King, E M

    2017-03-01

    Global scale magnetostrophic balance, in which Lorentz and Coriolis forces comprise the leading-order force balance, has long been thought to describe the natural state of planetary dynamo systems. This argument arises from consideration of the linear theory of rotating magnetoconvection. Here we test this long-held tenet by directly comparing linear predictions against dynamo modelling results. This comparison shows that dynamo modelling results are not typically in the global magnetostrophic state predicted by linear theory. Then, in order to estimate at what scale (if any) magnetostrophic balance will arise in nonlinear dynamo systems, we carry out a simple scaling analysis of the Elsasser number Λ, yielding an improved estimate of the ratio of Lorentz and Coriolis forces. From this, we deduce that there is a magnetostrophic cross-over length scale, L_X ≈ (Λ_o^2/Rm_o)D, where Λ_o is the linear (or traditional) Elsasser number, Rm_o is the system scale magnetic Reynolds number and D is the length scale of the system. On scales well above L_X, magnetostrophic convection dynamics should not be possible. Only on scales smaller than L_X should it be possible for the convective behaviours to follow the predictions for the magnetostrophic branch of convection. Because L_X is significantly smaller than the system scale in most dynamo models, their large-scale flows should be quasi-geostrophic, as is confirmed in many dynamo simulations. Estimating Λ_o ≃ 1 and Rm_o ≃ 10^3 in Earth's core, the cross-over scale is approximately 1/1000 that of the system scale, suggesting that magnetostrophic convection dynamics exists in the core only on small scales below those that can be characterized by geomagnetic observations.

  19. First-principles study of electronic structure and Fermi surface in semimetallic YAs

    DOE PAGES

    Swatek, Przemysław Wojciech

    2018-03-23

    In the course of searching for new systems that exhibit nonsaturating and extremely large positive magnetoresistance, the electronic structure, Fermi surface, and de Haas-van Alphen characteristics of the semimetallic YAs compound were studied using the all-electron full-potential linearized augmented-plane wave (FP–LAPW) approach in the framework of the generalized gradient approximation (GGA). In the scalar-relativistic calculation, the cubic symmetry splits the fivefold degenerate Y-d orbital into low-energy threefold-degenerate and twofold-degenerate doublet states around the Fermi energy. Furthermore, one of them, together with the threefold degenerate character of the As-p orbital, renders YAs a semimetal with a topologically trivial band order and fairly low density of states at the Fermi level. Including spin–orbit (SO) coupling in the calculation leads to pronounced splitting of the state and shifts the bands in energy. Consequently, the four determined three-dimensional Fermi surface sheets of YAs consist of three concentric hole-like sheets and one ellipsoidal electron-like sheet centred at the X points. In full accordance with previous first-principles calculations for isostructural YSb and YBi, the calculated Fermi surface of YAs originates from fairly compensated multi-band electronic structures.

  20. Steady Boundary Layer Disturbances Created By Two-Dimensional Surface Ripples

    NASA Astrophysics Data System (ADS)

    Kuester, Matthew

    2017-11-01

    Multiple experiments have shown that surface roughness can enhance the growth of Tollmien-Schlichting (T-S) waves in a laminar boundary layer. One of the common observations from these studies is a "wall displacement" effect, where the boundary layer profile shape remains relatively unchanged, but the origin of the profile pushes away from the wall. The objective of this work is to calculate the steady velocity field (including this wall displacement) of a laminar boundary layer over a surface with small, 2D surface ripples. The velocity field is a combination of a Blasius boundary layer and multiple disturbance modes, calculated using the linearized Navier-Stokes equations. The method of multiple scales is used to include non-parallel boundary layer effects of O(R_δ^(-1)); the non-parallel terms are necessary, because a wall displacement is mathematically inconsistent with a parallel boundary layer assumption. This technique is used to calculate the steady velocity field over ripples of varying height and wavelength, including cases where a separation bubble forms on the leeward side of the ripple. In future work, the steady velocity field will be the input for stability calculations, which will quantify the growth of T-S waves over rough surfaces. The author would like to acknowledge the support of the Kevin T. Crofton Aerospace & Ocean Engineering Department at Virginia Tech.

  1. First-principles study of electronic structure and Fermi surface in semimetallic YAs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Swatek, Przemysław Wojciech

    In the course of searching for new systems that exhibit nonsaturating and extremely large positive magnetoresistance, the electronic structure, Fermi surface, and de Haas-van Alphen characteristics of the semimetallic YAs compound were studied using the all-electron full-potential linearized augmented-plane wave (FP–LAPW) approach in the framework of the generalized gradient approximation (GGA). In the scalar-relativistic calculation, the cubic symmetry splits the fivefold degenerate Y-d orbital into low-energy threefold-degenerate and twofold-degenerate doublet states around the Fermi energy. Furthermore, one of them, together with the threefold degenerate character of the As-p orbital, renders YAs a semimetal with a topologically trivial band order and fairly low density of states at the Fermi level. Including spin–orbit (SO) coupling in the calculation leads to pronounced splitting of the state and shifts the bands in energy. Consequently, the four determined three-dimensional Fermi surface sheets of YAs consist of three concentric hole-like sheets and one ellipsoidal electron-like sheet centred at the X points. In full accordance with previous first-principles calculations for isostructural YSb and YBi, the calculated Fermi surface of YAs originates from fairly compensated multi-band electronic structures.

  2. Image quality of mean temporal arterial and mean temporal portal venous phase images calculated from low dose dynamic volume perfusion CT datasets in patients with hepatocellular carcinoma and pancreatic cancer.

    PubMed

    Wang, X; Henzler, T; Gawlitza, J; Diehl, S; Wilhelm, T; Schoenberg, S O; Jin, Z Y; Xue, H D; Smakic, A

    2016-11-01

    Dynamic volume perfusion CT (dVPCT) provides valuable information on tissue perfusion in patients with hepatocellular carcinoma (HCC) and pancreatic cancer. However, currently dVPCT is often performed in addition to conventional CT acquisitions due to the limited morphologic image quality of dose optimized dVPCT protocols. The aim of this study was to prospectively compare objective and subjective image quality, lesion detectability and radiation dose between mean temporal arterial (mTA) and mean temporal portal venous (mTPV) images calculated from low dose dynamic volume perfusion CT (dVPCT) datasets with linearly blended 120-kVp arterial and portal venous datasets in patients with HCC and pancreatic cancer. All patients gave written informed consent for this institutional review board-approved HIPAA compliant study. 27 consecutive patients (18 men, 9 women, mean age 69.1 years ± 9.4) with histologically proven HCC or suspected pancreatic cancer were prospectively enrolled. The study CT protocol included a dVPCT protocol performed with 70 or 80 kVp tube voltage (18 spiral acquisitions, 71.2 s total acquisition time) and standard dual-energy (90/150 kVpSn) arterial and portal venous acquisition performed 25 min after the dVPCT. The mTA and mTPV images were manually reconstructed from the 3 to 5 best visually selected single arterial and 3 to 5 best single portal venous phases of the dVPCT dataset. The linearly blended 120-kVp images were calculated from dual-energy CT (DECT) raw data. Image noise, SNR, and CNR of the liver, abdominal aorta (AA) and main portal vein (PV) were compared between the mTA/mTPV and the linearly blended 120-kVp dual-energy arterial and portal venous datasets, respectively. Subjective image quality was evaluated by two radiologists regarding subjective image noise, sharpness and overall diagnostic image quality using a 5-point Likert Scale. In addition, liver lesion detectability was performed for each liver segment by the two radiologists using the linearly blended 120-kVp arterial and portal venous datasets as the reference standard. Image noise, SNR and CNR values of the mTA and mTPV were significantly higher when compared to the corresponding linearly blended arterial and portal venous 120-kVp datasets (all p<0.001) except for image noise within the PV in the portal venous phases (p=0.136). Image quality of mTA and mTPV was rated significantly better when compared to the linearly blended 120-kVp arterial and portal venous datasets. Both readers were able to detect all liver lesions found on the linearly blended 120-kVp arterial and portal venous datasets using the mTA and mTPV datasets. The effective radiation dose of the dVPCT was 27.6 mSv for the 80 kVp protocol and 14.5 mSv for the 70 kVp protocol. The mean effective radiation dose for the linearly blended 120-kVp arterial and portal venous CT protocol of the upper abdomen was 5.60 mSv ± 1.48 mSv. Our preliminary data suggest that subjective and objective image quality of mTA and mTPV datasets calculated from low-kVp dVPCT datasets is non-inferior when compared to linearly blended 120-kVp arterial and portal venous acquisitions in patients with HCC and pancreatic cancer. Thus, dVPCT could be used as a stand-alone imaging technique without additionally performed conventional arterial and portal venous CT acquisitions. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  3. Effect of Body Weight on Echocardiographic Measurements in 19,866 Pure-Bred Cats with or without Heart Disease.

    PubMed

    Häggström, J; Andersson, Å O; Falk, T; Nilsfors, L; Olsson, U; Kresken, J G; Höglund, K; Rishniw, M; Tidholm, A; Ljungvall, I

    2016-09-01

    Echocardiography is a cost-efficient method to screen cats for presence of heart disease. Current reference intervals for feline cardiac dimensions do not account for body weight (BW). To study the effect of BW on heart rate (HR), aortic (Ao), left atrial (LA) and ventricular (LV) linear dimensions in cats, and to calculate 95% prediction intervals for these variables in normal adult pure-bred cats. 19 866 pure-bred cats. Clinical data from heart screens conducted between 1999 and 2014 were included. Associations between BW, HR, and cardiac dimensions were assessed using univariate linear models and allometric scaling, including all cats, and only those considered normal, respectively. Prediction intervals were created using 95% confidence intervals obtained from regression curves. Associations between BW and echocardiographic dimensions were best described by allometric scaling, and all dimensions increased with increasing BW (all P<0.001). Strongest associations were found between BW and Ao, LV end diastolic, LA dimensions, and thickness of LV free wall. Weak linear associations were found between BW and HR and left atrial to aortic ratio (LA:Ao), for which HR decreased with increasing BW (P<0.001), and LA:Ao increased with increasing BW (P<0.001). Marginal differences were found for prediction formulas and prediction intervals when the dataset included all cats versus only those considered normal. BW had a clinically relevant effect on echocardiographic dimensions in cats, and BW based 95% prediction intervals may help in screening cats for heart disease. Copyright © 2016 The Authors. Journal of Veterinary Internal Medicine published by Wiley Periodicals, Inc. on behalf of the American College of Veterinary Internal Medicine.

  4. Information content of MOPITT CO profile retrievals: Temporal and geographical variability

    NASA Astrophysics Data System (ADS)

    Deeter, M. N.; Edwards, D. P.; Gille, J. C.; Worden, H. M.

    2015-12-01

    Satellite measurements of tropospheric carbon monoxide (CO) enable a wide array of applications including studies of air quality and pollution transport. The MOPITT (Measurements of Pollution in the Troposphere) instrument on the Earth Observing System Terra platform has been measuring CO concentrations globally since March 2000. As indicated by the Degrees of Freedom for Signal (DFS), the standard metric for trace-gas retrieval information content, MOPITT retrieval performance varies over a wide range. We show that both instrumental and geophysical effects yield significant geographical and temporal variability in MOPITT DFS values. Instrumental radiance uncertainties, which describe random errors (or "noise") in the calibrated radiances, vary over long time scales (e.g., months to years) and vary between the four detector elements of MOPITT's linear detector array. MOPITT retrieval performance depends on several factors including thermal contrast, fine-scale variability of surface properties, and CO loading. The relative importance of these various effects is highly variable, as demonstrated by analyses of monthly mean DFS values for the United States and the Amazon Basin. An understanding of the geographical and temporal variability of MOPITT retrieval performance is potentially valuable to data users seeking to limit the influence of the a priori through data filtering. To illustrate, it is demonstrated that calculated regional-average CO mixing ratios may be improved by excluding observations from a subset of pixels in MOPITT's linear detector array.

  5. A linear and nonlinear study of Mira

    NASA Astrophysics Data System (ADS)

    Cox, A. N.; Ostlie, D. A.

    1993-12-01

    Both linear and nonlinear calculations of the 331 day, long-period variable star Mira have been undertaken to see what radial pulsation mode is naturally selected. Models are similar to those considered in the linear nonadiabatic stellar pulsation study of Ostlie and Cox (1986). Models are considered with masses near one solar mass, luminosities between 4000 and 5000 solar luminosities, and effective temperatures of approximately 3000 K. These models have fundamental mode periods that closely match the pulsation period of Mira. The equation of state for the stellar material is given by the Stellingwerf (1975ab) procedure, and the opacity is obtained from a fit by Cahn that matches the low temperature molecular absorption data for the population I Ross-Aller 1 mixture calculated from the Los Alamos Astrophysical Opacity Library. For the linear study, the Cox, Brownlee, and Eilers (1966) approximation is used for the linear theory variation of the convection luminosity. For the nonlinear work, the method described by Ostlie (1990) and Cox (1990) is followed. Results showing internal details of the behavior of the radial fundamental and first overtone modes in linear theory are presented. Preliminary radial fundamental mode nonlinear calculations are discussed. The very tentative conclusion is that neither the fundamental nor the first overtone mode is excluded from being the actual observed one.

  6. Accurate electronic and chemical properties of 3d transition metal oxides using a calculated linear response U and a DFT + U(V) method.

    PubMed

    Xu, Zhongnan; Joshi, Yogesh V; Raman, Sumathy; Kitchin, John R

    2015-04-14

    We validate the usage of the calculated, linear response Hubbard U for evaluating accurate electronic and chemical properties of bulk 3d transition metal oxides. We find calculated values of U lead to improved band gaps. For the evaluation of accurate reaction energies, we first identify and eliminate contributions to the reaction energies of bulk systems due only to changes in U and construct a thermodynamic cycle that references the total energies of unique U systems to a common point using a DFT + U(V) method, which we recast from a recently introduced DFT + U(R) method for molecular systems. We then introduce a semi-empirical method based on weighted DFT/DFT + U cohesive energies to calculate bulk oxidation energies of transition metal oxides using density functional theory and linear response calculated U values. We validate this method by calculating 14 reactions energies involving V, Cr, Mn, Fe, and Co oxides. We find up to an 85% reduction of the mean average error (MAE) compared to energies calculated with the Perdew-Burke-Ernzerhof functional. When our method is compared with DFT + U with empirically derived U values and the HSE06 hybrid functional, we find up to 65% and 39% reductions in the MAE, respectively.

  7. The small-scale dynamo: breaking universality at high Mach numbers

    NASA Astrophysics Data System (ADS)

    Schleicher, Dominik R. G.; Schober, Jennifer; Federrath, Christoph; Bovino, Stefano; Schmidt, Wolfram

    2013-02-01

    The small-scale dynamo plays a substantial role in magnetizing the Universe under a large range of conditions, including subsonic turbulence at low Mach numbers, highly supersonic turbulence at high Mach numbers and a large range of magnetic Prandtl numbers Pm, i.e. the ratio of kinetic viscosity to magnetic resistivity. Low Mach numbers may, in particular, lead to the well-known, incompressible Kolmogorov turbulence, while for high Mach numbers, we are in the highly compressible regime, thus close to Burgers turbulence. In this paper, we explore whether in this large range of conditions, universal behavior can be expected. Our starting point is previous investigations in the kinematic regime. Here, analytic studies based on the Kazantsev model have shown that the behavior of the dynamo depends significantly on Pm and the type of turbulence, and numerical simulations indicate a strong dependence of the growth rate on the Mach number of the flow. Once the magnetic field saturates on the current amplification scale, backreactions occur and the growth is shifted to the next-larger scale. We employ a Fokker-Planck model to calculate the magnetic field amplification during the nonlinear regime, and find a resulting power-law growth that depends on the type of turbulence invoked. For Kolmogorov turbulence, we confirm previous results suggesting a linear growth of magnetic energy. For more general turbulent spectra, where the turbulent velocity scales with the characteristic length scale as u_ℓ ∝ ℓ^ϑ, we find that the magnetic energy grows as (t/T_ed)^(2ϑ/(1-ϑ)), with t being the time coordinate and T_ed the eddy-turnover time on the forcing scale of turbulence. For Burgers turbulence, ϑ = 1/2, quadratic rather than linear growth may thus be expected, as the spectral energy increases from smaller to larger scales more rapidly. The quadratic growth is due to the initially smaller growth rates obtained for Burgers turbulence. Similarly, we show that the characteristic length scale of the magnetic field grows as t^(1/(1-ϑ)) in the general case, implying t^(3/2) for Kolmogorov and t^2 for Burgers turbulence. Overall, we find that high Mach numbers, as typically associated with steep spectra of turbulence, may break the previously postulated universality, and introduce a dependence on the environment also in the nonlinear regime.
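
    The two limiting cases quoted in the abstract follow directly from the stated general exponents:

```latex
E_B(t) \propto \left(\frac{t}{T_{\mathrm{ed}}}\right)^{2\vartheta/(1-\vartheta)},
\qquad
\ell_B(t) \propto t^{1/(1-\vartheta)};
\qquad
\vartheta = \tfrac{1}{3}\ (\text{Kolmogorov}) \;\Rightarrow\; E_B \propto t,\ \ \ell_B \propto t^{3/2},
\qquad
\vartheta = \tfrac{1}{2}\ (\text{Burgers}) \;\Rightarrow\; E_B \propto t^{2},\ \ \ell_B \propto t^{2}.
```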

  8. Molecular structure, vibrational spectroscopic (FT-IR, FT-Raman), UV-vis spectra, first order hyperpolarizability, NBO analysis, HOMO and LUMO analysis, thermodynamic properties of benzophenone 2,4-dicarboxylic acid by ab initio HF and density functional method

    NASA Astrophysics Data System (ADS)

    Chaitanya, K.

    2012-02-01

    The FT-IR (4000-450 cm^-1) and FT-Raman spectra (3500-100 cm^-1) of benzophenone 2,4-dicarboxylic acid (2,4-BDA) have been recorded in the condensed state. Density functional theory calculations with the B3LYP/6-31G(d,p) basis set have been used to determine ground state molecular geometries (bond lengths and bond angles), harmonic vibrational frequencies, infrared intensities, Raman activities and bonding features of the title compound. The assignments of the vibrational spectra have been carried out with the help of normal co-ordinate analysis (NCA) following the scaled quantum mechanical force field (SQMFF) methodology. The first-order hyperpolarizability (β0) and related properties (β, α0 and Δα) of 2,4-BDA are calculated using the HF/6-31G(d,p) method within the finite-field approach. The stability of the molecule has been analyzed using NBO analysis. The calculated first hyperpolarizability shows that the molecule is an attractive candidate for future applications in non-linear optics. The calculated HOMO and LUMO energies show that charge transfer occurs within the molecule. Mulliken population analysis of the atomic charges is also presented. On the basis of the vibrational analysis, the thermodynamic properties of the title compound at different temperatures have been calculated. Finally, the UV-vis spectra and electronic absorption properties were explained and illustrated from the frontier molecular orbitals.

  9. Progressive Mid-latitude Afforestation: Local and Remote Climate Impacts in the Framework of Two Coupled Earth System Models

    NASA Astrophysics Data System (ADS)

    Lague, Marysa

    Vegetation influences the atmosphere in complex and non-linear ways, such that large-scale changes in vegetation cover can drive changes in climate on both local and global scales. Large-scale land surface changes have been shown to introduce excess energy to one hemisphere, causing a shift in atmospheric circulation on a global scale. However, past work has not quantified how the climate response scales with the area of vegetation. Here, we systematically evaluate the response of climate to linearly increasing the area of forest cover over the northern mid-latitudes. We show that the magnitude of afforestation of the northern mid-latitudes determines the climate response in a non-linear fashion, and identify a threshold in vegetation-induced cloud feedbacks - a concept not previously addressed by large-scale vegetation manipulation experiments. Small increases in tree cover drive compensating cloud feedbacks, while latent heat fluxes reach a threshold after sufficiently large increases in tree cover, causing the troposphere to warm and dry, subsequently reducing cloud cover. Increased absorption of solar radiation at the surface is driven by both surface albedo changes and cloud feedbacks. We identify how vegetation-induced changes in cloud cover further feedback on changes in the global energy balance. We also show how atmospheric cross-equatorial energy transport changes as the area of afforestation is incrementally increased (a relationship which has not previously been demonstrated). This work demonstrates that while some climate effects (such as energy transport) of large scale mid-latitude afforestation scale roughly linearly across a wide range of afforestation areas, others (such as the local partitioning of the surface energy budget) are non-linear, and sensitive to the particular magnitude of mid-latitude forcing. Our results highlight the importance of considering both local and remote climate responses to large-scale vegetation change, and explore the scaling relationship between changes in vegetation cover and the resulting climate impacts.

  10. Brownian dynamics simulations of a flexible polymer chain which includes continuous resistance and multibody hydrodynamic interactions

    NASA Astrophysics Data System (ADS)

    Butler, Jason E.; Shaqfeh, Eric S. G.

    2005-01-01

    Using methods adapted from the simulation of suspension dynamics, we have developed a Brownian dynamics algorithm with multibody hydrodynamic interactions for simulating the dynamics of polymer molecules. The polymer molecule is modeled as a chain composed of a series of inextensible, rigid rods with constraints at each joint to ensure continuity of the chain. The linear and rotational velocities of each segment of the polymer chain are described by the slender-body theory of Batchelor [J. Fluid Mech. 44, 419 (1970)]. To include hydrodynamic interactions between the segments of the chain, the line distribution of forces on each segment is approximated by making a Legendre polynomial expansion of the disturbance velocity on the segment, where the first two terms of the expansion are retained in the calculation. Thus, the resulting linear force distribution is specified by a center of mass force, couple, and stresslet on each segment. This method for calculating the hydrodynamic interactions has been successfully used to simulate the dynamics of noncolloidal suspensions of rigid fibers [O. G. Harlen, R. R. Sundararajakumar, and D. L. Koch, J. Fluid Mech. 388, 355 (1999); J. E. Butler and E. S. G. Shaqfeh, J. Fluid Mech. 468, 204 (2002)]. The longest relaxation time and center of mass diffusivity are among the quantities calculated with the simulation technique. Comparisons are made for different levels of approximation of the hydrodynamic interactions, including multibody interactions, two-body interactions, and the "freely draining" case with no interactions. For the short polymer chains studied in this paper, the results indicate a difference in the apparent scaling of diffusivity with polymer length for the multibody versus two-body level of approximation for the hydrodynamic interactions.

  11. Brownian dynamics simulations of a flexible polymer chain which includes continuous resistance and multibody hydrodynamic interactions.

    PubMed

    Butler, Jason E; Shaqfeh, Eric S G

    2005-01-01

    Using methods adapted from the simulation of suspension dynamics, we have developed a Brownian dynamics algorithm with multibody hydrodynamic interactions for simulating the dynamics of polymer molecules. The polymer molecule is modeled as a chain composed of a series of inextensible, rigid rods with constraints at each joint to ensure continuity of the chain. The linear and rotational velocities of each segment of the polymer chain are described by the slender-body theory of Batchelor [J. Fluid Mech. 44, 419 (1970)]. To include hydrodynamic interactions between the segments of the chain, the line distribution of forces on each segment is approximated by making a Legendre polynomial expansion of the disturbance velocity on the segment, where the first two terms of the expansion are retained in the calculation. Thus, the resulting linear force distribution is specified by a center of mass force, couple, and stresslet on each segment. This method for calculating the hydrodynamic interactions has been successfully used to simulate the dynamics of noncolloidal suspensions of rigid fibers [O. G. Harlen, R. R. Sundararajakumar, and D. L. Koch, J. Fluid Mech. 388, 355 (1999); J. E. Butler and E. S. G. Shaqfeh, J. Fluid Mech. 468, 204 (2002)]. The longest relaxation time and center of mass diffusivity are among the quantities calculated with the simulation technique. Comparisons are made for different levels of approximation of the hydrodynamic interactions, including multibody interactions, two-body interactions, and the "freely draining" case with no interactions. For the short polymer chains studied in this paper, the results indicate a difference in the apparent scaling of diffusivity with polymer length for the multibody versus two-body level of approximation for the hydrodynamic interactions. (c) 2005 American Institute of Physics.

  12. QCD evolution of (un)polarized gluon TMDPDFs and the Higgs q_T-distribution

    NASA Astrophysics Data System (ADS)

    Echevarria, Miguel G.; Kasemets, Tomas; Mulders, Piet J.; Pisano, Cristian

    2015-07-01

    We provide the proper definition of all the leading-twist (un)polarized gluon transverse momentum dependent parton distribution functions (TMDPDFs), by considering the Higgs boson transverse momentum distribution in hadron-hadron collisions and deriving the factorization theorem in terms of them. We show that the evolution of all the (un)polarized gluon TMDPDFs is driven by a universal evolution kernel, which can be resummed up to next-to-next-to-leading-logarithmic accuracy. Considering the proper definition of gluon TMDPDFs, we perform an explicit next-to-leading-order calculation of the unpolarized (f_1^g), linearly polarized (h_1^{⊥g}) and helicity (g_{1L}^g) gluon TMDPDFs, and show that, as expected, they are free from rapidity divergences. As a byproduct, we obtain the Wilson coefficients of the refactorization of these TMDPDFs at large transverse momentum. In particular, the coefficient of g_{1L}^g, which has never been calculated before, constitutes a new and necessary ingredient for a reliable phenomenological extraction of this quantity, for instance at RHIC or the future AFTER@LHC or Electron-Ion Collider. The coefficients of f_1^g and h_1^{⊥g} have never been calculated in the present formalism, although they could be obtained by carefully collecting and recasting previous results in the new TMD formalism. We apply these results to analyze the contribution of linearly polarized gluons at different scales, relevant, for instance, for the inclusive production of the Higgs boson and the C-even pseudoscalar bottomonium state η_b. Applying our resummation scheme we finally provide predictions for the Higgs boson q_T-distribution at the LHC.

  13. The role of shock induced trailing-edge separation in limit cycle oscillations

    NASA Technical Reports Server (NTRS)

    Cunningham, Atlee M., Jr.

    1989-01-01

    The potential role of shock-induced trailing-edge separation (SITES) in limit cycle oscillations (LCO) was established. It was shown that the flip-flop characteristics of transition to and from SITES, as well as its hysteresis, could couple with wing modes having torsional motion and low damping. This connection led to the formulation of a very simple nonlinear math model using the linear equations of motion with a nonlinear step forcing function with hysteresis. A finite-difference solution in time was developed, and calculations for the F-111 TACT were used to determine the step forcing function due to SITES transition. Since no data were available for the hysteresis, a parameter study was conducted allowing the hysteresis effect to vary. Very small hysteresis effects, which were within expected bounds, were required to obtain reasonable response levels that essentially agreed with flight test results. Also in agreement with wind tunnel tests, LCO calculations for the 1/6 scale F-111 model showed that the model should not have experienced LCO.
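
    A minimal sketch of the kind of model described here, assuming a single lightly damped torsion mode driven by a step force that switches on and off with hysteresis. All parameter values are invented for illustration; they are not the F-111 TACT values.

```python
import numpy as np

# Single-mode oscillator (unit modal mass): x'' + 2*zeta*wn*x' + wn**2*x = f(x, state)
wn, zeta = 2 * np.pi * 5.0, 0.01        # 5 Hz torsion mode, light damping
x_on, x_off = 0.010, 0.006              # SITES switches on/off with hysteresis
f_step = 4.0                            # step forcing amplitude (arbitrary units)

x, v, sites = 0.012, 0.0, False
dt, xs = 1e-4, []
for _ in range(200_000):
    # Flip-flop with hysteresis: transition to SITES above x_on, back below x_off.
    if not sites and x > x_on:
        sites = True
    elif sites and x < x_off:
        sites = False
    f = -f_step if sites else 0.0
    a = f - 2 * zeta * wn * v - wn**2 * x
    v += a * dt                          # semi-implicit Euler step
    x += v * dt
    xs.append(x)

# The hysteresis feeds energy into the mode each cycle, which the damping
# caps at a finite amplitude: a limit cycle oscillation.
print("limit-cycle amplitude ~", 0.5 * (max(xs[-50_000:]) - min(xs[-50_000:])))
```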

  14. Systematic sparse matrix error control for linear scaling electronic structure calculations.

    PubMed

    Rubensson, Emanuel H; Sałek, Paweł

    2005-11-30

    Efficient truncation criteria used in multiatom blocked sparse matrix operations for ab initio calculations are proposed. As system size increases, so does the need to stay on top of errors and still achieve high performance. A variant of a blocked sparse matrix algebra to achieve strict error control with good performance is proposed. The presented idea is that the condition to drop a certain submatrix should depend not only on the magnitude of that particular submatrix, but also on which other submatrices that are dropped. The decision to remove a certain submatrix is based on the contribution the removal would cause to the error in the chosen norm. We study the effect of an accumulated truncation error in iterative algorithms like trace correcting density matrix purification. One way to reduce the initial exponential growth of this error is presented. The presented error control for a sparse blocked matrix toolbox allows for achieving optimal performance by performing only necessary operations needed to maintain the requested level of accuracy. Copyright 2005 Wiley Periodicals, Inc.
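
    A minimal sketch of the idea described here, assuming blocks are dropped smallest-first while the Frobenius norm of everything dropped so far stays below the requested threshold; this illustrates the accumulated-error criterion, not the authors' implementation in every detail.

```python
import numpy as np

def truncate_blocks(blocks, tau):
    """Drop submatrix blocks so that the Frobenius norm of the total
    discarded part stays below tau.

    blocks : dict mapping (row_block, col_block) -> ndarray
    Returns a new dict containing only the kept blocks.
    """
    # Sort candidate blocks by their individual Frobenius norms, smallest first.
    order = sorted(blocks, key=lambda k: np.linalg.norm(blocks[k]))
    dropped_sq, keep = 0.0, dict(blocks)
    for key in order:
        nrm_sq = np.linalg.norm(blocks[key]) ** 2
        # The decision depends on what has already been dropped,
        # not only on the magnitude of this particular block.
        if dropped_sq + nrm_sq <= tau**2:
            dropped_sq += nrm_sq
            del keep[key]
    return keep

# Tiny example: 2x2 blocks of a 3x3 block matrix with rapidly decaying off-diagonals.
rng = np.random.default_rng(0)
blocks = {(i, j): 10.0**(-abs(i - j)) * rng.standard_normal((2, 2))
          for i in range(3) for j in range(3)}
print(len(truncate_blocks(blocks, tau=0.05)), "blocks kept out of", len(blocks))
```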

  15. Adaptive local basis set for Kohn–Sham density functional theory in a discontinuous Galerkin framework II: Force, vibration, and molecular dynamics calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Gaigong; Lin, Lin, E-mail: linlin@math.berkeley.edu; Computational Research Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720

    Recently, we have proposed the adaptive local basis set for electronic structure calculations based on Kohn–Sham density functional theory in a pseudopotential framework. The adaptive local basis set is efficient and systematically improvable for total energy calculations. In this paper, we present the calculation of atomic forces, which can be used for a range of applications such as geometry optimization and molecular dynamics simulation. We demonstrate that, under mild assumptions, the computation of atomic forces can scale nearly linearly with the number of atoms in the system using the adaptive local basis set. We quantify the accuracy of the Hellmann–Feynmanmore » forces for a range of physical systems, benchmarked against converged planewave calculations, and find that the adaptive local basis set is efficient for both force and energy calculations, requiring at most a few tens of basis functions per atom to attain accuracies required in practice. Since the adaptive local basis set has implicit dependence on atomic positions, Pulay forces are in general nonzero. However, we find that the Pulay force is numerically small and systematically decreasing with increasing basis completeness, so that the Hellmann–Feynman force is sufficient for basis sizes of a few tens of basis functions per atom. We verify the accuracy of the computed forces in static calculations of quasi-1D and 3D disordered Si systems, vibration calculation of a quasi-1D Si system, and molecular dynamics calculations of H{sub 2} and liquid Al–Si alloy systems, where we show systematic convergence to benchmark planewave results and results from the literature.« less

  16. Adaptive local basis set for Kohn–Sham density functional theory in a discontinuous Galerkin framework II: Force, vibration, and molecular dynamics calculations

    DOE PAGES

    Zhang, Gaigong; Lin, Lin; Hu, Wei; ...

    2017-01-27

    Recently, we have proposed the adaptive local basis set for electronic structure calculations based on Kohn–Sham density functional theory in a pseudopotential framework. The adaptive local basis set is efficient and systematically improvable for total energy calculations. In this paper, we present the calculation of atomic forces, which can be used for a range of applications such as geometry optimization and molecular dynamics simulation. We demonstrate that, under mild assumptions, the computation of atomic forces can scale nearly linearly with the number of atoms in the system using the adaptive local basis set. We quantify the accuracy of the Hellmann–Feynmanmore » forces for a range of physical systems, benchmarked against converged planewave calculations, and find that the adaptive local basis set is efficient for both force and energy calculations, requiring at most a few tens of basis functions per atom to attain accuracies required in practice. Sin ce the adaptive local basis set has implicit dependence on atomic positions, Pulay forces are in general nonzero. However, we find that the Pulay force is numerically small and systematically decreasing with increasing basis completeness, so that the Hellmann–Feynman force is sufficient for basis sizes of a few tens of basis functions per atom. We verify the accuracy of the computed forces in static calculations of quasi-1D and 3D disordered Si systems, vibration calculation of a quasi-1D Si system, and molecular dynamics calculations of H 2 and liquid Al–Si alloy systems, where we show systematic convergence to benchmark planewave results and results from the literature.« less

  17. Adaptive local basis set for Kohn–Sham density functional theory in a discontinuous Galerkin framework II: Force, vibration, and molecular dynamics calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Gaigong; Lin, Lin; Hu, Wei

    Recently, we have proposed the adaptive local basis set for electronic structure calculations based on Kohn–Sham density functional theory in a pseudopotential framework. The adaptive local basis set is efficient and systematically improvable for total energy calculations. In this paper, we present the calculation of atomic forces, which can be used for a range of applications such as geometry optimization and molecular dynamics simulation. We demonstrate that, under mild assumptions, the computation of atomic forces can scale nearly linearly with the number of atoms in the system using the adaptive local basis set. We quantify the accuracy of the Hellmann–Feynmanmore » forces for a range of physical systems, benchmarked against converged planewave calculations, and find that the adaptive local basis set is efficient for both force and energy calculations, requiring at most a few tens of basis functions per atom to attain accuracies required in practice. Sin ce the adaptive local basis set has implicit dependence on atomic positions, Pulay forces are in general nonzero. However, we find that the Pulay force is numerically small and systematically decreasing with increasing basis completeness, so that the Hellmann–Feynman force is sufficient for basis sizes of a few tens of basis functions per atom. We verify the accuracy of the computed forces in static calculations of quasi-1D and 3D disordered Si systems, vibration calculation of a quasi-1D Si system, and molecular dynamics calculations of H 2 and liquid Al–Si alloy systems, where we show systematic convergence to benchmark planewave results and results from the literature.« less

  18. Adaptive local basis set for Kohn-Sham density functional theory in a discontinuous Galerkin framework II: Force, vibration, and molecular dynamics calculations

    NASA Astrophysics Data System (ADS)

    Zhang, Gaigong; Lin, Lin; Hu, Wei; Yang, Chao; Pask, John E.

    2017-04-01

    Recently, we have proposed the adaptive local basis set for electronic structure calculations based on Kohn-Sham density functional theory in a pseudopotential framework. The adaptive local basis set is efficient and systematically improvable for total energy calculations. In this paper, we present the calculation of atomic forces, which can be used for a range of applications such as geometry optimization and molecular dynamics simulation. We demonstrate that, under mild assumptions, the computation of atomic forces can scale nearly linearly with the number of atoms in the system using the adaptive local basis set. We quantify the accuracy of the Hellmann-Feynman forces for a range of physical systems, benchmarked against converged planewave calculations, and find that the adaptive local basis set is efficient for both force and energy calculations, requiring at most a few tens of basis functions per atom to attain accuracies required in practice. Since the adaptive local basis set has implicit dependence on atomic positions, Pulay forces are in general nonzero. However, we find that the Pulay force is numerically small and systematically decreasing with increasing basis completeness, so that the Hellmann-Feynman force is sufficient for basis sizes of a few tens of basis functions per atom. We verify the accuracy of the computed forces in static calculations of quasi-1D and 3D disordered Si systems, vibration calculation of a quasi-1D Si system, and molecular dynamics calculations of H2 and liquid Al-Si alloy systems, where we show systematic convergence to benchmark planewave results and results from the literature.
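
    The distinction drawn above between Hellmann-Feynman and Pulay contributions can be summarized with the standard force decomposition for a basis that depends on atomic positions; the expression below is a generic textbook statement, not the paper's specific working equations.

      \mathbf{F}_I = -\frac{\mathrm{d}E}{\mathrm{d}\mathbf{R}_I}
                   = \underbrace{-\Big\langle \Psi \Big| \frac{\partial \hat{H}}{\partial \mathbf{R}_I} \Big| \Psi \Big\rangle}_{\text{Hellmann-Feynman}}
                     \; \underbrace{-\sum_{\mu} \frac{\partial E}{\partial \varphi_\mu} \cdot \frac{\partial \varphi_\mu}{\partial \mathbf{R}_I}}_{\text{Pulay}},

    where the Pulay term vanishes as the basis {φ_μ} approaches completeness, which is why the Hellmann-Feynman force alone becomes sufficient at a few tens of adaptive local basis functions per atom.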

  19. Linear scaling relationships and volcano plots in homogeneous catalysis – revisiting the Suzuki reaction

    PubMed Central

    Busch, Michael; Wodrich, Matthew D.

    2015-01-01

    Linear free energy scaling relationships and volcano plots are common tools used to identify potential heterogeneous catalysts for myriad applications. Despite the striking simplicity and predictive power of volcano plots, they remain unknown in homogeneous catalysis. Here, we construct volcano plots to analyze a prototypical reaction from homogeneous catalysis, the Suzuki cross-coupling of olefins. Volcano plots succeed both in discriminating amongst different catalysts and reproducing experimentally known trends, which serves as validation of the model for this proof-of-principle example. These findings indicate that the combination of linear scaling relationships and volcano plots could serve as a valuable methodology for identifying homogeneous catalysts possessing a desired activity through a priori computational screening. PMID:28757966
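
    As an illustration of how a volcano plot emerges from linear scaling relationships, the sketch below combines two hypothetical linear relations between a descriptor energy and the free energies of elementary steps; the slopes, intercepts and step names are invented for illustration and are not taken from this study.

      import numpy as np

      # Hypothetical linear scaling relations: the free energy of each elementary
      # step written as a linear function of a single descriptor (for example, an
      # intermediate binding energy). Coefficients are illustrative only.
      descriptor = np.linspace(-2.0, 2.0, 201)      # descriptor energy, eV
      dG_step_a = 0.8 * descriptor + 0.3            # hypothetical step A
      dG_step_b = -0.6 * descriptor + 0.5           # hypothetical step B

      # The potential-determining step at each descriptor value is the least
      # favourable one; plotting its negative yields the characteristic volcano.
      volcano_height = -np.maximum(dG_step_a, dG_step_b)
      best = descriptor[np.argmax(volcano_height)]
      print(f"illustrative optimum of the descriptor: {best:.2f} eV")

    Catalysts whose descriptor values fall near the top of such a curve are the promising candidates flagged by the a priori screening strategy described above.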

  20. Evaluation of confidence intervals for a steady-state leaky aquifer model

    USGS Publications Warehouse

    Christensen, S.; Cooley, R.L.

    1999-01-01

    The fact that dependent variables of groundwater models are generally nonlinear functions of model parameters is shown to be a potentially significant factor in calculating accurate confidence intervals for both model parameters and functions of the parameters, such as the values of dependent variables calculated by the model. The Lagrangian method of Vecchia and Cooley [Vecchia, A.V. and Cooley, R.L., Water Resources Research, 1987, 23(7), 1237-1250] was used to calculate nonlinear Scheffe-type confidence intervals for the parameters and the simulated heads of a steady-state groundwater flow model covering 450 km2 of a leaky aquifer. The nonlinear confidence intervals are compared to corresponding linear intervals. As suggested by the significant nonlinearity of the regression model, linear confidence intervals are often not accurate. The commonly made assumption that widths of linear confidence intervals always underestimate the actual (nonlinear) widths was not correct. Results show that nonlinear effects can cause the nonlinear intervals to be asymmetric and either larger or smaller than the linear approximations. Prior information on transmissivities helps reduce the size of the confidence intervals, with the most notable effects occurring for the parameters on which there is prior information and for head values in parameter zones for which there is prior information on the parameters.
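
    For reference, a commonly used linearized Scheffe-type interval for a function g(θ) of the regression parameters has the form below; this is the generic expression, given here as background rather than the exact formulation of the cited Lagrangian method.

      g(\hat{\theta}) \;\pm\; \sqrt{p\, F_{\alpha}(p,\, n-p)}\; s\; \sqrt{\nabla g(\hat{\theta})^{\mathsf{T}} \big(J^{\mathsf{T}} \omega J\big)^{-1} \nabla g(\hat{\theta})},

    where J is the sensitivity (Jacobian) matrix of the model at the estimated parameters, ω the weight matrix, s the regression standard error, p the number of parameters and n the number of observations; the nonlinear intervals discussed above replace this linearization with a constrained optimization over the parameter confidence region.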

  1. Benchmarking a Soil Moisture Data Assimilation System for Agricultural Drought Monitoring

    NASA Technical Reports Server (NTRS)

    Han, Eunjin; Crow, Wade T.; Holmes, Thomas; Bolten, John

    2014-01-01

    Despite considerable interest in the application of land surface data assimilation systems (LDAS) for agricultural drought applications, relatively little is known about the large-scale performance of such systems and, thus, the optimal methodological approach for implementing them. To address this need, this paper evaluates an LDAS for agricultural drought monitoring by benchmarking individual components of the system (i.e., a satellite soil moisture retrieval algorithm, a soil water balance model and a sequential data assimilation filter) against a series of linear models which perform the same function (i.e., have the same basic input/output structure) as the full system component. Benchmarking is based on the calculation of the lagged rank cross-correlation between the normalized difference vegetation index (NDVI) and soil moisture estimates acquired for various components of the system. Lagged soil moisture/NDVI correlations obtained using individual LDAS components versus their linear analogs reveal the degree to which non-linearities and/or complexities contained within each component actually contribute to the performance of the LDAS system as a whole. Here, a particular system based on surface soil moisture retrievals from the Land Parameter Retrieval Model (LPRM), a two-layer Palmer soil water balance model and an Ensemble Kalman filter (EnKF) is benchmarked. Results suggest significant room for improvement in each component of the system.
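
    The benchmarking metric described above, a lagged rank cross-correlation between soil moisture and later NDVI, can be sketched as follows; the function name, synthetic data and use of SciPy's Spearman correlation are illustrative assumptions rather than the authors' implementation.

      import numpy as np
      from scipy.stats import spearmanr

      def lagged_rank_correlation(soil_moisture, ndvi, lag):
          """Spearman rank correlation between soil moisture and NDVI observed
          `lag` time steps later (illustrative sketch)."""
          if lag > 0:
              sm, nd = soil_moisture[:-lag], ndvi[lag:]
          else:
              sm, nd = soil_moisture, ndvi
          rho, _ = spearmanr(sm, nd, nan_policy="omit")
          return rho

      # Synthetic example: NDVI loosely follows soil moisture with a short delay.
      rng = np.random.default_rng(0)
      sm = rng.standard_normal(500).cumsum()
      ndvi = np.concatenate([np.zeros(3), sm[:-3]]) + 0.5 * rng.standard_normal(500)
      print([round(lagged_rank_correlation(sm, ndvi, k), 2) for k in range(6)])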

  2. Steady states and linear stability analysis of precipitation pattern formation at geothermal hot springs.

    PubMed

    Chan, Pak Yuen; Goldenfeld, Nigel

    2007-10-01

    A dynamical theory of geophysical precipitation pattern formation is presented and applied to irreversible calcium carbonate (travertine) deposition. Specific systems studied here are the terraces and domes observed at geothermal hot springs, such as those at Yellowstone National Park, and speleothems, particularly stalactites and stalagmites. The theory couples the precipitation front dynamics with shallow water flow, including corrections for turbulent drag and curvature effects. In the absence of capillarity and with a laminar flow profile, the theory predicts a one-parameter family of steady state solutions to the moving boundary problem describing the precipitation front. These shapes match the measured shapes near the vent at the top of observed travertine domes well. Closer to the base of the dome, the solutions deviate from observations and circular symmetry is broken by a fluting pattern, which we show is associated with capillary forces causing thin film break-up. We relate our model to that recently proposed for stalactite growth, and calculate the linear stability spectrum of both travertine domes and stalactites. Lastly, we apply the theory to the problem of precipitation pattern formation arising from turbulent flow down an inclined plane and identify a linear instability that underlies scale-invariant travertine terrace formation at geothermal hot springs.

  3. Steady states and linear stability analysis of precipitation pattern formation at geothermal hot springs

    NASA Astrophysics Data System (ADS)

    Chan, Pak Yuen; Goldenfeld, Nigel

    2007-10-01

    A dynamical theory of geophysical precipitation pattern formation is presented and applied to irreversible calcium carbonate (travertine) deposition. Specific systems studied here are the terraces and domes observed at geothermal hot springs, such as those at Yellowstone National Park, and speleothems, particularly stalactites and stalagmites. The theory couples the precipitation front dynamics with shallow water flow, including corrections for turbulent drag and curvature effects. In the absence of capillarity and with a laminar flow profile, the theory predicts a one-parameter family of steady state solutions to the moving boundary problem describing the precipitation front. These shapes match the measured shapes near the vent at the top of observed travertine domes well. Closer to the base of the dome, the solutions deviate from observations and circular symmetry is broken by a fluting pattern, which we show is associated with capillary forces causing thin film break-up. We relate our model to that recently proposed for stalactite growth, and calculate the linear stability spectrum of both travertine domes and stalactites. Lastly, we apply the theory to the problem of precipitation pattern formation arising from turbulent flow down an inclined plane and identify a linear instability that underlies scale-invariant travertine terrace formation at geothermal hot springs.

  4. The contribution of phosphate–phosphate repulsions to the free energy of DNA bending

    PubMed Central

    Range, Kevin; Mayaan, Evelyn; Maher, L. J.; York, Darrin M.

    2005-01-01

    DNA bending is important for the packaging of genetic material, regulation of gene expression and interaction of nucleic acids with proteins. Consequently, it is of considerable interest to quantify the energetic factors that must be overcome to induce bending of DNA, such as base stacking and phosphate–phosphate repulsions. In the present work, the electrostatic contribution of phosphate–phosphate repulsions to the free energy of bending DNA is examined for 71 bp linear and bent-form model structures. The bent DNA model was based on the crystallographic structure of a full turn of DNA in a nucleosome core particle. A Green's function approach based on a linear-scaling smooth conductor-like screening model was applied to ascertain the contribution of individual phosphate–phosphate repulsions and overall electrostatic stabilization in aqueous solution. The effect of charge neutralization by site-bound ions was considered using Monte Carlo simulation to characterize the distribution of ion occupations and contribution of phosphate repulsions to the free energy of bending as a function of counterion load. The calculations predict that the phosphate–phosphate repulsions account for ∼30% of the total free energy required to bend DNA from canonical linear B-form into the conformation found in the nucleosome core particle. PMID:15741179

  5. Multiple time scale analysis of pressure oscillations in solid rocket motors

    NASA Astrophysics Data System (ADS)

    Ahmed, Waqas; Maqsood, Adnan; Riaz, Rizwan

    2018-03-01

    In this study, acoustic pressure oscillations for single and coupled longitudinal acoustic modes in a Solid Rocket Motor (SRM) are investigated using the Multiple Time Scales (MTS) method. Two independent time scales are introduced: the oscillations occur on the fast time scale, whereas the amplitude and phase change on the slow time scale. Hopf bifurcation is employed to investigate the properties of the solution. The supercritical bifurcation phenomenon is observed for the linearly unstable system. The amplitude of the oscillations results from equal energy gain and loss rates of the longitudinal acoustic modes. The effects of linear instability and of the frequency of the longitudinal modes on the amplitude and phase of the oscillations are determined for both single and coupled modes. In both cases, the maximum amplitude of oscillations decreases with the frequency of the acoustic mode and the linear instability of the SRM. The comparison of analytical MTS results and numerical simulations demonstrates excellent agreement.
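
    The multiple-time-scales construction referred to above can be written generically with the standard two-timing ansatz; the symbols below are generic and do not reproduce the paper's specific acoustic equations.

      T_0 = t, \qquad T_1 = \varepsilon t, \qquad
      p(t;\varepsilon) \approx p_0(T_0, T_1) + \varepsilon\, p_1(T_0, T_1), \qquad
      \frac{\mathrm{d}}{\mathrm{d}t} = \frac{\partial}{\partial T_0} + \varepsilon\,\frac{\partial}{\partial T_1},

    so that the fast oscillation is carried by T_0 while the amplitude and phase evolve on the slow time T_1, with secular terms removed order by order.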

  6. The Effects of the Use of Microsoft Math Tool (Graphical Calculator) Instruction on Students' Performance in Linear Functions

    ERIC Educational Resources Information Center

    Kissi, Philip Siaw; Opoku, Gyabaah; Boateng, Sampson Kwadwo

    2016-01-01

    The aim of the study was to investigate the effect of Microsoft Math Tool (graphical calculator) on students' achievement in the linear function. The study employed Quasi-experimental research design (Pre-test Post-test two group designs). A total of ninety-eight (98) students were selected for the study from two different Senior High Schools…

  7. On the four-dimensional holoraumy of the 4D, 𝒩 = 1 complex linear supermultiplet

    NASA Astrophysics Data System (ADS)

    Caldwell, Wesley; Diaz, Alejandro N.; Friend, Isaac; Gates, S. James; Harmalkar, Siddhartha; Lambert-Brown, Tamar; Lay, Daniel; Martirosova, Karina; Meszaros, Victor A.; Omokanwaye, Mayowa; Rudman, Shaina; Shin, Daeljuck; Vershov, Anthony

    2018-04-01

    We present arguments to support the existence of weight spaces for supersymmetric field theories and identify the calculations of information about supermultiplets to define such spaces via the concept of “holoraumy.” For the first time, this is extended to the complex linear superfield by a calculation of the commutator of supercovariant derivatives on all of its component fields.

  8. A system for aerodynamic design and analysis of supersonic aircraft. Part 4: Test cases

    NASA Technical Reports Server (NTRS)

    Middleton, W. D.; Lundry, J. L.

    1980-01-01

    An integrated system of computer programs was developed for the design and analysis of supersonic configurations. The system uses linearized theory methods for the calculation of surface pressures and supersonic area rule concepts in combination with linearized theory for calculation of aerodynamic force coefficients. Interactive graphics are optional at the user's request. Representative test cases and associated program output are presented.

  9. Neutrino masses and cosmological parameters from a Euclid-like survey: Markov Chain Monte Carlo forecasts including theoretical errors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Audren, Benjamin; Lesgourgues, Julien; Bird, Simeon

    2013-01-01

    We present forecasts for the accuracy of determining the parameters of a minimal cosmological model and the total neutrino mass based on combined mock data for a future Euclid-like galaxy survey and Planck. We consider two different galaxy surveys: a spectroscopic redshift survey and a cosmic shear survey. We make use of the Markov Chain Monte Carlo (MCMC) technique and assume two sets of theoretical errors. The first error is meant to account for uncertainties in the modelling of the effect of neutrinos on the non-linear galaxy power spectrum and we assume this error to be fully correlated in Fourier space. The second error is meant to parametrize the overall residual uncertainties in modelling the non-linear galaxy power spectrum at small scales, and is conservatively assumed to be uncorrelated and to increase with the ratio of a given scale to the scale of non-linearity. It hence increases with wavenumber and decreases with redshift. With these two assumptions for the errors and assuming further conservatively that the uncorrelated error rises above 2% at k = 0.4 h/Mpc and z = 0.5, we find that a future Euclid-like cosmic shear/galaxy survey achieves a 1-σ error on Mν close to 32 meV/25 meV, sufficient for detecting the total neutrino mass with good significance. If the residual uncorrelated error indeed rises rapidly towards smaller scales in the non-linear regime, as we have assumed here, then the data on non-linear scales do not increase the sensitivity to the total neutrino mass. Assuming instead a ten times smaller theoretical error with the same scale dependence, the error on the total neutrino mass decreases moderately from σ(Mν) = 18 meV to 14 meV when mildly non-linear scales with 0.1 h/Mpc < k < 0.6 h/Mpc are included in the analysis of the galaxy survey data.

  10. Growth of the eye lens: II. Allometric studies

    PubMed Central

    2014-01-01

    Purpose: The purpose of this study was to examine the ontogeny and phylogeny of lens growth in a variety of species using allometry. Methods: Data on the accumulation of wet and/or dry lens weight as a function of bodyweight were obtained for 40 species and subjected to allometric analysis to examine ontogenic growth and compaction. Allometric analysis was also used to compare the maximum adult lens weights for 147 species with the maximum adult bodyweight and to compare lens volumes calculated from wet and dry weights with eye volumes calculated from axial length. Results: Linear allometric relationships were obtained for the comparison of ontogenic lens and bodyweight accumulation. The body mass exponent (BME) decreased with increasing animal size from around 1.0 in small rodents to 0.4 in large ungulates for both wet and dry weights. Compaction constants for the ontogenic growth ranged from 1.00 in birds and reptiles up to 1.30 in mammals. Allometric comparison of maximum lens wet and dry weights with maximum bodyweights also yielded linear plots with a BME of 0.504 for all warm-blooded species except primates, which had a BME of 0.25. When lens volumes were compared with eye volumes, all species yielded a scaling constant of 0.75, but the proportionality constants for primates and birds were lower. Conclusions: Ontogenic lens growth is fastest, relative to body growth, in small animals and slowest in large animals. Fiber cell compaction takes place throughout life in most species, but not in birds and reptiles. Maximum adult lens size scales with eye size with the same exponent in all species, but birds and primates have smaller lenses relative to eye size than other species. Optical properties of the lens are generated through the combination of variations in the rate of growth, rate of compaction, shape and size. PMID:24715759
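
    The allometric analysis described above rests on the standard power-law relation between lens weight and body weight; the notation below is generic and chosen here only to make the body mass exponent explicit.

      W_{\mathrm{lens}} = a\, W_{\mathrm{body}}^{\,b}
      \quad\Longleftrightarrow\quad
      \log W_{\mathrm{lens}} = \log a + b \log W_{\mathrm{body}},

    so the body mass exponent b (BME) is simply the slope of a straight-line fit on log-log axes, and the reported decrease of b from about 1.0 in small rodents to 0.4 in large ungulates corresponds to progressively slower lens growth relative to body growth.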

  11. Linear scaling computation of the Fock matrix. VI. Data parallel computation of the exchange-correlation matrix

    NASA Astrophysics Data System (ADS)

    Gan, Chee Kwan; Challacombe, Matt

    2003-05-01

    Recently, early onset linear scaling computation of the exchange-correlation matrix has been achieved using hierarchical cubature [J. Chem. Phys. 113, 10037 (2000)]. Hierarchical cubature differs from other methods in that the integration grid is adaptive and purely Cartesian, which allows for a straightforward domain decomposition in parallel computations; the volume enclosing the entire grid may be simply divided into a number of nonoverlapping boxes. In our data parallel approach, each box requires only a fraction of the total density to perform the necessary numerical integrations due to the finite extent of Gaussian-orbital basis sets. This inherent data locality may be exploited to reduce communications between processors as well as to avoid memory and copy overheads associated with data replication. Although the hierarchical cubature grid is Cartesian, naive boxing leads to irregular work loads due to strong spatial variations of the grid and the electron density. In this paper we describe equal time partitioning, which employs time measurement of the smallest sub-volumes (corresponding to the primitive cubature rule) to load balance grid-work for the next self-consistent-field iteration. After start-up from a heuristic center of mass partitioning, equal time partitioning exploits smooth variation of the density and grid between iterations to achieve load balance. With the 3-21G basis set and a medium quality grid, equal time partitioning applied to taxol (62 heavy atoms) attained a speedup of 61 out of 64 processors, while for a 110 molecule water cluster at standard density it achieved a speedup of 113 out of 128. The efficiency of equal time partitioning applied to hierarchical cubature improves as the grid work per processor increases. With a fine grid and the 6-311G(df,p) basis set, calculations on the 26 atom molecule α-pinene achieved a parallel efficiency better than 99% with 64 processors. For more coarse grained calculations, superlinear speedups are found to result from reduced computational complexity associated with data parallelism.
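
    The equal time partitioning idea described above, re-balancing grid boxes between SCF iterations using the times measured in the previous iteration, can be illustrated with a simple greedy sketch; the function name, data structures and greedy longest-processing-time heuristic are assumptions made here for illustration and are not the actual parallel implementation.

      import heapq

      def equal_time_partition(box_times, n_procs):
          """Assign grid boxes to processors so that measured per-box integration
          times from the previous SCF iteration are balanced (greedy sketch)."""
          heap = [(0.0, p) for p in range(n_procs)]   # (accumulated time, processor)
          heapq.heapify(heap)
          assignment = {p: [] for p in range(n_procs)}
          # Place the most expensive boxes first on the currently least-loaded processor.
          for box, t in sorted(enumerate(box_times), key=lambda kv: -kv[1]):
              load, proc = heapq.heappop(heap)
              assignment[proc].append(box)
              heapq.heappush(heap, (load + t, proc))
          return assignment

      # Illustrative: strongly non-uniform grid work per box.
      times = [5.0, 0.2, 3.1, 0.4, 2.2, 0.1, 4.8, 1.5]
      print(equal_time_partition(times, 3))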

  12. Study of conformational stability, structural, electronic and charge transfer properties of cladrin using vibrational spectroscopy and DFT calculations.

    PubMed

    Singh, Swapnil; Singh, Harshita; Srivastava, Anubha; Tandon, Poonam; Sinha, Kirti; Bharti, Purnima; Kumar, Sudhir; Kumar, Padam; Maurya, Rakesh

    2014-11-11

    In the present work, a detailed conformational study of cladrin (3-(3,4-dimethoxyphenyl)-7-hydroxychromen-4-one) has been carried out using spectroscopic techniques (FT-IR/FT-Raman/UV-Vis/NMR) and quantum chemical calculations. The optimized geometry, wavenumbers and intensities of the vibrational bands of cladrin in the ground state were calculated by density functional theory (DFT) employing the 6-311++G(d,p) basis set. The study focuses on the two most stable conformers, selected after full geometry optimization of the molecule. A detailed assignment of the FT-IR and FT-Raman spectra has been made for both conformers, along with the potential energy distribution for each vibrational mode. The observed and scaled wavenumbers of most of the bands are in good agreement. The UV-Vis spectrum has been recorded and compared with the calculated spectrum. In addition, 1H and 13C nuclear magnetic resonance spectra have also been recorded and compared with the calculated data, which indicate inter- or intramolecular hydrogen bonding. Electronic properties such as the HOMO-LUMO energies were calculated using time-dependent density functional theory. The molecular electrostatic potential has been plotted to elucidate the reactive parts of the molecule. Natural bond orbital analysis was performed to investigate the molecular stability. The nonlinear optical properties of the molecule have been studied by calculating the electric dipole moment (μ) and the first hyperpolarizability (β), which give rise to the nonlinearity of the molecule.
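
    The dipole moment and first hyperpolarizability mentioned above are conventionally reported as the magnitudes built from their Cartesian components; the expressions below are the standard definitions, not values or working equations taken from this study.

      \mu = \sqrt{\mu_x^2 + \mu_y^2 + \mu_z^2}, \qquad
      \beta_{\mathrm{tot}} = \sqrt{\beta_x^2 + \beta_y^2 + \beta_z^2}, \qquad
      \beta_x = \beta_{xxx} + \beta_{xyy} + \beta_{xzz} \;\; \text{(and cyclically for } \beta_y, \beta_z\text{)}.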

  13. Natural bond orbital analysis in the ONETEP code: applications to large protein systems.

    PubMed

    Lee, Louis P; Cole, Daniel J; Payne, Mike C; Skylaris, Chris-Kriton

    2013-03-05

    First principles electronic structure calculations are typically performed in terms of molecular orbitals (or bands), providing a straightforward theoretical avenue for approximations of increasing sophistication, but do not usually provide any qualitative chemical information about the system. We can derive such information via post-processing using natural bond orbital (NBO) analysis, which produces a chemical picture of bonding in terms of localized Lewis-type bond and lone pair orbitals that we can use to understand molecular structure and interactions. We present NBO analysis of large-scale calculations with the ONETEP linear-scaling density functional theory package, which we have interfaced with the NBO 5 analysis program. In ONETEP calculations involving thousands of atoms, one is typically interested in particular regions of a nanosystem whilst accounting for long-range electronic effects from the entire system. We show that by transforming the Non-orthogonal Generalized Wannier Functions of ONETEP to natural atomic orbitals, NBO analysis can be performed within a localized region in such a way that ensures the results are identical to an analysis on the full system. We demonstrate the capabilities of this approach by performing illustrative studies of large proteins--namely, investigating changes in charge transfer between the heme group of myoglobin and its ligands with increasing system size and between a protein and its explicit solvent, estimating the contribution of electronic delocalization to the stabilization of hydrogen bonds in the binding pocket of a drug-receptor complex, and observing, in situ, the n → π* hyperconjugative interactions between carbonyl groups that stabilize protein backbones.

  14. TOPICAL REVIEW: Nonlinear aspects of the renormalization group flows of Dyson's hierarchical model

    NASA Astrophysics Data System (ADS)

    Meurice, Y.

    2007-06-01

    We review recent results concerning the renormalization group (RG) transformation of Dyson's hierarchical model (HM). This model can be seen as an approximation of a scalar field theory on a lattice. We introduce the HM and show that its large symmetry group drastically simplifies the block-spinning procedure. Several equivalent forms of the recursion formula are presented with unified notations. Rigorous and numerical results concerning the recursion formula are summarized. It is pointed out that the recursion formula of the HM is inequivalent to both Wilson's approximate recursion formula and Polchinski's equation in the local potential approximation (despite the very small difference with the exponents of the latter). We draw a comparison between the RG of the HM and functional RG equations in the local potential approximation. The construction of the linear and nonlinear scaling variables is discussed in an operational way. We describe the calculation of non-universal critical amplitudes in terms of the scaling variables of two fixed points. This question appears as a problem of interpolation between these fixed points. Universal amplitude ratios are calculated. We discuss the large-N limit and the complex singularities of the critical potential calculable in this limit. The interpolation between the HM and more conventional lattice models is presented as a symmetry breaking problem. We briefly introduce models with an approximate supersymmetry. One important goal of this review is to present a configuration space counterpart, suitable for lattice formulations, of functional RG equations formulated in momentum space (often called exact RG equations and abbreviated ERGE).

  15. A Spatial Correlation Model of Permeability on the Columbia River Plateau

    NASA Astrophysics Data System (ADS)

    Jayne, R., Jr.; Pollyea, R. M.

    2017-12-01

    This study presents a spatial correlation model of regional-scale permeability variability within the Columbia River Basalt Group (CRBG). The data were compiled from the literature and include 893 aquifer test results from 598 individual wells. In order to quantify the spatial variation of permeability within the CRBG, three experimental variograms (two horizontal and one vertical) are calculated and then fit with a linear combination of mathematical models. The horizontal variograms show there is a 4.5:1 anisotropy ratio for the permeability correlation structure with a long-range correlation of 35 km at N40°E. The km-scale range of these variograms suggests that there is regional control on permeability within the CRBG. One plausible control on the permeability distribution is that rapid crustal loading during CRBG emplacement (~80% over 1M years) resulted in an isostatic response where the Columbia Plateau had previously undergone subsidence. To support this hypothesis, we calculate a 200 m moving average of all permeability values with depth. This calculation shows that permeability generally follows a systematic decay until 1,100 m depth, beyond which the 200 m moving-average permeability increases by 3 orders of magnitude. Since basalt fracture networks govern permeability on the Columbia River Plateau, this observation is consistent with basal flexure causing tensile stresses that counteract lithostatic loading, thus maintaining higher-than-expected permeability at depth within the Columbia River Basalt Group. These results may have important implications for regional CRBG groundwater management, as well as for engineered reservoirs for carbon capture and sequestration and nuclear waste storage.
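
    The experimental semivariograms used above are built from squared differences of permeability (typically its logarithm) between well pairs binned by separation distance; the sketch below is a generic isotropic version with invented variable names, not the study's directional implementation.

      import numpy as np

      def experimental_variogram(coords, values, bin_edges):
          """Isotropic experimental semivariogram:
          gamma(h) = 1/(2 N(h)) * sum over pairs within bin of (z_i - z_j)^2."""
          coords = np.asarray(coords, dtype=float)
          values = np.asarray(values, dtype=float)
          # Pairwise separation distances and squared value differences.
          diff = coords[:, None, :] - coords[None, :, :]
          dist = np.sqrt((diff ** 2).sum(axis=-1))
          sqdiff = (values[:, None] - values[None, :]) ** 2
          iu = np.triu_indices(len(values), k=1)          # unique pairs only
          dist, sqdiff = dist[iu], sqdiff[iu]
          gamma = []
          for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
              mask = (dist >= lo) & (dist < hi)
              gamma.append(0.5 * sqdiff[mask].mean() if mask.any() else np.nan)
          return np.array(gamma)

      # Illustrative use with random well locations and log-permeability values.
      rng = np.random.default_rng(1)
      xy = rng.uniform(0, 50_000, size=(200, 2))          # metres
      logk = rng.normal(-13, 1.5, size=200)               # log10 permeability
      print(experimental_variogram(xy, logk, np.linspace(0, 35_000, 8)))

    Fitting a model (for example spherical or exponential) to such binned values along different directions yields the correlation range and anisotropy ratio quoted above.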

  16. To bend or not to bend: electronic structural analysis of linear versus bent M-H-M interactions in dinickel bis(dialkylphosphino)methane complexes.

    PubMed

    Wilson, Zakiya S; Stanley, George G; Vicic, David A

    2010-06-21

    The M-H-M bonding in the dinuclear complexes Ni2(μ-H)(μ-P2)2X2 (P2 = R2PCH2PR2, R = iPr, Cy; X = Cl, Br) has been investigated. These dinickel A-frames were studied via density functional theory (DFT) calculations to analyze the factors that influence linear and bent M-H-M bonding. The DFT calculations indicate that the bent geometry is favored electronically, with ligand steric effects driving the formation of the linear M-H-M structures.

  17. Feasibility of combining linear theory and impact theory methods for the analysis and design of high speed configurations

    NASA Technical Reports Server (NTRS)

    Brooke, D.; Vondrasek, D. V.

    1978-01-01

    The aerodynamic influence coefficients calculated using an existing linear theory program were used to modify the pressures calculated using impact theory. Application of the combined approach to several wing-alone configurations shows that the combined approach gives improved predictions of the local pressure and loadings over either linear theory alone or impact theory alone. The approach not only removes most of the short-comings of the individual methods, as applied in the Mach 4 to 8 range, but also provides the basis for an inverse design procedure applicable to high speed configurations.

  18. The use of remote sensing and linear wave theory to model local wave energy around Alphonse Atoll, Seychelles

    NASA Astrophysics Data System (ADS)

    Hamylton, S.

    2011-12-01

    This paper demonstrates a practical step-wise method for modelling wave energy at the landscape scale using GIS and remote sensing techniques at Alphonse Atoll, Seychelles. Inputs are a map of the benthic surface (seabed) cover, a detailed bathymetric model derived from remotely sensed Compact Airborne Spectrographic Imager (CASI) data and information on regional wave heights. Incident energy at the reef crest around the atoll perimeter is calculated as a function of its deepwater value with wave parameters (significant wave height and period) hindcast in the offshore zone using the WaveWatch III application developed by the National Oceanographic and Atmospheric Administration. Energy modifications are calculated at constant intervals as waves transform over the forereef platform along a series of reef profile transects running into the atoll centre. Factors for shoaling, refraction and frictional attenuation are calculated at each interval for given changes in bathymetry and benthic coverage type and a nominal reduction in absolute energy is incorporated at the reef crest to account for wave breaking. Overall energy estimates are derived for a period of 5 years and related to spatial patterning of reef flat surface cover (sand and seagrass patches).
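
    The wave-energy bookkeeping described above follows standard linear wave theory; the relations below are the textbook forms for wave energy and the shoaling/refraction modification of wave height, given as a generic illustration rather than the exact coefficients used in the study.

      E = \tfrac{1}{8}\,\rho g H^2, \qquad H = H_0\, K_s K_r, \qquad
      K_s = \sqrt{\frac{C_{g,0}}{C_g}}, \qquad K_r = \sqrt{\frac{\cos\theta_0}{\cos\theta}},

    with frictional attenuation and wave breaking applied as additional reduction factors at each interval along the reef profile transects.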

  19. Reduced-Order Modeling of 3D Rayleigh-Benard Turbulent Convection

    NASA Astrophysics Data System (ADS)

    Hassanzadeh, Pedram; Grover, Piyush; Nabi, Saleh

    2017-11-01

    Accurate Reduced-Order Models (ROMs) of turbulent geophysical flows have broad applications in science and engineering; for example, to study the climate system or to perform real-time flow control/optimization in energy systems. Here we focus on 3D Rayleigh-Benard turbulent convection at a Rayleigh number of 10^6 as a prototype for turbulent geophysical flows, which are dominantly buoyancy driven. The purpose of the study is to evaluate and improve the performance of different model reduction techniques in this setting. One-dimensional ROMs for the horizontally averaged temperature are calculated using several methods. Specifically, the Linear Response Function (LRF) of the system is calculated from a large DNS dataset using Dynamic Mode Decomposition (DMD) and the Fluctuation-Dissipation Theorem (FDT). The LRF is also calculated using the Green's function method of Hassanzadeh and Kuang (2016, J. Atmos. Sci.), which is based on numerous forced DNS runs. The performance of these LRFs in estimating the system's response to weak external forcings or controlling the time-mean flow is compared and contrasted. The spectral properties of the LRFs and the scaling of the accuracy with the length of the dataset (for the data-driven methods) are also discussed.
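
    The fluctuation-dissipation estimate of a linear response function mentioned above can be sketched from a long unforced time series by integrating lagged covariances; the construction below is a generic FDT sketch with invented variable names, not the authors' DMD- or Green's-function-based calculations.

      import numpy as np

      def fdt_response_operator(x, dt, max_lag):
          """Approximate the FDT response operator
          R = integral_0^T C(tau) C(0)^{-1} dtau
          from an (n_time, n_state) time series (illustrative sketch)."""
          x = x - x.mean(axis=0)                    # anomalies about the time mean
          n, d = x.shape
          c0_inv = np.linalg.pinv(x.T @ x / n)      # C(0)^{-1}
          R = np.zeros((d, d))
          for lag in range(max_lag + 1):
              c_lag = x[lag:].T @ x[:n - lag] / (n - lag)   # lagged covariance C(tau)
              weight = 0.5 if lag in (0, max_lag) else 1.0  # trapezoidal rule
              R += weight * dt * (c_lag @ c0_inv)
          return R

      # With R in hand, the time-mean response to a weak steady forcing f is
      # approximately R @ f, which is how such LRFs are used for estimation/control.
      rng = np.random.default_rng(2)
      series = rng.standard_normal((5000, 4))
      print(fdt_response_operator(series, dt=1.0, max_lag=50).shape)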

  20. Newly synthesized dihydroquinazoline derivative from the aspect of combined spectroscopic and computational study

    NASA Astrophysics Data System (ADS)

    El-Azab, Adel S.; Mary, Y. Sheena; Mary, Y. Shyma; Panicker, C. Yohannan; Abdel-Aziz, Alaa A.-M.; El-Sherbeny, Magda A.; Armaković, Stevan; Armaković, Sanja J.; Van Alsenoy, Christian

    2017-04-01

    In this work, spectroscopic characterization of 2-(2-(4-oxo-3-phenethyl-3,4-dihydroquinazolin-2-ylthio)ethyl)isoindoline-1,3-dione has been carried out both experimentally and theoretically. Complete assignments of the fundamental vibrations were performed on the basis of the potential energy distribution of the vibrational modes, and good agreement between the experimental and scaled wavenumbers has been achieved. Frontier molecular orbitals have been used as indicators of stability and reactivity. Intramolecular interactions have been investigated by NBO analysis. The dipole moment, linear polarizability, and first- and second-order hyperpolarizability values were also computed. In order to determine the molecular sites prone to electrophilic attack, DFT calculations of the average local ionization energy (ALIE) and Fukui functions have been performed as well. Intra-molecular non-covalent interactions have been determined and analyzed through the analysis of charge density. The stability of the title molecule has also been investigated with respect to autoxidation, via calculation of bond dissociation energies (BDE), and hydrolysis, via radial distribution functions obtained from molecular dynamics (MD) simulations. In order to assess the biological potential of the title compound, a molecular docking study towards the breast cancer type 2 complex has been performed.
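
    The Fukui functions used above to locate reactive sites are commonly evaluated with the finite-difference expressions below; these are the standard definitions, not working equations taken from this study.

      f^{+}(\mathbf{r}) \approx \rho_{N+1}(\mathbf{r}) - \rho_{N}(\mathbf{r}), \qquad
      f^{-}(\mathbf{r}) \approx \rho_{N}(\mathbf{r}) - \rho_{N-1}(\mathbf{r}),

    where ρ_N is the electron density of the N-electron system; regions of large f^{-} indicate susceptibility to electrophilic attack (complementing the ALIE analysis), while large f^{+} indicates susceptibility to nucleophilic attack.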
