Sample records for linear basis functions

  1. Determination of many-electron basis functions for a quantum Hall ground state using Schur polynomials

    NASA Astrophysics Data System (ADS)

    Mandal, Sudhansu S.; Mukherjee, Sutirtha; Ray, Koushik

    2018-03-01

    A method for determining the ground state of a planar interacting many-electron system in a magnetic field perpendicular to the plane is described. The ground state wave-function is expressed as a linear combination of a set of basis functions. Given only the flux and the number of electrons describing an incompressible state, we use the combinatorics of partitioning the flux among the electrons to derive the basis wave-functions as linear combinations of Schur polynomials. The procedure ensures that the basis wave-functions form representations of the angular momentum algebra. We exemplify the method by deriving the basis functions for the 5/2 quantum Hall state with a few particles. We find that one of the basis functions is precisely the Moore-Read Pfaffian wave function.
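
The bialternant definition underlying such Schur-polynomial bases can be evaluated directly. Below is a minimal illustrative sketch (not the authors' combinatorial construction): a Schur polynomial evaluated at distinct points as a ratio of alternant determinants, with exact rational arithmetic to avoid round-off.

```python
from fractions import Fraction

def det(m):
    # cofactor expansion; adequate for the tiny matrices used here
    n = len(m)
    if n == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(n))

def schur(lam, xs):
    """Schur polynomial s_lambda(x_1..x_n) via the bialternant formula
    det(x_i^(lambda_j + n - j)) / det(x_i^(n - j)), valid at distinct points."""
    n = len(xs)
    lam = list(lam) + [0] * (n - len(lam))  # pad the partition to n parts
    num = [[Fraction(x) ** (lam[j] + n - 1 - j) for j in range(n)] for x in xs]
    den = [[Fraction(x) ** (n - 1 - j) for j in range(n)] for x in xs]
    return det(num) / det(den)
```

For two variables, s_(1) = x1 + x2 and s_(1,1) = x1*x2, which the sketch reproduces.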

  2. Employing general fit-bases for construction of potential energy surfaces with an adaptive density-guided approach

    NASA Astrophysics Data System (ADS)

    Klinting, Emil Lund; Thomsen, Bo; Godtliebsen, Ian Heide; Christiansen, Ove

    2018-02-01

    We present an approach to treat sets of general fit-basis functions in a single uniform framework, where the functional form is supplied on input, i.e., the use of different functions does not require new code to be written. The fit-basis functions can be used to carry out linear fits to the grid of single points, which are generated with an adaptive density-guided approach (ADGA). A non-linear conjugate gradient method is used to optimize non-linear parameters if such are present in the fit-basis functions. This means that a set of fit-basis functions with the same inherent shape as the potential cuts can be requested and no other choices with regard to the fit-basis functions need to be made. The general fit-basis framework is explored in relation to anharmonic potentials for model systems, diatomic molecules, water, and imidazole. The behaviour and performance of Morse and double-well fit-basis functions are compared to those of polynomial fit-basis functions for unsymmetrical single-minimum and symmetrical double-well potentials. Furthermore, calculations for water and imidazole were carried out using both normal coordinates and hybrid optimized and localized coordinates (HOLCs). Our results suggest that choosing a suitable set of fit-basis functions can improve the stability of the fitting routine and the overall efficiency of potential construction by lowering the number of single point calculations required for the ADGA. It is possible to reduce the number of terms in the potential by choosing the Morse and double-well fit-basis functions. These effects are substantial for normal coordinates but become even more pronounced if HOLCs are used.
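
Once any non-linear parameters of the fit-basis functions are fixed, the fit itself is ordinary linear least squares over user-supplied basis callables. A small sketch under that assumption (the ADGA grid generation and the conjugate-gradient refinement of non-linear parameters are omitted; `alpha` is a placeholder Morse exponent, not a value from the paper):

```python
import math

def linear_fit(basis, xs, ys):
    """Least-squares coefficients c minimizing sum_i (sum_k c_k b_k(x_i) - y_i)^2,
    via the normal equations A^T A c = A^T y (fine for a handful of basis functions)."""
    A = [[b(x) for b in basis] for x in xs]
    n = len(basis)
    M = [[sum(row[r] * row[c] for row in A) for c in range(n)] for r in range(n)]
    v = [sum(row[r] * y for row, y in zip(A, ys)) for r in range(n)]
    # Gaussian elimination with partial pivoting
    for col in range(n):
        p = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[p] = M[p], M[col]
        v[col], v[p] = v[p], v[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            M[r] = [mr - f * mc for mr, mc in zip(M[r], M[col])]
            v[r] -= f * v[col]
    c = [0.0] * n
    for r in range(n - 1, -1, -1):
        c[r] = (v[r] - sum(M[r][k] * c[k] for k in range(r + 1, n))) / M[r][r]
    return c

# A Morse-shaped fit-basis with a fixed (pre-optimized) non-linear parameter
alpha = 1.0
basis = [lambda x: 1.0, lambda x: (1.0 - math.exp(-alpha * x)) ** 2]
```

Because the form of each basis function arrives as a callable, swapping Morse for double-well or polynomial shapes requires no new fitting code, which is the point of the framework.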

  3. A new basis set for molecular bending degrees of freedom.

    PubMed

    Jutier, Laurent

    2010-07-21

    We present a new basis set as an alternative to Legendre polynomials for the variational treatment of bending vibrational degrees of freedom, with the aim of greatly reducing the number of basis functions. This basis set is inspired by the harmonic oscillator eigenfunctions but is defined for a bending angle in the range θ ∈ [0, π]. The aim is to bring the basis functions closer to the nature of the final (ro)vibronic wave functions. Our methodology is extended to complicated potential energy surfaces, such as quasilinear or multi-equilibrium geometries, by using several free parameters in the basis functions. These parameters allow several density maxima, linear or not, around which the basis functions will be mainly located. Divergences at linearity in integral computations are resolved as for generalized Legendre polynomials. All integral computations required for the evaluation of molecular Hamiltonian matrix elements are given for both the discrete variable representation and the finite basis representation. Convergence tests for the low energy vibronic states of HCCH++, HCCH+, and HCCS are presented.

  4. Accurate evaluation of exchange fields in finite element micromagnetic solvers

    NASA Astrophysics Data System (ADS)

    Chang, R.; Escobar, M. A.; Li, S.; Lubarda, M. V.; Lomakin, V.

    2012-04-01

    Quadratic basis functions (QBFs) are implemented for solving the Landau-Lifshitz-Gilbert equation via the finite element method. This involves the introduction of a set of special testing functions compatible with the QBFs for evaluating the Laplacian operator. The QBF approach leads to significantly more accurate results than conventionally used approaches based on linear basis functions. Importantly, QBFs allow reducing the error of the computed exchange field with increasing mesh density, for both structured and unstructured meshes. Numerical examples demonstrate the feasibility of the method.
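
As a one-dimensional illustration (assumed here for exposition; the paper works with finite elements in higher dimensions), quadratic Lagrange shape functions reproduce any quadratic field exactly on the reference element, which is the property that lets QBFs represent second derivatives such as the Laplacian more faithfully than linear elements:

```python
def quad_shape(xi):
    """Quadratic Lagrange shape functions on the reference element [-1, 1]
    with nodes at -1, 0, +1 (a 1D analogue of the QBFs discussed above)."""
    return (0.5 * xi * (xi - 1.0), 1.0 - xi * xi, 0.5 * xi * (xi + 1.0))

def interpolate(nodal_values, xi):
    """Field value at xi from its three nodal values."""
    return sum(N * v for N, v in zip(quad_shape(xi), nodal_values))
```

The shape functions form a partition of unity, and interpolating the nodal values of x^2 returns x^2 exactly at any point, whereas linear elements would commit an O(h^2) error.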

  5. Parameter and Structure Inference for Nonlinear Dynamical Systems

    NASA Technical Reports Server (NTRS)

    Morris, Robin D.; Smelyanskiy, Vadim N.; Millonas, Mark

    2006-01-01

    A great many systems can be modeled in the non-linear dynamical systems framework as dx/dt = f(x) + xi(t), where f(x) is the potential function for the system and xi(t) is the excitation noise. Modeling the potential using a set of basis functions, we derive the posterior for the basis coefficients. A more challenging problem is to determine the set of basis functions that are required to model a particular system. We show that, using the Bayesian Information Criterion (BIC) to rank models together with a beam search technique, we can accurately determine the structure of simple non-linear dynamical system models, and the structure of the coupling between non-linear dynamical systems where the individual systems are known. This last case has important ecological applications.
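
The model-ranking step can be sketched in a few lines. The example below assumes scalar data and monomial basis functions, and omits the beam search over candidate structures; it only shows how BIC trades fit quality against the number of basis coefficients.

```python
import math

def fit_rss(powers, xs, ys):
    """Residual sum of squares of a least-squares fit in the monomial
    basis {x**p for p in powers} (normal equations; fine for tiny models)."""
    n = len(powers)
    A = [[x ** p for p in powers] for x in xs]
    M = [[sum(row[r] * row[c] for row in A) for c in range(n)] for r in range(n)]
    v = [sum(row[r] * y for row, y in zip(A, ys)) for r in range(n)]
    for col in range(n):  # Gaussian elimination; Gram matrix is SPD, no pivoting needed
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            M[r] = [mr - f * mc for mr, mc in zip(M[r], M[col])]
            v[r] -= f * v[col]
    c = [0.0] * n
    for r in range(n - 1, -1, -1):
        c[r] = (v[r] - sum(M[r][k] * c[k] for k in range(r + 1, n))) / M[r][r]
    return sum((sum(ck * x ** p for ck, p in zip(c, powers)) - y) ** 2
               for x, y in zip(xs, ys))

def bic(powers, xs, ys):
    """BIC for the Gaussian-noise model: n*ln(RSS/n) + k*ln(n); lower is better."""
    n = len(xs)
    return n * math.log(fit_rss(powers, xs, ys) / n + 1e-300) + len(powers) * math.log(n)
```

Ranking candidate basis sets by this score penalizes every extra coefficient by ln(n), so a cubic term is kept only when it buys a genuine drop in residual.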

  6. Daubechies wavelets for linear scaling density functional theory.

    PubMed

    Mohr, Stephan; Ratcliff, Laura E; Boulanger, Paul; Genovese, Luigi; Caliste, Damien; Deutsch, Thierry; Goedecker, Stefan

    2014-05-28

    We demonstrate that Daubechies wavelets can be used to construct a minimal set of optimized localized adaptively contracted basis functions in which the Kohn-Sham orbitals can be represented with an arbitrarily high, controllable precision. Ground state energies and the forces acting on the ions can be calculated in this basis with the same accuracy as if they were calculated directly in a Daubechies wavelets basis, provided that the amplitude of these adaptively contracted basis functions is sufficiently small on the surface of the localization region, which is guaranteed by the optimization procedure described in this work. This approach reduces the computational costs of density functional theory calculations, and can be combined with sparse matrix algebra to obtain linear scaling with respect to the number of electrons in the system. Calculations on systems of 10,000 atoms or more thus become feasible in a systematic basis set with moderate computational resources. Further computational savings can be achieved by exploiting the similarity of the adaptively contracted basis functions for closely related environments, e.g., in geometry optimizations or combined calculations of neutral and charged systems.

  7. Simple Test Functions in Meshless Local Petrov-Galerkin Methods

    NASA Technical Reports Server (NTRS)

    Raju, Ivatury S.

    2016-01-01

    Two meshless local Petrov-Galerkin (MLPG) methods based on two different trial functions but that use a simple linear test function were developed for beam and column problems. These methods used generalized moving least squares (GMLS) and radial basis (RB) interpolation functions as trial functions. These two methods were tested on various patch test problems. Both methods passed the patch tests successfully. Then the methods were applied to various beam vibration problems and problems involving Euler and Beck's columns. Both methods yielded accurate solutions for all problems studied. The simple linear test function offers considerable savings in computing efforts as the domain integrals involved in the weak form are avoided. The two methods based on this simple linear test function method produced accurate results for frequencies and buckling loads. Of the two methods studied, the method with radial basis trial functions is very attractive as the method is simple, accurate, and robust.

  8. Some comparisons of complexity in dictionary-based and linear computational models.

    PubMed

    Gnecco, Giorgio; Kůrková, Věra; Sanguineti, Marcello

    2011-03-01

    Neural networks provide a more flexible approximation of functions than traditional linear regression. In the latter, one can only adjust the coefficients in linear combinations of fixed sets of functions, such as orthogonal polynomials or Hermite functions, while for neural networks, one may also adjust the parameters of the functions which are being combined. However, some useful properties of linear approximators (such as uniqueness, homogeneity, and continuity of best approximation operators) are not satisfied by neural networks. Moreover, optimization of parameters in neural networks becomes more difficult than in linear regression. Experimental results suggest that these drawbacks of neural networks are offset by substantially lower model complexity, allowing accuracy of approximation even in high-dimensional cases. We give some theoretical results comparing requirements on model complexity for two types of approximators, the traditional linear ones and so called variable-basis types, which include neural networks, radial, and kernel models. We compare upper bounds on worst-case errors in variable-basis approximation with lower bounds on such errors for any linear approximator. Using methods from nonlinear approximation and integral representations tailored to computational units, we describe some cases where neural networks outperform any linear approximator. Copyright © 2010 Elsevier Ltd. All rights reserved.

  9. Classification of Phylogenetic Profiles for Protein Function Prediction: An SVM Approach

    NASA Astrophysics Data System (ADS)

    Kotaru, Appala Raju; Joshi, Ramesh C.

    Predicting the function of an uncharacterized protein is a major challenge in the post-genomic era due to the problem's complexity and scale. Knowledge of protein function is a crucial link in the development of new drugs, better crops, and even biochemicals such as biofuels. Recently, numerous high-throughput experimental procedures have been invented to investigate the mechanisms leading to the accomplishment of a protein's function, and the phylogenetic profile is one of them. A phylogenetic profile is a representation of a protein that encodes its evolutionary history. In this paper we propose a method for classifying phylogenetic profiles using a supervised machine learning method, support vector machine (SVM) classification with a radial basis function kernel, to identify functionally linked proteins. We experimentally evaluated the performance of the classifier with linear and polynomial kernels and compared the results with an existing tree kernel. In our study we used proteins of the budding yeast Saccharomyces cerevisiae genome. We generated the phylogenetic profiles of 2465 yeast genes and used the functional annotations available in the MIPS database. Our experiments show that the radial basis kernel performs similarly to the polynomial kernel in some functional classes, both are better than the linear and tree kernels, and overall the radial basis kernel outperforms the polynomial, linear, and tree kernels. From these results we conclude that it is feasible to use an SVM classifier with a radial basis function kernel to predict gene function from phylogenetic profiles.
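
A minimal sketch of the kernel at the heart of this classifier, applied to binary profile vectors (the `gamma` value is illustrative, and the SVM training itself, e.g. by sequential minimal optimization, is not shown):

```python
import math

def rbf_kernel(u, v, gamma=0.5):
    """Gaussian radial basis kernel K(u, v) = exp(-gamma * ||u - v||^2).
    For 0/1 phylogenetic profiles, ||u - v||^2 is just the Hamming distance."""
    d2 = sum((a - b) ** 2 for a, b in zip(u, v))
    return math.exp(-gamma * d2)
```

Profiles that agree across many genomes get kernel values near 1, so functionally linked proteins (with similar evolutionary histories) land close together in the induced feature space.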

  10. Computation of indirect nuclear spin-spin couplings with reduced complexity in pure and hybrid density functional approximations.

    PubMed

    Luenser, Arne; Kussmann, Jörg; Ochsenfeld, Christian

    2016-09-28

    We present a (sub)linear-scaling algorithm to determine indirect nuclear spin-spin coupling constants at the Hartree-Fock and Kohn-Sham density functional levels of theory. Employing efficient integral algorithms and sparse algebra routines, an overall (sub)linear scaling behavior can be obtained for systems with a non-vanishing HOMO-LUMO gap. Calculations on systems with over 1000 atoms and 20 000 basis functions illustrate the performance and accuracy of our reference implementation. Specifically, we demonstrate that linear algebra dominates the runtime of conventional algorithms for 10 000 basis functions and above. Attainable speedups of our method exceed 6 × in total runtime and 10 × in the linear algebra steps for the tested systems. Furthermore, a convergence study of spin-spin couplings of an aminopyrazole peptide upon inclusion of the water environment is presented: using the new method it is shown that large solvent spheres are necessary to converge spin-spin coupling values.

  11. Frequency analysis via the method of moment functionals

    NASA Technical Reports Server (NTRS)

    Pearson, A. E.; Pan, J. Q.

    1990-01-01

    Several variants are presented of a linear-in-parameters least squares formulation for determining the transfer function of a stable linear system at specified frequencies given a finite set of Fourier series coefficients calculated from transient nonstationary input-output data. The basis of the technique is Shinbrot's classical method of moment functionals using complex Fourier based modulating functions to convert a differential equation model on a finite time interval into an algebraic equation which depends linearly on frequency-related parameters.
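
The essence of the modulating-function trick — trading differentiation of noisy data for differentiation of a known test function via integration by parts — can be sketched for a first-order model. This toy version uses a single real modulating function in place of the paper's complex Fourier-based ones:

```python
import math

def estimate_a(x, T=1.0, n=2000):
    """Estimate a in x'(t) + a*x(t) = 0 from samples of x alone.
    With phi(t) = sin^2(pi t / T), which vanishes at t = 0 and t = T,
    integrating phi*x' by parts gives  a = (int phi' x dt) / (int phi x dt),
    an algebraic equation in a with no derivative of the data required."""
    h = T / n
    ts = [i * h for i in range(n + 1)]
    phi = [math.sin(math.pi * t / T) ** 2 for t in ts]
    dphi = [(math.pi / T) * math.sin(2.0 * math.pi * t / T) for t in ts]
    trap = lambda f: h * (sum(f) - 0.5 * (f[0] + f[-1]))  # trapezoid rule
    num = trap([d * x(t) for d, t in zip(dphi, ts)])
    den = trap([p * x(t) for p, t in zip(phi, ts)])
    return num / den
```

For x(t) = exp(-2t) the estimate recovers a = 2; the paper's Fourier modulating functions play the same role but yield frequency-domain parameters in a least-squares setting.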

  12. Analytic reconstruction of magnetic resonance imaging signal obtained from a periodic encoding field.

    PubMed

    Rybicki, F J; Hrovat, M I; Patz, S

    2000-09-01

    We have proposed a two-dimensional PERiodic-Linear (PERL) magnetic encoding field geometry B(x, y) = g_y y cos(q_x x) and a magnetic resonance imaging pulse sequence which incorporates two fields to image a two-dimensional spin density: a standard linear gradient in the x dimension, and the PERL field. Because of its periodicity, the PERL field produces a signal where the phase of the two dimensions is functionally different. The x dimension is encoded linearly, but the y dimension appears as the argument of a sinusoidal phase term. Thus, the time-domain signal and image spin density are not related by a two-dimensional Fourier transform. They are related by a one-dimensional Fourier transform in the x dimension and a new Bessel function integral transform (the PERL transform) in the y dimension. The inverse of the PERL transform provides a reconstruction algorithm for the y dimension of the spin density from the signal space. To date, the inverse transform has been computed numerically by a Bessel function expansion over its basis functions. This numerical solution used a finite sum to approximate an infinite summation and thus introduced a truncation error. This work analytically determines the basis functions for the PERL transform and incorporates them into the reconstruction algorithm. The improved algorithm is demonstrated by (1) direct comparison between the numerically and analytically computed basis functions, and (2) reconstruction of a known spin density. The new solution for the basis functions also lends proof of the system function for the PERL transform under specific conditions.

  13. Simple and efficient LCAO basis sets for the diffuse states in carbon nanostructures.

    PubMed

    Papior, Nick R; Calogero, Gaetano; Brandbyge, Mads

    2018-06-27

    We present a simple way to describe the lowest unoccupied diffuse states in carbon nanostructures in density functional theory calculations using a minimal LCAO (linear combination of atomic orbitals) basis set. By comparing with plane-wave basis calculations, we show how these states can be captured by adding long-range orbitals to the standard LCAO basis sets for the extreme cases of planar sp2 carbon (graphene) and curved carbon (C60). In particular, using long-range Bessel functions as additional basis functions retains a minimal basis size. This provides a smaller and simpler atom-centered basis set compared to the standard pseudo-atomic orbitals (PAOs) with multiple polarization orbitals, or to adding non-atom-centered states to the basis.

  14. Reinforcement Learning with Orthonormal Basis Adaptation Based on Activity-Oriented Index Allocation

    NASA Astrophysics Data System (ADS)

    Satoh, Hideki

    An orthonormal basis adaptation method for function approximation was developed and applied to reinforcement learning with multi-dimensional continuous state space. First, a basis used for linear function approximation of a control function is set to an orthonormal basis. Next, basis elements with small activities are replaced with other candidate elements as learning progresses. As this replacement is repeated, the number of basis elements with large activities increases. Example chaos control problems for multiple logistic maps were solved, demonstrating that the method for adapting an orthonormal basis can modify a basis while holding the orthonormality in accordance with changes in the environment to improve the performance of reinforcement learning and to eliminate the adverse effects of redundant noisy states.
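
With an orthonormal basis, the least-squares coefficients of a linear function approximator are simply inner products, which is what makes replacing individual basis elements cheap: surviving coefficients need no refitting. A sketch with an assumed cosine basis on [0, 1] (the paper's basis and its activity-based replacement rule may differ):

```python
import math

def basis(k, x):
    """Orthonormal cosine basis on [0, 1]: b_0 = 1, b_k = sqrt(2) cos(pi k x)."""
    return 1.0 if k == 0 else math.sqrt(2.0) * math.cos(math.pi * k * x)

def project(f, K, n=2000):
    """With an orthonormal basis, the least-squares coefficients are just the
    inner products c_k = <f, b_k> (midpoint-rule quadrature used here)."""
    h = 1.0 / n
    xs = [(i + 0.5) * h for i in range(n)]
    return [h * sum(f(x) * basis(k, x) for x in xs) for k in range(K)]

def approx(coeffs, x):
    """Linear function approximation sum_k c_k b_k(x)."""
    return sum(c * basis(k, x) for k, c in enumerate(coeffs))
```

Projecting a function that is itself a basis element yields a single unit coefficient, illustrating the orthonormality that the adaptation method must preserve while swapping low-activity elements.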

  15. Simple and efficient LCAO basis sets for the diffuse states in carbon nanostructures

    NASA Astrophysics Data System (ADS)

    Papior, Nick R.; Calogero, Gaetano; Brandbyge, Mads

    2018-06-01

    We present a simple way to describe the lowest unoccupied diffuse states in carbon nanostructures in density functional theory calculations using a minimal LCAO (linear combination of atomic orbitals) basis set. By comparing plane wave basis calculations, we show how these states can be captured by adding long-range orbitals to the standard LCAO basis sets for the extreme cases of planar sp 2 (graphene) and curved carbon (C60). In particular, using Bessel functions with a long range as additional basis functions retain a minimal basis size. This provides a smaller and simpler atom-centered basis set compared to the standard pseudo-atomic orbitals (PAOs) with multiple polarization orbitals or by adding non-atom-centered states to the basis.

  16. Spline smoothing of histograms by linear programming

    NASA Technical Reports Server (NTRS)

    Bennett, J. O.

    1972-01-01

    An algorithm is presented for obtaining an approximating function to the frequency distribution from a sample of size n. To obtain the approximating function, a histogram is made from the data. Next, Euclidean-space approximations to the graph of the histogram, using central B-splines as basis elements, are obtained by linear programming. The approximating function has area one and is nonnegative.
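
The central B-spline basis elements themselves are easy to generate; the sketch below uses the Cox-de Boor recursion for cardinal B-splines (the linear-programming fit and the abstract's area-one/nonnegativity constraints are not shown):

```python
def bspline(k, x):
    """Cardinal B-spline of degree k, supported on [0, k+1], via the
    Cox-de Boor recursion on the uniform integer knot sequence."""
    if k == 0:
        return 1.0 if 0.0 <= x < 1.0 else 0.0
    return (x / k) * bspline(k - 1, x) + ((k + 1 - x) / k) * bspline(k - 1, x - 1)
```

Integer shifts of any such spline form a partition of unity, and each shifted copy is nonnegative with known area, which is what makes them convenient basis elements for a density estimate constrained by linear programming.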

  17. Fully-Implicit Orthogonal Reconstructed Discontinuous Galerkin for Fluid Dynamics with Phase Change

    DOE PAGES

    Nourgaliev, R.; Luo, H.; Weston, B.; ...

    2015-11-11

    A new reconstructed Discontinuous Galerkin (rDG) method, based on orthogonal basis/test functions, is developed for fluid flows on unstructured meshes. Orthogonality of basis functions is essential for enabling robust and efficient fully-implicit Newton-Krylov based time integration. The method is designed for generic partial differential equations, including transient, hyperbolic, parabolic or elliptic operators, which are attributed to many multiphysics problems. We demonstrate the method’s capabilities for solving compressible fluid-solid systems (in the low Mach number limit), with phase change (melting/solidification), as motivated by applications in Additive Manufacturing (AM). We focus on the method’s accuracy (in both space and time), as well as robustness and solvability of the system of linear equations involved in the linearization steps of Newton-based methods. The performance of the developed method is investigated for highly-stiff problems with melting/solidification, emphasizing the advantages from tight coupling of mass, momentum and energy conservation equations, as well as orthogonality of basis functions, which leads to better conditioning of the underlying (approximate) Jacobian matrices, and rapid convergence of the Krylov-based linear solver.

  18. Composite fermion basis for two-component Bose gases

    NASA Astrophysics Data System (ADS)

    Meyer, Marius; Liabotro, Ola

    The composite fermion (CF) construction is known to produce wave functions that are not necessarily orthogonal, or even linearly independent, after projection. While usually not a practical issue in the quantum Hall regime, we have previously shown that it presents a technical challenge for rotating Bose gases with low angular momentum. These are systems where the CF approach yields surprisingly good approximations to the exact eigenstates of weak short-range interactions, and so solving the problem of linearly dependent wave functions is of interest. It can also be useful for studying CF excitations for fermions. Here we present several ways of constructing a basis for the space of ``simple CF states'' for two-component rotating Bose gases in the lowest Landau level, and prove that they all give a basis. Using the basis, we study the structure of the lowest-lying state using so-called restricted wave functions. We also examine the scaling of the overlap between the exact and CF wave functions at the maximal possible angular momentum for simple states. This work was financially supported by the Research Council of Norway.

  19. Sparse matrix multiplications for linear scaling electronic structure calculations in an atom-centered basis set using multiatom blocks.

    PubMed

    Saravanan, Chandra; Shao, Yihan; Baer, Roi; Ross, Philip N; Head-Gordon, Martin

    2003-04-15

    A sparse matrix multiplication scheme with multiatom blocks is reported, a tool that can be very useful for developing linear-scaling methods with atom-centered basis functions. Compared to conventional element-by-element sparse matrix multiplication schemes, efficiency is gained by the use of the highly optimized basic linear algebra subroutines (BLAS). However, some sparsity is lost in the multiatom blocking scheme because these matrix blocks will in general contain negligible elements. As a result, an optimal block size that minimizes the CPU time by balancing these two effects is recovered. In calculations on linear alkanes, polyglycines, estane polymers, and water clusters the optimal block size is found to be between 40 and 100 basis functions, where about 55-75% of the machine peak performance was achieved on an IBM RS6000 workstation. In these calculations, the blocked sparse matrix multiplications can be 10 times faster than a standard element-by-element sparse matrix package. Copyright 2003 Wiley Periodicals, Inc. J Comput Chem 24: 618-622, 2003
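
The blocking idea can be sketched with plain nested dictionaries: block products are skipped entirely when either operand block is absent (i.e., negligible). A real implementation would dispatch the inner dense products to BLAS; this hypothetical pure-Python version only shows the control flow.

```python
def block_sparse_matmul(A, B, nblocks):
    """Multiply block-sparse matrices stored as {(I, K): dense block} dicts,
    where each dense block is a list of rows. Absent blocks are treated as zero,
    so the work scales with the number of nonzero block pairs."""
    C = {}
    for (I, K), a in A.items():
        for J in range(nblocks):
            b = B.get((K, J))
            if b is None:
                continue  # negligible block: the whole block product is skipped
            c = C.setdefault((I, J), [[0.0] * len(b[0]) for _ in range(len(a))])
            for i in range(len(a)):           # dense block product; in practice
                for k in range(len(b)):       # this triple loop is a BLAS gemm call
                    aik = a[i][k]
                    if aik:
                        for j in range(len(b[0])):
                            c[i][j] += aik * b[k][j]
    return C
```

The trade-off described in the abstract lives in the block size: bigger blocks mean fewer, larger gemm calls (better machine utilization) but more explicit zeros stored inside each block.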

  20. On 2- and 3-person games on polyhedral sets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Belenky, A.S.

    1994-12-31

    Special classes of 3-person games are considered where the sets of players' allowable strategies are polyhedral and the payoff functions are defined as maxima, on a polyhedral set, of a certain kind of sums of linear and bilinear functions. Necessary and sufficient conditions, which are easy to verify, for a Nash point in these games are established, and a finite method, based on these conditions, for calculating Nash points is proposed. It is shown that the game serves as a generalization of a model for a problem of waste products evacuation from a territory. The method makes it possible to reduce calculation of a Nash point to solving some linear and quadratic programming problems formulated on the basis of the original 3-person game. A class of 2-person games on connected polyhedral sets is considered, with the payoff function being a sum of two linear functions and one bilinear function. Necessary and sufficient conditions are established for the min-max, the max-min, and for a certain equilibrium. It is shown that the corresponding points can be calculated from auxiliary linear programming problems formulated on the basis of the master game.

  1. Understanding Individual-Level Change through the Basis Functions of a Latent Curve Model

    ERIC Educational Resources Information Center

    Blozis, Shelley A.; Harring, Jeffrey R.

    2017-01-01

    Latent curve models have become a popular approach to the analysis of longitudinal data. At the individual level, the model expresses an individual's response as a linear combination of what are called "basis functions" that are common to all members of a population and weights that may vary among individuals. This article uses…

  2. Biomedical Mathematics, Unit I: Measurement, Linear Functions and Dimensional Algebra. Student Text. Revised Version, 1975.

    ERIC Educational Resources Information Center

    Biomedical Interdisciplinary Curriculum Project, Berkeley, CA.

    This text presents lessons relating specific mathematical concepts to the ideas, skills, and tasks pertinent to the health care field. Among other concepts covered are linear functions, vectors, trigonometry, and statistics. Many of the lessons use data acquired during science experiments as the basis for exercises in mathematics. Lessons present…

  3. Photoelectric angle converter

    NASA Astrophysics Data System (ADS)

    Podzharenko, Volodymyr A.; Kulakov, Pavlo I.

    2001-06-01

    A photoelectric transmitter of the angle of rotation is presented, whose output voltage is a linear function of the input quantity. The transmitter uses a linear phototransducer based on a photodiode-operational amplifier pair, whose output voltage is a linear function of the area of the illuminated photosensitive layer, together with a specially shaped light-flux modulator that ensures a linear dependence of this area on the angle of rotation. The transmitter has good frequency characteristics and can be used for dynamic measurements of angular velocity and angle of rotation, and in precision drive and automatic control systems.

  4. A Galerkin approximation for linear elastic shallow shells

    NASA Astrophysics Data System (ADS)

    Figueiredo, I. N.; Trabucho, L.

    1992-03-01

    This work is a generalization to shallow shell models of previous results for plates by B. Miara (1989). Using the same basis functions as in the plate case, we construct a Galerkin approximation of the three-dimensional linearized elasticity problem, and establish some error estimates as a function of the thickness, the curvature, the geometry of the shell, the forces, and the Lamé constants.

  5. Refining Linear Fuzzy Rules by Reinforcement Learning

    NASA Technical Reports Server (NTRS)

    Berenji, Hamid R.; Khedkar, Pratap S.; Malkani, Anil

    1996-01-01

    Linear fuzzy rules are increasingly being used in the development of fuzzy logic systems. Radial basis functions have also been used in the antecedents of the rules for clustering in product space which can automatically generate a set of linear fuzzy rules from an input/output data set. Manual methods are usually used in refining these rules. This paper presents a method for refining the parameters of these rules using reinforcement learning which can be applied in domains where supervised input-output data is not available and reinforcements are received only after a long sequence of actions. This is shown for a generalization of radial basis functions. The formation of fuzzy rules from data and their automatic refinement is an important step in closing the gap between the application of reinforcement learning methods in the domains where only some limited input-output data is available.
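
The rule structure described here — radial-basis antecedents with linear consequents — is the Takagi-Sugeno form. A minimal evaluation sketch under that reading (Gaussian memberships; the reinforcement-learning refinement of the rule parameters is omitted):

```python
import math

def tsk_eval(rules, x):
    """Evaluate a set of linear fuzzy rules of the form
    'IF x is Gaussian(center, width) THEN y = a*x + b'.
    Each rule is (center, width, (a, b)); the output is the
    membership-weighted average of the linear consequents."""
    ws = [math.exp(-((x - c) / s) ** 2) for c, s, _ in rules]
    ys = [a * x + b for _, _, (a, b) in rules]
    return sum(w * y for w, y in zip(ws, ys)) / sum(ws)
```

Refinement, whether by supervised learning or by the reinforcement scheme of the paper, then amounts to adjusting the centers, widths, and linear coefficients of these rules.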

  6. A Solution to the Fundamental Linear Fractional Order Differential Equation

    NASA Technical Reports Server (NTRS)

    Hartley, Tom T.; Lorenzo, Carl F.

    1998-01-01

    This paper provides a solution to the fundamental linear fractional-order differential equation, namely _c d_t^q x(t) + a x(t) = b u(t). The impulse response solution is shown to be a series, named the F-function, which generalizes the normal exponential function. The F-function provides the basis for a qth-order "fractional pole". Complex plane behavior is elucidated and a simple example, the inductor-terminated semi-infinite lossy line, is used to demonstrate the theory.
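
The F-function is the power series F_q[a, t] = sum_{n>=0} a^n t^((n+1)q - 1) / Gamma((n+1)q). A sketch of a truncated evaluation follows (the fixed term count is an assumption, adequate only for moderate |a| t^q); for q = 1 the series collapses to the ordinary exponential e^(a t), which provides a convenient check:

```python
import math

def f_function(q, a, t, terms=80):
    """Truncated series for F_q[a, t] = sum_n a^n t^((n+1)q - 1) / Gamma((n+1)q),
    the impulse-response kernel of the fractional-order equation above (t > 0)."""
    return sum(a ** n * t ** ((n + 1) * q - 1) / math.gamma((n + 1) * q)
               for n in range(terms))
```

In the q = 1 limit the "fractional pole" becomes an ordinary first-order pole, and F_1[-a, t] = e^(-a t) is the familiar impulse response.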

  7. Linear Scaling Density Functional Calculations with Gaussian Orbitals

    NASA Technical Reports Server (NTRS)

    Scuseria, Gustavo E.

    1999-01-01

    Recent advances in linear scaling algorithms that circumvent the computational bottlenecks of large-scale electronic structure simulations make it possible to carry out density functional calculations with Gaussian orbitals on molecules containing more than 1000 atoms and 15000 basis functions using current workstations and personal computers. This paper discusses the recent theoretical developments that have led to these advances and demonstrates in a series of benchmark calculations the present capabilities of state-of-the-art computational quantum chemistry programs for the prediction of molecular structure and properties.

  8. Fully-Implicit Reconstructed Discontinuous Galerkin Method for Stiff Multiphysics Problems

    NASA Astrophysics Data System (ADS)

    Nourgaliev, Robert

    2015-11-01

    A new reconstructed Discontinuous Galerkin (rDG) method, based on orthogonal basis/test functions, is developed for fluid flows on unstructured meshes. Orthogonality of basis functions is essential for enabling robust and efficient fully-implicit Newton-Krylov based time integration. The method is designed for generic partial differential equations, including transient, hyperbolic, parabolic or elliptic operators, which are attributed to many multiphysics problems. We demonstrate the method's capabilities for solving compressible fluid-solid systems (in the low Mach number limit), with phase change (melting/solidification), as motivated by applications in Additive Manufacturing. We focus on the method's accuracy (in both space and time), as well as robustness and solvability of the system of linear equations involved in the linearization steps of Newton-based methods. The performance of the developed method is investigated for highly-stiff problems with melting/solidification, emphasizing the advantages from tight coupling of mass, momentum and energy conservation equations, as well as orthogonality of basis functions, which leads to better conditioning of the underlying (approximate) Jacobian matrices, and rapid convergence of the Krylov-based linear solver. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344, and funded by the LDRD at LLNL under project tracking code 13-SI-002.

  9. Solutions to Kuessner's integral equation in unsteady flow using local basis functions

    NASA Technical Reports Server (NTRS)

    Fromme, J. A.; Halstead, D. W.

    1975-01-01

    The computational procedure and numerical results are presented for a new method to solve Kuessner's integral equation in the case of subsonic compressible flow about harmonically oscillating planar surfaces with controls. Kuessner's equation is a linear transformation from pressure to normalwash. The unknown pressure is expanded in terms of prescribed basis functions and the unknown basis function coefficients are determined in the usual manner by satisfying the given normalwash distribution either collocationally or in the complex least squares sense. The present method of solution differs from previous ones in that the basis functions are defined in a continuous fashion over a relatively small portion of the aerodynamic surface and are zero elsewhere. This method, termed the local basis function method, combines the smoothness and accuracy of distribution methods with the simplicity and versatility of panel methods. Predictions by the local basis function method for unsteady flow are shown to be in excellent agreement with other methods. Also, potential improvements to the present method and extensions to more general classes of solutions are discussed.

  10. Simulation of herbicide degradation in different soils by use of Pedo-transfer functions (PTF) and non-linear kinetics.

    PubMed

    von Götz, N; Richter, O

    1999-03-01

    The degradation behaviour of bentazone in 14 different soils was examined at constant temperature and moisture conditions. Two soils were examined at different temperatures. On the basis of these data, the influence of soil properties and temperature on degradation was assessed and modelled. Pedo-transfer functions (PTF) in combination with a linear and a non-linear model were found suitable for describing bentazone degradation in the laboratory as related to soil properties. The linear PTF can be combined with a temperature-dependent rate term to account for the influences of soil properties and temperature at the same time.

  11. Calculating vibrational spectra with sum of product basis functions without storing full-dimensional vectors or matrices.

    PubMed

    Leclerc, Arnaud; Carrington, Tucker

    2014-05-07

    We propose an iterative method for computing vibrational spectra that significantly reduces the memory cost of calculations. It uses a direct product primitive basis, but does not require storing vectors with as many components as there are product basis functions. Wavefunctions are represented in a basis each of whose functions is a sum of products (SOP) and the factorizable structure of the Hamiltonian is exploited. If the factors of the SOP basis functions are properly chosen, wavefunctions are linear combinations of a small number of SOP basis functions. The SOP basis functions are generated using a shifted block power method. The factors are refined with a rank reduction algorithm to cap the number of terms in a SOP basis function. The ideas are tested on a 20-D model Hamiltonian and a realistic CH3CN (12-dimensional) potential. For the 20-D problem, to use a standard direct product iterative approach one would need to store vectors with about 10^20 components and would hence require about 8 × 10^11 GB. With the approach of this paper only 1 GB of memory is necessary. Results for CH3CN agree well with those of a previous calculation on the same potential.

  12. A hierarchical preconditioner for the electric field integral equation on unstructured meshes based on primal and dual Haar bases

    NASA Astrophysics Data System (ADS)

    Adrian, S. B.; Andriulli, F. P.; Eibert, T. F.

    2017-02-01

    A new hierarchical basis preconditioner for the electric field integral equation (EFIE) operator is introduced. In contrast to existing hierarchical basis preconditioners, it works on arbitrary meshes and preconditions both the vector and the scalar potential within the EFIE operator. This is obtained by taking into account that the vector and the scalar potential discretized with loop-star basis functions are related to the hypersingular and the single layer operator (i.e., the well known integral operators from acoustics). For the single layer operator discretized with piecewise constant functions, a hierarchical preconditioner can easily be constructed. Thus the strategy we propose in this work for preconditioning the EFIE is the transformation of the scalar and the vector potential into operators equivalent to the single layer operator and to its inverse. More specifically, when the scalar potential is discretized with star functions as source and testing functions, the resulting matrix is a single layer operator discretized with piecewise constant functions and multiplied left and right with two additional graph Laplacian matrices. By inverting these graph Laplacian matrices, the discretized single layer operator is obtained, which can be preconditioned with the hierarchical basis. Dually, when the vector potential is discretized with loop functions, the resulting matrix can be interpreted as a hypersingular operator discretized with piecewise linear functions. By leveraging a scalar Calderón identity, we can interpret this operator as spectrally equivalent to the inverse single layer operator. Then we use a linear-in-complexity, closed-form inverse of the dual hierarchical basis to precondition the hypersingular operator. The numerical results show the effectiveness of the proposed preconditioner and the practical impact of theoretical developments in real case scenarios.

  13. [Detection of linear chromosomes and plasmids among 15 genera in the Actinomycetales].

    PubMed

    Ma, Ning; Ma, Wei; Jiang, Chenglin; Fang, Ping; Qin, Zhongjun

    2003-10-01

    Bacterial chromosomes and plasmids are commonly circular; however, linear chromosomes and plasmids have been discovered among 5 genera of the Actinomycetales. Here, we use pulsed field gel electrophoresis to study the genomes of 19 species belonging to 15 genera in the Actinomycetales. The chromosomes of all 19 species are linear DNA, and linear plasmids with different sizes and copy numbers are detected among 5 species. This work provides a basis for investigating the possible novel functions of linear replicons beyond Streptomyces and also helps in developing Actinomycetales artificial linear chromosomes.

  14. Finite Element Based Structural Damage Detection Using Artificial Boundary Conditions

    DTIC Science & Technology

    2007-09-01

    C. (2005). Elementary Linear Algebra. New York: John Wiley and Sons. Avitable, Peter (2001, January) Experimental Modal Analysis, A Simple Non... variables under consideration. 3 Frequency sensitivities are the basis for a linear approximation to compute the change in the natural frequencies of a... THEORY The general problem statement for a non-linear constrained optimization problem is: To minimize f(x) (Objective Function) Subject to

  15. Mutual connectivity analysis (MCA) using generalized radial basis function neural networks for nonlinear functional connectivity network recovery in resting-state functional MRI

    NASA Astrophysics Data System (ADS)

    D'Souza, Adora M.; Abidin, Anas Zainul; Nagarajan, Mahesh B.; Wismüller, Axel

    2016-03-01

    We investigate the applicability of a computational framework, called mutual connectivity analysis (MCA), for directed functional connectivity analysis in both synthetic and resting-state functional MRI data. This framework comprises first evaluating non-linear cross-predictability between every pair of time series prior to recovering the underlying network structure using community detection algorithms. We obtain the non-linear cross-prediction score between time series using Generalized Radial Basis Functions (GRBF) neural networks. These cross-prediction scores characterize the underlying functionally connected networks within the resting brain, which can be extracted using non-metric clustering approaches, such as the Louvain method. We first test our approach on synthetic models with known directional influence and network structure. Our method is able to capture the directional relationships between time series (with an area under the ROC curve = 0.92 ± 0.037) as well as the underlying network structure (Rand index = 0.87 ± 0.063) with high accuracy. Furthermore, we test this method for network recovery on resting-state fMRI data, where results are compared to the motor cortex network recovered from a motor stimulation sequence, resulting in a strong agreement between the two (Dice coefficient = 0.45). We conclude that our MCA approach is effective in analyzing non-linear directed functional connectivity and in revealing underlying functional network structure in complex systems.
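
A minimal sketch of the cross-prediction idea, assuming a single Gaussian-RBF regressor per time-series pair and an R²-style score; the paper's GRBF networks and Louvain clustering are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

def rbf_cross_predict(x, y, centers=10, width=0.5, ridge=1e-6):
    """R^2-style score for predicting y[t+1] from x[t] with a Gaussian RBF net."""
    X, target = x[:-1], y[1:]
    c = np.linspace(X.min(), X.max(), centers)
    Phi = np.exp(-((X[:, None] - c[None, :]) ** 2) / (2 * width**2))
    w = np.linalg.solve(Phi.T @ Phi + ridge * np.eye(centers), Phi.T @ target)
    pred = Phi @ w
    return 1 - np.sum((target - pred) ** 2) / np.sum((target - target.mean()) ** 2)

t = np.arange(500)
x = np.sin(0.1 * t) + 0.01 * rng.standard_normal(500)
y_driven = np.roll(x, 1) + 0.01 * rng.standard_normal(500)  # y lags (is driven by) x
y_indep = rng.standard_normal(500)                          # y unrelated to x

score_driven = rbf_cross_predict(x, y_driven)
score_indep = rbf_cross_predict(x, y_indep)
print(score_driven)  # close to 1: strong directed cross-predictability
print(score_indep)   # close to 0: none
```

A matrix of such pairwise scores is the kind of directed-affinity input that a community detection step can then partition into networks.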

  16. Mutual Connectivity Analysis (MCA) Using Generalized Radial Basis Function Neural Networks for Nonlinear Functional Connectivity Network Recovery in Resting-State Functional MRI.

    PubMed

    DSouza, Adora M; Abidin, Anas Zainul; Nagarajan, Mahesh B; Wismüller, Axel

    2016-03-29

    We investigate the applicability of a computational framework, called mutual connectivity analysis (MCA), for directed functional connectivity analysis in both synthetic and resting-state functional MRI data. This framework comprises first evaluating non-linear cross-predictability between every pair of time series prior to recovering the underlying network structure using community detection algorithms. We obtain the non-linear cross-prediction score between time series using Generalized Radial Basis Functions (GRBF) neural networks. These cross-prediction scores characterize the underlying functionally connected networks within the resting brain, which can be extracted using non-metric clustering approaches, such as the Louvain method. We first test our approach on synthetic models with known directional influence and network structure. Our method is able to capture the directional relationships between time series (with an area under the ROC curve = 0.92 ± 0.037) as well as the underlying network structure (Rand index = 0.87 ± 0.063) with high accuracy. Furthermore, we test this method for network recovery on resting-state fMRI data, where results are compared to the motor cortex network recovered from a motor stimulation sequence, resulting in a strong agreement between the two (Dice coefficient = 0.45). We conclude that our MCA approach is effective in analyzing non-linear directed functional connectivity and in revealing underlying functional network structure in complex systems.

  17. Reconfigurable Flight Control Design using a Robust Servo LQR and Radial Basis Function Neural Networks

    NASA Technical Reports Server (NTRS)

    Burken, John J.

    2005-01-01

    This viewgraph presentation reviews the use of a Robust Servo Linear Quadratic Regulator (LQR) and a Radial Basis Function (RBF) Neural Network in reconfigurable flight control designs that adapt to an aircraft part failure. The method uses a robust LQR servomechanism design with model reference adaptive control and RBF neural networks. During the failure, the LQR servomechanism behaved well, and using the neural networks improved the tracking.

  18. On the importance of local orbitals using second energy derivatives for d and f electrons

    NASA Astrophysics Data System (ADS)

    Karsai, Ferenc; Tran, Fabien; Blaha, Peter

    2017-11-01

    The all-electron linearized augmented plane wave (LAPW) methods are among the most accurate to solve the Kohn-Sham equations of density functional theory for periodic solids. In the LAPW methods, the unit cell is partitioned into spheres surrounding the atoms, inside which the wave functions are expanded into spherical harmonics, and the interstitial region, where the wave functions are expanded in Fourier series. Recently, Michalicek et al. (2013) reported an analysis of the so-called linearization error, which is inherent to the basis functions inside the spheres, and advocated the use of local orbital basis functions involving the second energy derivative of the radial part (HDLO). In the present work, we report the implementation of such basis functions into the WIEN2k code, and discuss in detail the improvement in terms of accuracy. From our tests, which involve atoms from the whole periodic table, it is concluded that for ground-state properties (e.g., equilibrium volume) the use of HDLO is necessary only for atoms with d or f electrons in the valence and large atomic spheres. For unoccupied states which are not too high above the Fermi energy, HDLO systematically improve the band structure, which may be of importance for the calculation of optical properties.

  19. Spectral likelihood expansions for Bayesian inference

    NASA Astrophysics Data System (ADS)

    Nagel, Joseph B.; Sudret, Bruno

    2016-03-01

    A spectral approach to Bayesian inference is presented. It pursues the emulation of the posterior probability density. The starting point is a series expansion of the likelihood function in terms of orthogonal polynomials. From this spectral likelihood expansion all statistical quantities of interest can be calculated semi-analytically. The posterior is formally represented as the product of a reference density and a linear combination of polynomial basis functions. Both the model evidence and the posterior moments are related to the expansion coefficients. This formulation avoids Markov chain Monte Carlo simulation and allows one to make use of linear least squares instead. The pros and cons of spectral Bayesian inference are discussed and demonstrated on the basis of simple applications from classical statistics and inverse modeling.
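
The mechanics of a spectral likelihood expansion can be sketched in one dimension: fit the likelihood in Legendre polynomials by linear least squares, then read the evidence and the posterior mean directly off the coefficients via orthogonality. The Gaussian likelihood and uniform prior on [-1, 1] below are illustrative assumptions, not the paper's examples.

```python
import numpy as np
from numpy.polynomial import legendre

# Likelihood of one observation under a toy model theta -> N(theta, sigma),
# with a uniform prior (density 1/2) on [-1, 1]
y_obs, sigma = 0.2, 0.3
L = lambda th: np.exp(-0.5 * ((y_obs - th) / sigma) ** 2)

# Linear least-squares expansion of the likelihood in Legendre polynomials
th = np.linspace(-1, 1, 400)
coef = legendre.legfit(th, L(th), deg=20)

# Semi-analytic quantities from the coefficients: with L ~ sum_k c_k P_k,
# orthogonality gives evidence Z = c_0 and posterior mean = c_1 / (3 c_0).
Z = coef[0]
post_mean = coef[1] / (3 * coef[0])

# Cross-check against direct Gauss-Legendre quadrature
xg, wg = legendre.leggauss(64)
Z_quad = 0.5 * np.sum(wg * L(xg))
mean_quad = 0.5 * np.sum(wg * xg * L(xg)) / Z_quad
print(Z, Z_quad)
print(post_mean, mean_quad)
```

No sampling is involved anywhere: once the expansion coefficients are in hand, the evidence and moments are simple algebra, which is the point of the spectral formulation.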

  20. Operator bases, S-matrices, and their partition functions

    NASA Astrophysics Data System (ADS)

    Henning, Brian; Lu, Xiaochuan; Melia, Tom; Murayama, Hitoshi

    2017-10-01

    Relativistic quantum systems that admit scattering experiments are quantitatively described by effective field theories, where S-matrix kinematics and symmetry considerations are encoded in the operator spectrum of the EFT. In this paper we use the S-matrix to derive the structure of the EFT operator basis, providing complementary descriptions in (i) position space utilizing the conformal algebra and cohomology and (ii) momentum space via an algebraic formulation in terms of a ring of momenta with kinematics implemented as an ideal. These frameworks systematically handle redundancies associated with equations of motion (on-shell) and integration by parts (momentum conservation). We introduce a partition function, termed the Hilbert series, to enumerate the operator basis — correspondingly, the S-matrix — and derive a matrix integral expression to compute the Hilbert series. The expression is general, easily applied in any spacetime dimension, with arbitrary field content and (linearly realized) symmetries. In addition to counting, we discuss construction of the basis. Simple algorithms follow from the algebraic formulation in momentum space. We explicitly compute the basis for operators involving up to n = 5 scalar fields. This construction universally applies to fields with spin, since the operator basis for scalars encodes the momentum dependence of n-point amplitudes. We discuss in detail the operator basis for non-linearly realized symmetries. In the presence of massless particles, there is freedom to impose additional structure on the S-matrix in the form of soft limits. The most naïve implementation for massless scalars leads to the operator basis for pions, which we confirm using the standard CCWZ formulation for non-linear realizations. Although primarily discussed in the language of EFT, some of our results — conceptual and quantitative — may be of broader use in studying conformal field theories as well as the AdS/CFT correspondence.

  1. Individual differences in long-range time representation.

    PubMed

    Agostino, Camila S; Caetano, Marcelo S; Balci, Fuat; Claessens, Peter M E; Zana, Yossi

    2017-04-01

    On the basis of experimental data, long-range time representation has been proposed to follow a highly compressed power function, which has been hypothesized to explain the time inconsistency found in financial discount rate preferences. The aim of this study was to evaluate how well linear and power function models explain empirical data from individual participants tested in different procedural settings. The line paradigm was used in five different procedural variations with 35 adult participants. Data aggregated over the participants showed that fitted linear functions explained more than 98% of the variance in all procedures. A linear regression fit also outperformed a power model fit for the aggregated data. An individual-participant-based analysis showed better fits of a linear model to the data of 14 participants; better fits of a power function with an exponent β > 1 to the data of 12 participants; and better fits of a power function with β < 1 to the data of the remaining nine participants. Of the 35 volunteers, the null hypothesis β = 1 was rejected for 20. The dispersion of the individual β values was approximated well by a normal distribution. These results suggest that, on average, humans perceive long-range time intervals not in a highly compressed, biased manner, but rather in a linear pattern. However, individuals differ considerably in their subjective time scales. This contribution sheds new light on the average and individual psychophysical functions of long-range time representation, and suggests that any attribution of deviation from exponential discount rates in intertemporal choice to the compressed nature of subjective time must entail the characterization of subjective time on an individual-participant basis.
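
The per-participant model comparison can be sketched on synthetic data (hypothetical interval values and multiplicative noise, not the study's measurements): a power function is fit by log-log linear regression, recovering an exponent β, and compared against a straight-line fit.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic estimates for one participant whose subjective scale follows t^beta
t = np.array([3, 6, 12, 24, 48, 96], dtype=float)  # hypothetical intervals
beta_true = 0.8
est = t**beta_true * np.exp(0.01 * rng.standard_normal(t.size))

# Power-function fit via log-log regression: log est = beta * log t + log a
beta_fit, log_a = np.polyfit(np.log(t), np.log(est), 1)

# Straight-line fit for comparison, and sums of squared errors for both models
slope, intercept = np.polyfit(t, est, 1)
sse_pow = np.sum((est - np.exp(log_a) * t**beta_fit) ** 2)
sse_lin = np.sum((est - (slope * t + intercept)) ** 2)
print(beta_fit)           # recovers ~0.8 for this participant
print(sse_pow < sse_lin)  # compressed data favour the power model
```

Repeating this per participant, and testing whether the fitted β differs from 1, is the kind of individual-level analysis the abstract describes.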

  2. A Multi-Resolution Nonlinear Mapping Technique for Design and Analysis Applications

    NASA Technical Reports Server (NTRS)

    Phan, Minh Q.

    1998-01-01

    This report describes a nonlinear mapping technique where the unknown static or dynamic system is approximated by a sum of dimensionally increasing functions (one-dimensional curves, two-dimensional surfaces, etc.). These lower dimensional functions are synthesized from a set of multi-resolution basis functions, where the resolutions specify the level of details at which the nonlinear system is approximated. The basis functions also cause the parameter estimation step to become linear. This feature is taken advantage of to derive a systematic procedure to determine and eliminate basis functions that are less significant for the particular system under identification. The number of unknown parameters that must be estimated is thus reduced and compact models obtained. The lower dimensional functions (identified curves and surfaces) permit a kind of "visualization" into the complexity of the nonlinearity itself.
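
A one-dimensional sketch of the central idea, that a fixed basis makes parameter estimation linear, using hypothetical Gaussian bumps at two resolutions and a crude significance threshold for pruning; the report's actual basis functions and elimination procedure may differ.

```python
import numpy as np

rng = np.random.default_rng(3)

# Unknown static nonlinearity to identify from input-output data
f = lambda u: np.tanh(2 * u) + 0.3 * u**2
u = np.linspace(-1, 1, 200)
y = f(u) + 0.01 * rng.standard_normal(u.size)

# Multi-resolution basis: Gaussian bumps at a coarse and a fine resolution.
# Because the model is linear in the weights, the fit is ordinary least squares.
def bumps(u, n, width):
    c = np.linspace(-1, 1, n)
    return np.exp(-((u[:, None] - c[None, :]) ** 2) / (2 * width**2))

Phi = np.hstack([bumps(u, 5, 0.5), bumps(u, 17, 0.12)])
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
fit_resid = np.abs(Phi @ w - y).max()

# Prune basis functions whose contribution is insignificant, then refit
keep = np.abs(w) * Phi.std(axis=0) > 1e-3
w2, *_ = np.linalg.lstsq(Phi[:, keep], y, rcond=None)
print(keep.sum(), "of", Phi.shape[1], "basis functions kept")
print(fit_resid)  # fit error at the noise level
```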

  3. A Multi-Resolution Nonlinear Mapping Technique for Design and Analysis Application

    NASA Technical Reports Server (NTRS)

    Phan, Minh Q.

    1997-01-01

    This report describes a nonlinear mapping technique where the unknown static or dynamic system is approximated by a sum of dimensionally increasing functions (one-dimensional curves, two-dimensional surfaces, etc.). These lower dimensional functions are synthesized from a set of multi-resolution basis functions, where the resolutions specify the level of details at which the nonlinear system is approximated. The basis functions also cause the parameter estimation step to become linear. This feature is taken advantage of to derive a systematic procedure to determine and eliminate basis functions that are less significant for the particular system under identification. The number of unknown parameters that must be estimated is thus reduced and compact models obtained. The lower dimensional functions (identified curves and surfaces) permit a kind of "visualization" into the complexity of the nonlinearity itself.

  4. Direct recovery of regional tracer kinetics from temporally inconsistent dynamic ECT projections using dimension-reduced time-activity basis

    NASA Astrophysics Data System (ADS)

    Maltz, Jonathan S.

    2000-11-01

    We present an algorithm of reduced computational cost which is able to estimate kinetic model parameters directly from dynamic ECT sinograms made up of temporally inconsistent projections. The algorithm exploits the extreme degree of parameter redundancy inherent in linear combinations of the exponential functions which represent the modes of first-order compartmental systems. The singular value decomposition is employed to find a small set of orthogonal functions, the linear combinations of which are able to accurately represent all modes within the physiologically anticipated range in a given study. The reduced-dimension basis is formed as the convolution of this orthogonal set with a measured input function. The Moore-Penrose pseudoinverse is used to find coefficients of this basis. Algorithm performance is evaluated at realistic count rates using MCAT phantom and clinical 99mTc-teboroxime myocardial study data. Phantom data are modelled as originating from a Poisson process. For estimates recovered from a single slice projection set containing 2.5×10^5 total counts, recovered tissue responses compare favourably with those obtained using more computationally intensive methods. The corresponding kinetic parameter estimates (coefficients of the new basis) exhibit negligible bias, while parameter variances are low, falling within 30% of the Cramér-Rao lower bound.
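
The dimension-reduction step can be sketched as follows, under assumed time samples and rate ranges; the convolution with a measured input function and the sinogram-domain estimation are omitted.

```python
import numpy as np

t = np.linspace(0, 30, 60)          # hypothetical frame times (minutes)

# Exponential modes spanning an assumed physiological rate range
rates = np.logspace(-2, 0, 100)     # 1/min
E = np.exp(-np.outer(t, rates))     # each column is one mode

# SVD: a handful of orthogonal functions accurately spans all of the modes,
# reflecting the extreme parameter redundancy of the exponential family
U, s, Vt = np.linalg.svd(E, full_matrices=False)
k = 8
B = U[:, :k]                        # reduced-dimension basis

# A "measured" tissue response built from two modes inside the range
tac = 0.7 * np.exp(-0.05 * t) + 0.3 * np.exp(-0.4 * t)

# Moore-Penrose pseudoinverse gives coefficients of the reduced basis
coef = np.linalg.pinv(B) @ tac
recon = B @ coef
err = np.abs(recon - tac).max()
print(k, "basis functions; max representation error", err)
```

The rapid singular-value decay of exponential families is what lets a handful of coefficients stand in for the full kinetic parameterization.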

  5. An intangible energy in the functioning biosystem. II: Useful parallels with circuit theory and with non-linear optics.

    PubMed

    Reid, B L

    1995-06-01

    The argument is developed that a structure and function already exist in selected inanimate systems for an intangible energy dissipating in these systems and that, in so doing, this energy exhibits certain properties readily recognised in the functioning biosystem. The central thesis is that, during dissipation, the structure of the biosystem affords opportunity for an enhanced display of these properties, so that this structure can be rationally recognised as obligatory in the transition from inanimate to animate matter. The systems chosen are those of reactance in linear circuit theory of electronics, and some recent developments in non-linear optics, both of which rely on imaginary or quantal force to display observable effects. Discussion occurs on the fashion in which the development of a statistical formalism as a basis for the study of squeezed states of light in these non-linear systems has, at the same time, overcome a long-standing veto on the practical use of quantal energy associated with the Uncertainty Principle of Heisenberg. These ideas are used to vindicate the suggestion that a theoretical basis is presently available for an engineering-type approach toward an intangible force as it exists in the biosystem. The origins and properties of such a force continue to be considered by many as immersed in mysticism.

  6. Determination of the temperature distribution in a minichannel using ANSYS CFX and a procedure based on the Trefftz functions

    NASA Astrophysics Data System (ADS)

    Maciejewska, Beata; Błasiak, Sławomir; Piasecka, Magdalena

    This work discusses a mathematical model for laminar-flow heat transfer in a minichannel. The boundary conditions, in the form of temperature distributions on the outer sides of the channel walls, were determined from experimental data. The data were collected from an experimental stand whose essential part is a vertical minichannel 1.7 mm deep, 16 mm wide and 180 mm long, asymmetrically heated by a Haynes-230 alloy plate. Infrared thermography was used to determine temperature changes on the outer side of the minichannel walls. The problem was analysed numerically with either ANSYS CFX software or special calculation procedures based on the Finite Element Method and Trefftz functions in the thermal boundary layer. The Trefftz functions were used to construct the basis functions. Solutions to the governing differential equations were approximated with a linear combination of Trefftz-type basis functions, and the unknown coefficients of the linear combination were calculated by minimising a functional. The results of the comparative analysis were presented in graphical form and discussed.
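
The Trefftz idea, that each basis function satisfies the governing equation exactly so only the boundary data need to be fitted, can be sketched for the 2-D Laplace equation with harmonic-polynomial basis functions (an illustrative stand-in for the paper's heat-transfer problem):

```python
import numpy as np

# Trefftz basis for the 2-D Laplace equation: harmonic polynomials
# Re(z^n), Im(z^n). Every basis function satisfies the PDE exactly,
# so the linear fit only has to match the boundary condition.
def trefftz_basis(x, y, nmax):
    z = x + 1j * y
    cols = [np.ones_like(x)]
    for n in range(1, nmax + 1):
        cols += [(z**n).real, (z**n).imag]
    return np.stack(cols, axis=1)

# Boundary points of the unit square and a prescribed boundary "temperature"
s = np.linspace(0, 1, 50)
xb = np.concatenate([s, np.ones_like(s), s, np.zeros_like(s)])
yb = np.concatenate([np.zeros_like(s), s, np.ones_like(s), s])
T_exact = lambda x, y: x**2 - y**2 + 2 * x * y + x  # a harmonic function

# Linear least squares for the combination coefficients
A = trefftz_basis(xb, yb, 4)
c, *_ = np.linalg.lstsq(A, T_exact(xb, yb), rcond=None)

# The interior solution follows from the boundary fit alone
xi, yi = np.array([0.3]), np.array([0.7])
T_fit = trefftz_basis(xi, yi, 4) @ c
print(T_fit[0], T_exact(0.3, 0.7))
```

Because the chosen boundary data come from a harmonic polynomial inside the basis span, the interior value is reproduced essentially exactly; for real data the fit minimizes a boundary functional, as in the paper.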

  7. Preface: Introductory Remarks: Linear Scaling Methods

    NASA Astrophysics Data System (ADS)

    Bowler, D. R.; Fattebert, J.-L.; Gillan, M. J.; Haynes, P. D.; Skylaris, C.-K.

    2008-07-01

    It has been just over twenty years since the publication of the seminal paper on molecular dynamics with ab initio methods by Car and Parrinello [1], and the contribution of density functional theory (DFT) and the related techniques to physics, chemistry, materials science, earth science and biochemistry has been huge. Nevertheless, significant improvements are still being made to the performance of these standard techniques; recent work suggests that speed improvements of one or even two orders of magnitude are possible [2]. One of the areas where major progress has long been expected is in O(N), or linear scaling, DFT, in which the computer effort is proportional to the number of atoms. Linear scaling DFT methods have been in development for over ten years [3] but we are now in an exciting period where more and more research groups are working on these methods. Naturally there is a strong and continuing effort to improve the efficiency of the methods and to make them more robust. But there is also a growing ambition to apply them to challenging real-life problems. This special issue contains papers submitted following the CECAM Workshop 'Linear-scaling ab initio calculations: applications and future directions', held in Lyon from 3-6 September 2007. A noteworthy feature of the workshop is that it included a significant number of presentations involving real applications of O(N) methods, as well as work to extend O(N) methods into areas of greater accuracy (correlated wavefunction methods, quantum Monte Carlo, TDDFT) and large scale computer architectures. As well as explicitly linear scaling methods, the conference included presentations on techniques designed to accelerate and improve the efficiency of standard (that is non-linear-scaling) methods; this highlights the important question of crossover—that is, at what size of system does it become more efficient to use a linear-scaling method? 
As well as fundamental algorithmic questions, this brings up implementation questions relating to parallelization (particularly with multi-core processors starting to dominate the market) and inherent scaling and basis sets (in both normal and linear scaling codes). For now, the answer seems to lie between 100 and 1,000 atoms, though this depends on the type of simulation used, among other factors. Basis sets are still a problematic question in the area of electronic structure calculations. The linear scaling community has largely split into two camps: those using relatively small basis sets based on local atomic-like functions (where systematic convergence to the full basis set limit is hard to achieve); and those that use necessarily larger basis sets which allow systematic convergence and are therefore the localised equivalent of plane waves. Related to basis sets is the study of Wannier functions, on which some linear scaling methods are based and which give a good point of contact with traditional techniques; they are particularly interesting for modelling unoccupied states with linear scaling methods. There are, of course, as many approaches to the linear scaling solution for the density matrix as there are groups in the area, though there are various broad areas: McWeeny-based methods, fragment-based methods, recursion methods, and combinations of these. While many ideas have been in development for several years, there are still improvements emerging, as shown by the rich variety of the talks below. Applications using O(N) DFT methods are now starting to emerge, though they are still clearly not trivial. Once systems to be simulated cross the 10,000-atom barrier, only linear scaling methods can be applied, even with the most efficient standard techniques. One of the most challenging problems remaining, now that ab initio methods can be applied to large systems, is the long timescale problem.
Although much of the work presented was concerned with improving the performance of the codes and applying them to scientifically important problems, there was another important theme: extending functionality. The search for greater accuracy has produced an implementation of a density functional designed to model van der Waals interactions accurately, as well as local correlation, TDDFT, QMC and GW methods which, while not explicitly O(N), take advantage of localisation. All speakers at the workshop were invited to contribute to this issue, but not all were able to do so. Hence it is useful to give a complete list of the talks presented, with the names of the sessions; however, many talks fell within more than one area. This is an exciting time for linear scaling methods, which are already starting to contribute significantly to important scientific problems.

Applications to nanostructures and biomolecules
- A DFT study on the structural stability of Ge 3D nanostructures on Si(001) using CONQUEST (Tsuyoshi Miyazaki, D R Bowler, M J Gillan, T Otsuka and T Ohno)
- Large scale electronic structure calculation theory and several applications (Takeo Fujiwara and Takeo Hoshi)
- ONETEP: Linear-scaling DFT with plane waves (Chris-Kriton Skylaris, Peter D Haynes, Arash A Mostofi, Mike C Payne)
- Maximally-localised Wannier functions as building blocks for large-scale electronic structure calculations (Arash A Mostofi and Nicola Marzari)
- A linear scaling three dimensional fragment method for ab initio calculations (Lin-Wang Wang, Zhengji Zhao, Juan Meza)
- Peta-scalable reactive molecular dynamics simulation of mechanochemical processes (Aiichiro Nakano, Rajiv K. Kalia, Ken-ichi Nomura, Fuyuki Shimojo and Priya Vashishta)
- Recent developments and applications of the real-space multigrid (RMG) method (Jerzy Bernholc, M Hodak, W Lu, and F Ribeiro)

Energy minimisation functionals and algorithms
- CONQUEST: A linear scaling DFT code (David R Bowler, Tsuyoshi Miyazaki, Antonio Torralba, Veronika Brazdova, Milica Todorovic, Takao Otsuka and Mike Gillan)
- Kernel optimisation and the physical significance of optimised local orbitals in the ONETEP code (Peter Haynes, Chris-Kriton Skylaris, Arash Mostofi and Mike Payne)
- A miscellaneous overview of SIESTA algorithms (Jose M Soler)
- Wavelets as a basis set for electronic structure calculations and electrostatic problems (Stefan Goedecker)
- Wavelets as a basis set for linear scaling electronic structure calculations (Mark Rayson)
- O(N) Krylov subspace method for large-scale ab initio electronic structure calculations (Taisuke Ozaki)
- Linear scaling calculations with the divide-and-conquer approach and with non-orthogonal localized orbitals (Weitao Yang)
- Toward efficient wavefunction based linear scaling energy minimization (Valery Weber)
- Accurate O(N) first-principles DFT calculations using finite differences and confined orbitals (Jean-Luc Fattebert)

Linear-scaling methods in dynamics simulations or beyond DFT and ground state properties
- An O(N) time-domain algorithm for TDDFT (Guan Hua Chen)
- Local correlation theory and electronic delocalization (Joseph Subotnik)
- Ab initio molecular dynamics with linear scaling: foundations and applications (Eiji Tsuchida)
- Towards a linear scaling Car-Parrinello-like approach to Born-Oppenheimer molecular dynamics (Thomas Kühne, Michele Ceriotti, Matthias Krack and Michele Parrinello)
- Partial linear scaling for quantum Monte Carlo calculations on condensed matter (Mike Gillan)
- Exact embedding of local defects in crystals using maximally localized Wannier functions (Eric Cancès)
- Faster GW calculations in larger model structures using ultralocalized nonorthogonal Wannier functions (Paolo Umari)

Other approaches for linear-scaling, including methods for metals
- Partition-of-unity finite element method for large, accurate electronic-structure calculations of metals (John E Pask and Natarajan Sukumar)
- Semiclassical approach to density functional theory (Kieron Burke)
- Ab initio transport calculations in defected carbon nanotubes using O(N) techniques (Blanca Biel, F J Garcia-Vidal, A Rubio and F Flores)
- Large-scale calculations with the tight-binding (screened) KKR method (Rudolf Zeller)

Acknowledgments: We gratefully acknowledge funding for the workshop from the UK CCP9 network, CECAM and the ESF through the PsiK network. DRB, PDH and CKS are funded by the Royal Society.

References
[1] Car R and Parrinello M 1985 Phys. Rev. Lett. 55 2471
[2] Kühne T D, Krack M, Mohamed F R and Parrinello M 2007 Phys. Rev. Lett. 98 066401
[3] Goedecker S 1999 Rev. Mod. Phys. 71 1085

  8. Auxiliary basis expansions for large-scale electronic structure calculations.

    PubMed

    Jung, Yousung; Sodt, Alex; Gill, Peter M W; Head-Gordon, Martin

    2005-05-10

    One way to reduce the computational cost of electronic structure calculations is to use auxiliary basis expansions to approximate four-center integrals in terms of two- and three-center integrals, usually by using the variationally optimum Coulomb metric to determine the expansion coefficients. However, the long-range decay behavior of the auxiliary basis expansion coefficients has not been characterized. We find that this decay can be surprisingly slow. Numerical experiments on linear alkanes and a toy model both show that the decay can be as slow as 1/r in the distance between the auxiliary function and the fitted charge distribution. The Coulomb metric fitting equations also involve divergent matrix elements for extended systems treated with periodic boundary conditions. An attenuated Coulomb metric that is short-range can eliminate these oddities without substantially degrading calculated relative energies. The sparsity of the fit coefficients is assessed on simple hydrocarbon molecules and shows quite early onset of linear growth in the number of significant coefficients with system size using the attenuated Coulomb metric. Hence it is possible to design linear scaling auxiliary basis methods without additional approximations to treat large systems.
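The metric-weighted fitting described above can be illustrated with a minimal sketch (the grid, Gaussian auxiliary functions, and identity metric below are illustrative stand-ins, not the paper's basis sets): the expansion coefficients follow from the normal equations of a metric-weighted least-squares problem, where a Coulomb or attenuated Coulomb metric would replace the weight matrix W.

```python
import numpy as np

# Fit a target distribution rho(x) as a linear combination of auxiliary
# Gaussians g_k by minimizing the metric-weighted error
#   (rho - sum_k c_k g_k, W (rho - sum_k c_k g_k)),
# giving the normal equations (G^T W G) c = G^T W rho on a grid.
x = np.linspace(-5.0, 5.0, 201)
centers = np.linspace(-4.0, 4.0, 9)
G = np.exp(-(x[:, None] - centers[None, :]) ** 2 / 2)  # auxiliary basis on grid

rho = np.exp(-x**2 / 2)          # target charge distribution (illustrative)

# Identity metric (overlap fitting) for simplicity; the Coulomb metric would
# be the kernel matrix of 1/|r - r'|, the attenuated metric erfc(w r)/r.
W = np.eye(len(x))
c = np.linalg.solve(G.T @ W @ G, G.T @ W @ rho)

fit = G @ c
print(np.max(np.abs(fit - rho)))  # fitting residual on the grid
```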

  9. Highly efficient implementation of pseudospectral time-dependent density-functional theory for the calculation of excitation energies of large molecules.

    PubMed

    Cao, Yixiang; Hughes, Thomas; Giesen, Dave; Halls, Mathew D; Goldberg, Alexander; Vadicherla, Tati Reddy; Sastry, Madhavi; Patel, Bhargav; Sherman, Woody; Weisman, Andrew L; Friesner, Richard A

    2016-06-15

We have developed and implemented pseudospectral time-dependent density-functional theory (TDDFT) in the quantum mechanics package Jaguar to calculate restricted singlet and restricted triplet, as well as unrestricted, excitation energies with either full linear response (FLR) or the Tamm-Dancoff approximation (TDA). Pseudospectral length scales, pseudospectral atomic corrections, and a pseudospectral multigrid strategy are included in the implementation to improve chemical accuracy and to speed up the pseudospectral calculations. Calculations based on pseudospectral TDDFT with full linear response (PS-FLR-TDDFT) and within the Tamm-Dancoff approximation (PS-TDA-TDDFT) for G2 set molecules using B3LYP/6-31G** show mean and maximum absolute deviations of 0.0015 eV and 0.0081 eV, 0.0007 eV and 0.0064 eV, and 0.0004 eV and 0.0022 eV for restricted singlet, restricted triplet, and unrestricted excitation energies, respectively, compared with results from the conventional spectral method. Applications of PS-FLR-TDDFT to OLED molecules and organic dyes, together with comparisons against best estimates, demonstrate the accuracy of both PS-FLR-TDDFT and PS-TDA-TDDFT. Calculations for a set of medium-sized molecules, including Cn fullerenes and nanotubes, using the B3LYP functional and the 6-31G** basis set show that PS-TDA-TDDFT provides 19- to 34-fold speedups for Cn fullerenes with 450-1470 basis functions, 11- to 32-fold speedups for nanotubes with 660-3180 basis functions, and 9- to 16-fold speedups for organic molecules with 540-1340 basis functions compared to fully analytic calculations, without sacrificing chemical accuracy. 
Calculations on a set of larger molecules, including the antibiotic drug Ramoplanin, the 46-residue crambin protein, fullerenes up to C540 and nanotubes up to 14×(6,6), using the B3LYP functional and the 6-31G** basis set with up to 8100 basis functions, show that the PS-FLR-TDDFT CPU time scales as N^2.05 with the number of basis functions. © 2016 Wiley Periodicals, Inc.

  10. Self-consistent implementation of meta-GGA functionals for the ONETEP linear-scaling electronic structure package.

    PubMed

    Womack, James C; Mardirossian, Narbe; Head-Gordon, Martin; Skylaris, Chris-Kriton

    2016-11-28

    Accurate and computationally efficient exchange-correlation functionals are critical to the successful application of linear-scaling density functional theory (DFT). Local and semi-local functionals of the density are naturally compatible with linear-scaling approaches, having a general form which assumes the locality of electronic interactions and which can be efficiently evaluated by numerical quadrature. Presently, the most sophisticated and flexible semi-local functionals are members of the meta-generalized-gradient approximation (meta-GGA) family, and depend upon the kinetic energy density, τ, in addition to the charge density and its gradient. In order to extend the theoretical and computational advantages of τ-dependent meta-GGA functionals to large-scale DFT calculations on thousands of atoms, we have implemented support for τ-dependent meta-GGA functionals in the ONETEP program. In this paper we lay out the theoretical innovations necessary to implement τ-dependent meta-GGA functionals within ONETEP's linear-scaling formalism. We present expressions for the gradient of the τ-dependent exchange-correlation energy, necessary for direct energy minimization. We also derive the forms of the τ-dependent exchange-correlation potential and kinetic energy density in terms of the strictly localized, self-consistently optimized orbitals used by ONETEP. To validate the numerical accuracy of our self-consistent meta-GGA implementation, we performed calculations using the B97M-V and PKZB meta-GGAs on a variety of small molecules. Using only a minimal basis set of self-consistently optimized local orbitals, we obtain energies in excellent agreement with large basis set calculations performed using other codes. Finally, to establish the linear-scaling computational cost and applicability of our approach to large-scale calculations, we present the outcome of self-consistent meta-GGA calculations on amyloid fibrils of increasing size, up to tens of thousands of atoms.

  11. Self-consistent implementation of meta-GGA functionals for the ONETEP linear-scaling electronic structure package

    NASA Astrophysics Data System (ADS)

    Womack, James C.; Mardirossian, Narbe; Head-Gordon, Martin; Skylaris, Chris-Kriton

    2016-11-01

    Accurate and computationally efficient exchange-correlation functionals are critical to the successful application of linear-scaling density functional theory (DFT). Local and semi-local functionals of the density are naturally compatible with linear-scaling approaches, having a general form which assumes the locality of electronic interactions and which can be efficiently evaluated by numerical quadrature. Presently, the most sophisticated and flexible semi-local functionals are members of the meta-generalized-gradient approximation (meta-GGA) family, and depend upon the kinetic energy density, τ, in addition to the charge density and its gradient. In order to extend the theoretical and computational advantages of τ-dependent meta-GGA functionals to large-scale DFT calculations on thousands of atoms, we have implemented support for τ-dependent meta-GGA functionals in the ONETEP program. In this paper we lay out the theoretical innovations necessary to implement τ-dependent meta-GGA functionals within ONETEP's linear-scaling formalism. We present expressions for the gradient of the τ-dependent exchange-correlation energy, necessary for direct energy minimization. We also derive the forms of the τ-dependent exchange-correlation potential and kinetic energy density in terms of the strictly localized, self-consistently optimized orbitals used by ONETEP. To validate the numerical accuracy of our self-consistent meta-GGA implementation, we performed calculations using the B97M-V and PKZB meta-GGAs on a variety of small molecules. Using only a minimal basis set of self-consistently optimized local orbitals, we obtain energies in excellent agreement with large basis set calculations performed using other codes. Finally, to establish the linear-scaling computational cost and applicability of our approach to large-scale calculations, we present the outcome of self-consistent meta-GGA calculations on amyloid fibrils of increasing size, up to tens of thousands of atoms.

  12. The underlying pathway structure of biochemical reaction networks

    PubMed Central

    Schilling, Christophe H.; Palsson, Bernhard O.

    1998-01-01

    Bioinformatics is yielding extensive, and in some cases complete, genetic and biochemical information about individual cell types and cellular processes, providing the composition of living cells and the molecular structure of their components. These components together perform integrated cellular functions that now need to be analyzed. In particular, the functional definition of biochemical pathways and their role in the context of the whole cell is lacking. In this study, we show how the mass balance constraints that govern the function of biochemical reaction networks lead to the translation of this problem into the realm of linear algebra. The functional capabilities of biochemical reaction networks, and thus the choices that cells can make, are reflected in the null space of their stoichiometric matrix. The null space is spanned by a finite number of basis vectors. We present an algorithm for the synthesis of a set of basis vectors for spanning the null space of the stoichiometric matrix, in which these basis vectors represent the underlying biochemical pathways that are fundamental to the corresponding biochemical reaction network. In other words, all possible flux distributions achievable by a defined set of biochemical reactions are represented by a linear combination of these basis pathways. These basis pathways thus represent the underlying pathway structure of the defined biochemical reaction network. This development is significant from a fundamental and conceptual standpoint because it yields a holistic definition of biochemical pathways in contrast to definitions that have arisen from the historical development of our knowledge about biochemical processes. Additionally, this new conceptual framework will be important in defining, characterizing, and studying biochemical pathways from the rapidly growing information on cellular function. PMID:9539712
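The null-space construction can be sketched on a hypothetical toy network (the three-metabolite network and its reactions are invented for illustration; the paper's algorithm for choosing a particular pathway basis is not reproduced here). Every steady-state flux vector v satisfies S v = 0, so a basis for the null space of S spans all achievable flux distributions.

```python
import numpy as np

# Toy network: A -> B -> C, a shortcut A -> C, plus exchange fluxes for A and C.
# Rows are metabolites, columns are the reactions (v_in, v1, v2, v3, v_out).
S = np.array([
    [ 1, -1,  0, -1,  0],   # A:  v_in - v1 - v3
    [ 0,  1, -1,  0,  0],   # B:  v1 - v2
    [ 0,  0,  1,  1, -1],   # C:  v2 + v3 - v_out
])

# Null-space basis from the SVD: right singular vectors whose singular value
# is (numerically) zero span {v : S v = 0}.
U, s, Vt = np.linalg.svd(S)
null_dim = S.shape[1] - int(np.sum(s > 1e-10))
N = Vt[-null_dim:].T            # columns are basis "pathways"

print(null_dim)                 # 2 independent basis pathways for this network
assert np.allclose(S @ N, 0)    # every basis column is a steady-state flux mode
```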

  13. Chiropractic biophysics technique: a linear algebra approach to posture in chiropractic.

    PubMed

    Harrison, D D; Janik, T J; Harrison, G R; Troyanovich, S; Harrison, D E; Harrison, S O

    1996-10-01

    This paper discusses linear algebra as applied to human posture in chiropractic, specifically chiropractic biophysics technique (CBP). Rotations, reflections and translations are geometric functions studied in vector spaces in linear algebra. These mathematical functions are termed rigid body transformations and are applied to segmental spinal movement in the literature. Review of the literature indicates that these linear algebra concepts have been used to describe vertebral motion. However, these rigid body movers are presented here as applying to the global postural movements of the head, thoracic cage and pelvis. The unique inverse functions of rotations, reflections and translations provide a theoretical basis for making postural corrections in neutral static resting posture. Chiropractic biophysics technique (CBP) uses these concepts in examination procedures, manual spinal manipulation, instrument assisted spinal manipulation, postural exercises, extension traction and clinical outcome measures.
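The rigid-body transformations mentioned above can be illustrated with a generic 2-D sketch (purely illustrative geometry, not CBP's clinical procedure): a rotation plus translation of a postural landmark is undone exactly by composing the inverse functions.

```python
import numpy as np

# Planar rotation matrix R(theta); its inverse is R(-theta).
def rotation(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

head = np.array([0.0, 1.0])                      # landmark position (arbitrary units)
theta = np.deg2rad(15)                           # hypothetical angular displacement
t = np.array([0.02, 0.0])                        # hypothetical translation

displaced = rotation(theta) @ head + t           # modeled postural displacement
restored = rotation(-theta) @ (displaced - t)    # inverse translation, then inverse rotation

print(np.allclose(restored, head))               # the correction recovers the posture
```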

  14. A practical radial basis function equalizer.

    PubMed

    Lee, J; Beach, C; Tepedelenlioglu, N

    1999-01-01

    A radial basis function (RBF) equalizer design process has been developed in which the number of basis function centers used is substantially fewer than conventionally required. The reduction of centers is accomplished in two steps. First, an algorithm is used to select a reduced set of centers that lie close to the decision boundary. Then the centers in this reduced set are grouped, and an average position is chosen to represent each group. Channel order and delay, which are determining factors in setting the initial number of centers, are estimated from regression analysis. In simulation studies, an RBF equalizer with more than a 2000-to-1 reduction in centers performed as well as the RBF equalizer without reduction in centers, and better than a conventional linear equalizer.
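The two-step center reduction can be caricatured as follows (the boundary test, thresholds, and merge radius are invented for illustration; the paper's selection and grouping algorithms are more sophisticated): keep candidate centers near the decision boundary, then merge nearby same-class survivors into their group average.

```python
import numpy as np

rng = np.random.default_rng(0)
centers = rng.normal(size=(200, 2))               # candidate channel-state centers
labels = (centers[:, 0] > 0).astype(int)          # two decision classes (toy boundary x = 0)

# Step 1: keep only centers close to the decision boundary.
near = np.abs(centers[:, 0]) < 0.3
kept, kept_labels = centers[near], labels[near]

# Step 2: greedy grouping - merge same-class centers within a radius,
# representing each group by its average position.
reduced = []
used = np.zeros(len(kept), dtype=bool)
for i in range(len(kept)):
    if used[i]:
        continue
    group = (~used) & (kept_labels == kept_labels[i]) \
            & (np.linalg.norm(kept - kept[i], axis=1) < 0.5)
    used |= group
    reduced.append(kept[group].mean(axis=0))

print(len(centers), "->", len(reduced))           # reduced center count
```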

  15. Linear scaling computation of the Fock matrix. II. Rigorous bounds on exchange integrals and incremental Fock build

    NASA Astrophysics Data System (ADS)

    Schwegler, Eric; Challacombe, Matt; Head-Gordon, Martin

    1997-06-01

    A new linear scaling method for computation of the Cartesian Gaussian-based Hartree-Fock exchange matrix is described, which employs a method numerically equivalent to standard direct SCF, and which does not enforce locality of the density matrix. With a previously described method for computing the Coulomb matrix [J. Chem. Phys. 106, 5526 (1997)], linear scaling incremental Fock builds are demonstrated for the first time. Microhartree accuracy and linear scaling are achieved for restricted Hartree-Fock calculations on sequences of water clusters and polyglycine α-helices with the 3-21G and 6-31G basis sets. Eightfold speedups are found relative to our previous method. For systems with a small ionization potential, such as graphitic sheets, the method naturally reverts to the expected quadratic behavior. Also, benchmark 3-21G calculations attaining microhartree accuracy are reported for the P53 tetramerization monomer involving 698 atoms and 3836 basis functions.

  16. Operator bases, S-matrices, and their partition functions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Henning, Brian; Lu, Xiaochuan; Melia, Tom

    Relativistic quantum systems that admit scattering experiments are quantitatively described by effective field theories, where S-matrix kinematics and symmetry considerations are encoded in the operator spectrum of the EFT. In this paper we use the S-matrix to derive the structure of the EFT operator basis, providing complementary descriptions in (i) position space, utilizing the conformal algebra and cohomology, and (ii) momentum space, via an algebraic formulation in terms of a ring of momenta with kinematics implemented as an ideal. These frameworks systematically handle redundancies associated with equations of motion (on-shell) and integration by parts (momentum conservation). We introduce a partition function, termed the Hilbert series, to enumerate the operator basis — correspondingly, the S-matrix — and derive a matrix integral expression to compute the Hilbert series. The expression is general, easily applied in any spacetime dimension, with arbitrary field content and (linearly realized) symmetries. In addition to counting, we discuss construction of the basis. Simple algorithms follow from the algebraic formulation in momentum space. We explicitly compute the basis for operators involving up to n = 5 scalar fields. This construction applies universally to fields with spin, since the operator basis for scalars encodes the momentum dependence of n-point amplitudes. We discuss in detail the operator basis for non-linearly realized symmetries. In the presence of massless particles, there is freedom to impose additional structure on the S-matrix in the form of soft limits. The most naïve implementation for massless scalars leads to the operator basis for pions, which we confirm using the standard CCWZ formulation for non-linear realizations. Finally, although primarily discussed in the language of EFT, some of our results — conceptual and quantitative — may be of broader use in studying conformal field theories as well as the AdS/CFT correspondence.

  17. Operator bases, S-matrices, and their partition functions

    DOE PAGES

    Henning, Brian; Lu, Xiaochuan; Melia, Tom; ...

    2017-10-27

    Relativistic quantum systems that admit scattering experiments are quantitatively described by effective field theories, where S-matrix kinematics and symmetry considerations are encoded in the operator spectrum of the EFT. In this paper we use the S-matrix to derive the structure of the EFT operator basis, providing complementary descriptions in (i) position space, utilizing the conformal algebra and cohomology, and (ii) momentum space, via an algebraic formulation in terms of a ring of momenta with kinematics implemented as an ideal. These frameworks systematically handle redundancies associated with equations of motion (on-shell) and integration by parts (momentum conservation). We introduce a partition function, termed the Hilbert series, to enumerate the operator basis — correspondingly, the S-matrix — and derive a matrix integral expression to compute the Hilbert series. The expression is general, easily applied in any spacetime dimension, with arbitrary field content and (linearly realized) symmetries. In addition to counting, we discuss construction of the basis. Simple algorithms follow from the algebraic formulation in momentum space. We explicitly compute the basis for operators involving up to n = 5 scalar fields. This construction applies universally to fields with spin, since the operator basis for scalars encodes the momentum dependence of n-point amplitudes. We discuss in detail the operator basis for non-linearly realized symmetries. In the presence of massless particles, there is freedom to impose additional structure on the S-matrix in the form of soft limits. The most naïve implementation for massless scalars leads to the operator basis for pions, which we confirm using the standard CCWZ formulation for non-linear realizations. Finally, although primarily discussed in the language of EFT, some of our results — conceptual and quantitative — may be of broader use in studying conformal field theories as well as the AdS/CFT correspondence.

  18. Approximating a retarded-advanced differential equation that models human phonation

    NASA Astrophysics Data System (ADS)

    Teodoro, M. Filomena

    2017-11-01

    In [1, 2, 3] we obtained the numerical solution of a linear mixed type functional differential equation (MTFDE), introduced initially in [4], considering the autonomous and non-autonomous cases by collocation, least squares and finite element methods with a B-spline basis set. The present work introduces a numerical scheme using the least squares method (LSM) and Gaussian basis functions to solve numerically a nonlinear mixed type equation with symmetric delay and advance which models human phonation. The preliminary results are promising, with an accuracy comparable to the previous results.

  19. Auxiliary basis expansions for large-scale electronic structure calculations

    PubMed Central

    Jung, Yousung; Sodt, Alex; Gill, Peter M. W.; Head-Gordon, Martin

    2005-01-01

    One way to reduce the computational cost of electronic structure calculations is to use auxiliary basis expansions to approximate four-center integrals in terms of two- and three-center integrals, usually by using the variationally optimum Coulomb metric to determine the expansion coefficients. However, the long-range decay behavior of the auxiliary basis expansion coefficients has not been characterized. We find that this decay can be surprisingly slow. Numerical experiments on linear alkanes and a toy model both show that the decay can be as slow as 1/r in the distance between the auxiliary function and the fitted charge distribution. The Coulomb metric fitting equations also involve divergent matrix elements for extended systems treated with periodic boundary conditions. An attenuated Coulomb metric that is short-range can eliminate these oddities without substantially degrading calculated relative energies. The sparsity of the fit coefficients is assessed on simple hydrocarbon molecules and shows quite early onset of linear growth in the number of significant coefficients with system size using the attenuated Coulomb metric. Hence it is possible to design linear scaling auxiliary basis methods without additional approximations to treat large systems. PMID:15845767

  20. Neural-like computing with populations of superparamagnetic basis functions.

    PubMed

    Mizrahi, Alice; Hirtzlin, Tifenn; Fukushima, Akio; Kubota, Hitoshi; Yuasa, Shinji; Grollier, Julie; Querlioz, Damien

    2018-04-18

    In neuroscience, population coding theory demonstrates that neural assemblies can achieve fault-tolerant information processing. Mapped to nanoelectronics, this strategy could allow for reliable computing with scaled-down, noisy, imperfect devices. Doing so requires that the population components form a set of basis functions in terms of their response functions to inputs, offering a physical substrate for computing. Such a population can be implemented with CMOS technology, but the corresponding circuits have high area or energy requirements. Here, we show that nanoscale magnetic tunnel junctions can instead be assembled to meet these requirements. We demonstrate experimentally that a population of nine junctions can implement a basis set of functions, providing the data to achieve, for example, the generation of cursive letters. We design hybrid magnetic-CMOS systems based on interlinked populations of junctions and show that they can learn to realize non-linear variability-resilient transformations with a low imprint area and low power.

  1. Decoupled ARX and RBF Neural Network Modeling Using PCA and GA Optimization for Nonlinear Distributed Parameter Systems.

    PubMed

    Zhang, Ridong; Tao, Jili; Lu, Renquan; Jin, Qibing

    2018-02-01

    Modeling of distributed parameter systems is difficult because of their nonlinearity and infinite-dimensional characteristics. Based on principal component analysis (PCA), a hybrid modeling strategy that consists of a decoupled linear autoregressive exogenous (ARX) model and a nonlinear radial basis function (RBF) neural network model is proposed. The spatial-temporal output is first decomposed into a few dominant spatial basis functions and finite-dimensional temporal series by PCA. Then, a decoupled ARX model is designed to model the linear dynamics of the dominant modes of the time series. The nonlinear residual part is subsequently parameterized by RBFs, where a genetic algorithm is utilized to optimize the hidden layer structure and the parameters. Finally, the nonlinear spatial-temporal dynamic system is obtained after the time/space reconstruction. Simulation results of a catalytic rod and a heat conduction equation demonstrate the effectiveness of the proposed strategy compared to several other methods.
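The PCA decomposition step can be sketched on synthetic space-time data (the field and the number of retained modes are invented for illustration; the subsequent ARX modeling of the temporal series and RBF fitting of the residual are not shown):

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 50)
t = np.linspace(0, 10, 200)
# Synthetic field y(x, t): two spatial modes with distinct time dynamics + noise.
Y = (np.outer(np.sin(np.pi * x), np.sin(t))
     + 0.3 * np.outer(np.sin(2 * np.pi * x), np.cos(3 * t))
     + 0.01 * rng.normal(size=(50, 200)))

# PCA via SVD of the mean-centered snapshot matrix.
Ymean = Y.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(Y - Ymean, full_matrices=False)
k = 2
phi = U[:, :k]                   # dominant spatial basis functions
a = np.diag(s[:k]) @ Vt[:k]      # finite-dimensional temporal series (ARX/RBF targets)

recon = phi @ a + Ymean          # time/space reconstruction from k modes
print(np.linalg.norm(Y - recon) / np.linalg.norm(Y))  # small relative error
```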

  2. Estimation of Energy Expenditure Using a Patch-Type Sensor Module with an Incremental Radial Basis Function Neural Network

    PubMed Central

    Li, Meina; Kwak, Keun-Chang; Kim, Youn Tae

    2016-01-01

    Conventionally, indirect calorimetry has been used to estimate oxygen consumption in an effort to accurately measure human body energy expenditure. However, calorimetry requires the subject to wear a mask that is neither convenient nor comfortable. The purpose of our study is to develop a patch-type sensor module with an embedded incremental radial basis function neural network (RBFNN) for estimating the energy expenditure. The sensor module contains one ECG electrode and a three-axis accelerometer, and can perform real-time heart rate (HR) and movement index (MI) monitoring. The embedded incremental network includes linear regression (LR) and RBFNN based on context-based fuzzy c-means (CFCM) clustering. This incremental network is constructed by building a collection of information granules through CFCM clustering that is guided by the distribution of error of the linear part of the LR model. PMID:27669249

  3. A computational method for solving stochastic Itô–Volterra integral equations based on stochastic operational matrix for generalized hat basis functions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heydari, M.H., E-mail: heydari@stu.yazd.ac.ir; The Laboratory of Quantum Information Processing, Yazd University, Yazd; Hooshmandasl, M.R., E-mail: hooshmandasl@yazd.ac.ir

    2014-08-01

    In this paper, a new computational method based on generalized hat basis functions is proposed for solving stochastic Itô–Volterra integral equations. In this way, a new stochastic operational matrix for generalized hat functions on the finite interval [0,T] is obtained. By using these basis functions and their stochastic operational matrix, such problems can be transformed into linear lower triangular systems of algebraic equations which can be directly solved by forward substitution. The rate of convergence of the proposed method is also considered and shown to be O(1/n^2). Further, in order to show the accuracy and reliability of the proposed method, the new approach is compared with the block pulse functions method on several examples. The obtained results reveal that the proposed method is more accurate and efficient than the block pulse functions method.
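The forward-substitution step that solves the resulting lower-triangular system can be sketched directly (the hat-function operational matrix itself is not reproduced; L and b below are arbitrary illustrative values):

```python
import numpy as np

# Solve L c = b for lower-triangular L by forward substitution:
# c_i = (b_i - sum_{j<i} L_ij c_j) / L_ii.
def forward_substitution(L, b):
    n = len(b)
    c = np.zeros(n)
    for i in range(n):
        c[i] = (b[i] - L[i, :i] @ c[:i]) / L[i, i]
    return c

L = np.array([[2.0, 0.0, 0.0],
              [1.0, 3.0, 0.0],
              [4.0, 1.0, 5.0]])
b = np.array([2.0, 5.0, 14.0])
c = forward_substitution(L, b)
print(np.allclose(L @ c, b))   # the triangular system is solved exactly
```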

  4. Predicting haemodynamic networks using electrophysiology: The role of non-linear and cross-frequency interactions

    PubMed Central

    Tewarie, P.; Bright, M.G.; Hillebrand, A.; Robson, S.E.; Gascoyne, L.E.; Morris, P.G.; Meier, J.; Van Mieghem, P.; Brookes, M.J.

    2016-01-01

    Understanding the electrophysiological basis of resting state networks (RSNs) in the human brain is a critical step towards elucidating how inter-areal connectivity supports healthy brain function. In recent years, the relationship between RSNs (typically measured using haemodynamic signals) and electrophysiology has been explored using functional Magnetic Resonance Imaging (fMRI) and magnetoencephalography (MEG). Significant progress has been made, with similar spatial structure observable in both modalities. However, there is a pressing need to understand this relationship beyond simple visual similarity of RSN patterns. Here, we introduce a mathematical model to predict fMRI-based RSNs using MEG. Our unique model, based upon a multivariate Taylor series, incorporates both phase and amplitude based MEG connectivity metrics, as well as linear and non-linear interactions within and between neural oscillations measured in multiple frequency bands. We show that including non-linear interactions, multiple frequency bands and cross-frequency terms significantly improves fMRI network prediction. This shows that fMRI connectivity is not only the result of direct electrophysiological connections, but is also driven by the overlap of connectivity profiles between separate regions. Our results indicate that a complete understanding of the electrophysiological basis of RSNs goes beyond simple frequency-specific analysis, and further exploration of non-linear and cross-frequency interactions will shed new light on distributed network connectivity, and its perturbation in pathology. PMID:26827811

  5. Min-Max Spaces and Complexity Reduction in Min-Max Expansions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gaubert, Stephane, E-mail: Stephane.Gaubert@inria.fr; McEneaney, William M., E-mail: wmceneaney@ucsd.edu

    2012-06-15

    Idempotent methods have been found to be extremely helpful in the numerical solution of certain classes of nonlinear control problems. In those methods, one uses the fact that the value function lies in the space of semiconvex functions (in the case of maximizing controllers), and approximates this value using a truncated max-plus basis expansion. In some classes, the value function is actually convex, and then one specifically approximates with suprema (i.e., max-plus sums) of affine functions. Note that the space of convex functions is a max-plus linear space, or moduloid. In extending those concepts to game problems, one finds a different function space, and a different algebra, to be appropriate. Here we consider functions which may be represented using infima (i.e., min-max sums) of max-plus affine functions. It is natural to refer to the class of functions so represented as the min-max linear space (or moduloid) of max-plus hypo-convex functions. We examine this space, the associated notion of duality and min-max basis expansions. In using these methods for the solution of control problems, and now games, a critical step is complexity reduction. In particular, one needs to find reduced-complexity expansions which approximate the function as well as possible. We obtain a solution to this complexity-reduction problem in the case of min-max expansions.
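The max-plus expansion idea, and the complexity-reduction step of discarding basis elements that never contribute, can be sketched for a convex function approximated by a supremum of affine functions (a standard max-plus construction with illustrative grids; the paper's min-max reduction algorithm is not reproduced):

```python
import numpy as np

x = np.linspace(-2, 2, 401)
f = 0.5 * x**2                                  # convex target value function

# Supporting lines of f at x0 = a:  a*x - a**2/2 touches f from below.
slopes = np.linspace(-4, 4, 41)                 # deliberately wider than the domain
lines = slopes[:, None] * x[None, :] - 0.5 * slopes[:, None] ** 2
approx = lines.max(axis=0)                      # max-plus sum of affine functions

# Complexity reduction: keep only the affine pieces active somewhere on the grid.
active = np.unique(lines.argmax(axis=0))
reduced = lines[active].max(axis=0)

assert np.allclose(reduced, approx)             # same function, fewer basis elements
print(len(slopes), "->", len(active), "affine pieces")
```

Since every line is a supporting line, the expansion approximates f from below; dropping never-active pieces changes nothing on the grid, which is the essence of complexity reduction for such expansions.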

  6. Vowel Imagery Decoding toward Silent Speech BCI Using Extreme Learning Machine with Electroencephalogram

    PubMed Central

    Kim, Jongin; Park, Hyeong-jun

    2016-01-01

    The purpose of this study is to classify EEG data on imagined speech in a single trial. We recorded EEG data while five subjects imagined different vowels, /a/, /e/, /i/, /o/, and /u/. We divided each single trial dataset into thirty segments and extracted features (mean, variance, standard deviation, and skewness) from all segments. To reduce the dimension of the feature vector, we applied a feature selection algorithm based on the sparse regression model. These features were classified using a support vector machine with a radial basis function kernel, an extreme learning machine, and two variants of an extreme learning machine with different kernels. Because each single trial consisted of thirty segments, our algorithm decided the label of the single trial by selecting the most frequent output among the outputs of the thirty segments. As a result, we observed that the extreme learning machine and its variants achieved better classification rates than the support vector machine with a radial basis function kernel and linear discrimination analysis. Thus, our results suggested that EEG responses to imagined speech could be successfully classified in a single trial using an extreme learning machine with a radial basis function and linear kernel. This study with classification of imagined speech might contribute to the development of silent speech BCI systems. PMID:28097128
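The trial-level decision rule described above reduces to a majority vote over the thirty segment outputs, as in this sketch (the segment predictions are invented for illustration):

```python
from collections import Counter

# Each of the 30 segments yields a predicted vowel label; the trial takes
# the most frequent one.
segment_predictions = ["a"] * 12 + ["e"] * 8 + ["i"] * 10   # hypothetical outputs
trial_label = Counter(segment_predictions).most_common(1)[0][0]
print(trial_label)   # "a": the label most frequent among the 30 segments
```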

  7. General contraction of Gaussian basis sets. II - Atomic natural orbitals and the calculation of atomic and molecular properties

    NASA Technical Reports Server (NTRS)

    Almlof, Jan; Taylor, Peter R.

    1990-01-01

    A recently proposed scheme for using natural orbitals from atomic configuration interaction wave functions as a basis set for linear combination of atomic orbitals (LCAO) calculations is extended for the calculation of molecular properties. For one-electron properties like multipole moments, which are determined largely by the outermost regions of the molecular wave function, it is necessary to increase the flexibility of the basis in these regions. This is most easily done by uncontracting the outermost Gaussian primitives, and/or by adding diffuse primitives. A similar approach can be employed for the calculation of polarizabilities. Properties which are not dominated by the long-range part of the wave function, such as spectroscopic constants or electric field gradients at the nucleus, can generally be treated satisfactorily with the original atomic natural orbital sets.

  8. Initial evaluation of discrete orthogonal basis reconstruction of ECT images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moody, E.B.; Donohue, K.D.

    1996-12-31

    Discrete orthogonal basis restoration (DOBR) is a linear, non-iterative, and robust method for solving inverse problems for systems characterized by shift-variant transfer functions. This simulation study evaluates the feasibility of using DOBR for reconstructing emission computed tomographic (ECT) images. The imaging system model uses typical SPECT parameters and incorporates the effects of attenuation, spatially-variant PSF, and Poisson noise in the projection process. Sample reconstructions and statistical error analyses for a class of digital phantoms compare the DOBR performance for Hartley and Walsh basis functions. Test results confirm that DOBR with either basis set produces images with good statistical properties. No problems were encountered with reconstruction instability. The flexibility of the DOBR method and its consistent performance warrant further investigation of DOBR as a means of ECT image reconstruction.

  9. Estimation of reflectance from camera responses by the regularized local linear model.

    PubMed

    Zhang, Wei-Feng; Tang, Gongguo; Dai, Dao-Qing; Nehorai, Arye

    2011-10-01

    Because fixed basis functions have limited approximation capability, reflectance estimates obtained with traditional linear models are not optimal. We propose an approach based on a regularized local linear model. Our approach performs efficiently and requires no knowledge of the spectral power distribution of the illuminant or the spectral sensitivities of the camera. Experimental results show that the proposed method performs better than several well-known methods in terms of both reflectance error and colorimetric error. © 2011 Optical Society of America
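A regularized local linear estimator of the kind described can be sketched as follows. The neighbourhood size, ridge parameter, and the toy linear camera model are illustrative assumptions, not details from the paper.

```python
import numpy as np

def local_linear_estimate(C_train, R_train, c_query, k=20, lam=1e-3):
    """Estimate a reflectance spectrum from a camera response with a
    ridge-regularized local linear model (illustrative sketch)."""
    # k training responses nearest to the query response.
    idx = np.argsort(np.linalg.norm(C_train - c_query, axis=1))[:k]
    Ca = np.hstack([C_train[idx], np.ones((k, 1))])   # bias column
    # Ridge-regularized local least squares: min ||Ca W - R||^2 + lam ||W||^2.
    W = np.linalg.solve(Ca.T @ Ca + lam * np.eye(Ca.shape[1]),
                        Ca.T @ R_train[idx])
    return np.append(c_query, 1.0) @ W

rng = np.random.default_rng(0)
R_train = rng.uniform(0, 1, (300, 8))   # reflectance at 8 wavelengths
A = rng.uniform(0, 1, (8, 3))           # toy camera sensitivity matrix
C_train = R_train @ A                   # 3-channel camera responses
r_true = rng.uniform(0, 1, 8)
r_est = local_linear_estimate(C_train, R_train, r_true @ A)
```

Fitting locally around each query lets the model track a nonlinear response-to-reflectance relation that a single global linear fit would miss.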

  10. SCF and CI calculations of the dipole moment function of ozone. [Self-Consistent Field and Configuration-Interaction

    NASA Technical Reports Server (NTRS)

    Curtiss, L. A.; Langhoff, S. R.; Carney, G. D.

    1979-01-01

    The constant and linear terms in a Taylor series expansion of the dipole moment function of the ground state of ozone are calculated with Cartesian Gaussian basis sets ranging in quality from minimal to double zeta plus polarization. Results are presented at both the self-consistent field and configuration-interaction levels. Although the algebraic signs of the linear dipole moment derivatives are all established to be positive, the absolute magnitudes of these quantities, as well as the infrared intensities calculated from them, vary considerably with the level of theory.

  11. A density matrix-based method for the linear-scaling calculation of dynamic second- and third-order properties at the Hartree-Fock and Kohn-Sham density functional theory levels.

    PubMed

    Kussmann, Jörg; Ochsenfeld, Christian

    2007-11-28

    A density matrix-based time-dependent self-consistent field (D-TDSCF) method for the calculation of dynamic polarizabilities and first hyperpolarizabilities using the Hartree-Fock and Kohn-Sham density functional theory approaches is presented. The D-TDSCF method allows us to reduce the asymptotic scaling behavior of the computational effort from cubic to linear for systems with a nonvanishing band gap. The linear scaling is achieved by combining a density matrix-based reformulation of the TDSCF equations with linear-scaling schemes for the formation of Fock- or Kohn-Sham-type matrices. In our reformulation only potentially linear-scaling matrices enter the formulation and efficient sparse algebra routines can be employed. Furthermore, the corresponding formulas for the first hyperpolarizabilities are given in terms of zeroth- and first-order one-particle reduced density matrices according to Wigner's (2n+1) rule. The scaling behavior of our method is illustrated for first exemplary calculations with systems of up to 1011 atoms and 8899 basis functions.

  12. Boundary-Layer Receptivity and Integrated Transition Prediction

    NASA Technical Reports Server (NTRS)

    Chang, Chau-Lyan; Choudhari, Meelan

    2005-01-01

    The adjoint parabolized stability equations (PSE) formulation is used to calculate the boundary-layer receptivity to localized surface roughness and suction for compressible boundary layers. Receptivity efficiency functions predicted by the adjoint PSE approach agree well with results based on other nonparallel methods, including linearized Navier-Stokes equations, for both Tollmien-Schlichting waves and crossflow instability in swept-wing boundary layers. The receptivity efficiency function can be regarded as the Green's function for the disturbance amplitude evolution in a nonparallel (growing) boundary layer. Given the Fourier-transformed geometry factor distribution along the chordwise direction, the linear disturbance amplitude evolution for a finite-size, distributed nonuniformity can be computed by evaluating the integral effects of both disturbance generation and linear amplification. The synergistic approach via the linear adjoint PSE for receptivity and the nonlinear PSE for disturbance evolution downstream of the leading edge forms the basis for an integrated transition prediction tool. Eventually, such physics-based, high-fidelity prediction methods could simulate the transition process from disturbance generation through nonlinear breakdown in a holistic manner.

  13. A Bayesian spatial model for neuroimaging data based on biologically informed basis functions.

    PubMed

    Huertas, Ismael; Oldehinkel, Marianne; van Oort, Erik S B; Garcia-Solis, David; Mir, Pablo; Beckmann, Christian F; Marquand, Andre F

    2017-11-01

    The dominant approach to neuroimaging data analysis employs the voxel as the unit of computation. While convenient, voxels lack biological meaning and their size is arbitrarily determined by the resolution of the image. Here, we propose a multivariate spatial model in which neuroimaging data are characterised as a linearly weighted combination of multiscale basis functions which map onto underlying brain nuclei or networks. In this model, the elementary building blocks are derived to reflect the functional anatomy of the brain during the resting state. This model is estimated using a Bayesian framework which accurately quantifies uncertainty and automatically finds the most accurate and parsimonious combination of basis functions describing the data. We demonstrate the utility of this framework by predicting quantitative SPECT images of striatal dopamine function and we compare a variety of basis sets including generic isotropic functions, anatomical representations of the striatum derived from structural MRI, and two different soft functional parcellations of the striatum derived from resting-state fMRI (rfMRI). We found that a combination of ∼50 multiscale functional basis functions accurately represented the striatal dopamine activity, and that functional basis functions derived from an advanced parcellation technique known as Instantaneous Connectivity Parcellation (ICP) provided the most parsimonious models of dopamine function. Importantly, functional basis functions derived from resting fMRI were more accurate than both structural and generic basis sets in representing dopamine function in the striatum for a fixed model order. We demonstrate the translational validity of our framework by constructing classification models for discriminating parkinsonian disorders and their subtypes. Here, we show that the ICP approach is the only basis set that performs well across all comparisons and performs better overall than the classical voxel-based approach. This spatial model constitutes an elegant alternative to voxel-based approaches in neuroimaging studies; not only are its atoms biologically informed, they are also adaptive to high resolutions, represent high dimensions efficiently, and capture long-range spatial dependencies, which are important and challenging objectives for neuroimaging data. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  14. Structured functional additive regression in reproducing kernel Hilbert spaces.

    PubMed

    Zhu, Hongxiao; Yao, Fang; Zhang, Hao Helen

    2014-06-01

    Functional additive models (FAMs) provide a flexible yet simple framework for regressions involving functional predictors. The utilization of data-driven basis in an additive rather than linear structure naturally extends the classical functional linear model. However, the critical issue of selecting nonlinear additive components has been less studied. In this work, we propose a new regularization framework for the structure estimation in the context of Reproducing Kernel Hilbert Spaces. The proposed approach takes advantage of the functional principal components which greatly facilitates the implementation and the theoretical analysis. The selection and estimation are achieved by penalized least squares using a penalty which encourages the sparse structure of the additive components. Theoretical properties such as the rate of convergence are investigated. The empirical performance is demonstrated through simulation studies and a real data application.

  15. Fragment approach to constrained density functional theory calculations using Daubechies wavelets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ratcliff, Laura E.; Genovese, Luigi; Mohr, Stephan

    2015-06-21

    In a recent paper, we presented a linear scaling Kohn-Sham density functional theory (DFT) code based on Daubechies wavelets, where a minimal set of localized support functions are optimized in situ and therefore adapted to the chemical properties of the molecular system. Thanks to the systematically controllable accuracy of the underlying basis set, this approach is able to provide an optimal contracted basis for a given system: accuracies for ground state energies and atomic forces are of the same quality as an uncontracted, cubic scaling approach. This basis set offers, by construction, a natural subset where the density matrix of the system can be projected. In this paper, we demonstrate the flexibility of this minimal basis formalism in providing a basis set that can be reused as-is, i.e., without reoptimization, for charge-constrained DFT calculations within a fragment approach. Support functions, represented in the underlying wavelet grid, of the template fragments are roto-translated with high numerical precision to the required positions and used as projectors for the charge weight function. We demonstrate the interest of this approach to express highly precise and efficient calculations for preparing diabatic states and for the computational setup of systems in complex environments.

  16. Non-linear molecular pattern classification using molecular beacons with multiple targets.

    PubMed

    Lee, In-Hee; Lee, Seung Hwan; Park, Tai Hyun; Zhang, Byoung-Tak

    2013-12-01

    In vitro pattern classification has been highlighted as an important future application of DNA computing. Previous work has demonstrated the feasibility of linear classifiers using DNA-based molecular computing. However, complex tasks require non-linear classification capability. Here we design a molecular beacon that can interact with multiple targets and experimentally show that its fluorescent signals form a complex radial-basis function, enabling it to be used as a building block for non-linear molecular classification in vitro. The proposed method was successfully applied to solving artificial and real-world classification problems: XOR and microRNA expression patterns. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
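The claim that radial-basis responses enable non-linear classification can be illustrated with the XOR problem mentioned above: no linear classifier separates XOR, but two Gaussian radial-basis units plus a linear readout solve it exactly. This is a generic numerical sketch, not the molecular implementation.

```python
import numpy as np

# XOR inputs and labels.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([0, 1, 1, 0], float)

# Two Gaussian radial-basis units centred on the class-0 corners.
centers = np.array([[0, 0], [1, 1]], float)

def rbf_features(X, centers, gamma=2.0):
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Linear readout on top of the radial-basis features (plus bias).
Phi = np.hstack([rbf_features(X, centers), np.ones((4, 1))])
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
pred = (Phi @ w > 0.5).astype(int)   # correctly classifies all four points
```

In the radial-basis feature space the two XOR classes become linearly separable, which is exactly what makes a radial-basis signal a useful non-linear building block.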

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    McKemmish, Laura K., E-mail: laura.mckemmish@gmail.com; Research School of Chemistry, Australian National University, Canberra

    Algorithms for the efficient calculation of two-electron integrals in the newly developed mixed ramp-Gaussian basis sets are presented, alongside a Fortran90 implementation of these algorithms, RAMPITUP. These new basis sets have significant potential to (1) give some speed-up (estimated at up to 20% for large molecules in fully optimised code) to general-purpose Hartree-Fock (HF) and density functional theory quantum chemistry calculations, replacing all-Gaussian basis sets, and (2) give very large speed-ups for calculations of core-dependent properties, such as electron density at the nucleus, NMR parameters, relativistic corrections, and total energies, replacing the current use of Slater basis functions or very large specialised all-Gaussian basis sets for these purposes. This initial implementation already demonstrates roughly 10% speed-ups in HF/R-31G calculations compared to HF/6-31G calculations for large linear molecules, demonstrating the promise of this methodology, particularly for the second application. As well as the reduction in the total primitive number in R-31G compared to 6-31G, this timing advantage can be attributed to the significant reduction in the number of mathematically complex intermediate integrals after modelling each ramp-Gaussian basis-function-pair as a sum of ramps on a single atomic centre.

  18. The construction of general basis functions in reweighting ensemble dynamics simulations: Reproduce equilibrium distribution in complex systems from multiple short simulation trajectories

    NASA Astrophysics Data System (ADS)

    Zhang, Chuan-Biao; Ming, Li; Xin, Zhou

    2015-12-01

    Ensemble simulations, which use multiple short independent trajectories started from dispersed initial conformations, rather than the single long trajectory used in traditional simulations, are expected to sample complex systems such as biomolecules much more efficiently. Re-weighted ensemble dynamics (RED) is designed to combine these short trajectories to reconstruct the global equilibrium distribution. In the RED, a number of conformational functions, called basis functions, are applied to relate these trajectories to each other; a detailed-balance-based linear equation is then built, whose solution provides the weights of these trajectories in the equilibrium distribution. Thus, the sufficient and efficient selection of basis functions is critical to the practical application of the RED. Here, we review and present a few possible ways to construct basis functions for applying the RED in complex molecular systems. In particular, for systems with little a priori knowledge, we can use the root mean squared deviation (RMSD) among conformations to split the whole conformational space into a set of cells and then use the RMSD-based-cell functions as basis functions. We demonstrate the application of the RED in typical systems, including a two-dimensional toy model, the lattice Potts model, and a short peptide system. The results indicate that the RED, with these constructions of basis functions, not only samples complex systems more efficiently but also provides a general way to understand the metastable structure of conformational space. Project supported by the National Natural Science Foundation of China (Grant No. 11175250).
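The RMSD-based-cell construction can be sketched with a simple leader-style clustering: each conformation is assigned to the first existing cell centre within an RMSD cutoff, or opens a new cell, and the indicator function of each cell then serves as a basis function. The RMSD below omits optimal superposition, and the data and cutoff are illustrative.

```python
import numpy as np

def rmsd(a, b):
    # Plain coordinate RMSD, without optimal superposition.
    return np.sqrt(((a - b) ** 2).mean())

def cell_basis(confs, cutoff):
    """Split conformational space into RMSD cells (leader clustering)
    and return each conformation's cell index (indicator basis)."""
    centers, labels = [], []
    for x in confs:
        for j, c in enumerate(centers):
            if rmsd(x, c) < cutoff:
                labels.append(j)
                break
        else:
            centers.append(x)
            labels.append(len(centers) - 1)
    return np.array(labels), centers

# Two well-separated synthetic "metastable states" in a 6-D toy space.
rng = np.random.default_rng(1)
confs = np.vstack([rng.normal(0, 0.1, (50, 6)),
                   rng.normal(2, 0.1, (50, 6))])
labels, centers = cell_basis(confs, cutoff=1.0)
```

Each trajectory's occupancy of these cells supplies the matrix elements of the detailed-balance-based linear equation used to weight the trajectories.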

  19. An analysis of value function learning with piecewise linear control

    NASA Astrophysics Data System (ADS)

    Tutsoy, Onder; Brown, Martin

    2016-05-01

    Reinforcement learning (RL) algorithms attempt to learn optimal control actions by iteratively estimating a long-term measure of system performance, the so-called value function. For example, RL algorithms have been applied to walking robots to examine the connection between robot motion and the brain, which is known as embodied cognition. In this paper, RL algorithms are analysed using an exemplar test problem. A closed form solution for the value function is calculated and this is represented in terms of a set of basis functions and parameters, which is used to investigate parameter convergence. The value function expression is shown to have a polynomial form where the polynomial terms depend on the plant's parameters and the value function's discount factor. It is shown that the temporal difference error introduces a null space for the differenced higher order basis associated with the effects of controller switching (saturated to linear control or terminating an experiment) apart from the time of the switch. This leads to slow convergence in the relevant subspace. It is also shown that badly conditioned learning problems can occur, and this is a function of the value function discount factor and the controller switching points. Finally, a comparison is performed between the residual gradient and TD(0) learning algorithms, and it is shown that the former has a faster rate of convergence for this test problem.
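The linear-basis value-function analysis can be made concrete on a small example. For the classic five-state absorbing random walk with one-hot (tabular) basis functions, the TD(0) fixed point w = A⁻¹b coincides with the exact Bellman solution; the chain and the uniform state weighting are illustrative choices, not the paper's test problem.

```python
import numpy as np

# Five-state absorbing random walk: from each interior state move
# left/right with probability 1/2; exiting right pays reward 1.
n = 5
P = np.zeros((n, n))
r = np.zeros(n)
for s in range(n):
    if s > 0:
        P[s, s - 1] = 0.5
    if s < n - 1:
        P[s, s + 1] = 0.5
r[n - 1] = 0.5          # expected immediate reward: right exit pays 1 w.p. 1/2
gamma = 1.0

# With one-hot features Phi = I, the TD(0) fixed point
#   w = A^{-1} b,  A = Phi^T D (Phi - gamma P Phi),  b = Phi^T D r,
# reduces to the exact Bellman solution (I - gamma P)^{-1} r.
Phi = np.eye(n)
D = np.eye(n) / n       # uniform state-visitation weighting
A = Phi.T @ D @ (Phi - gamma * P @ Phi)
b = Phi.T @ D @ r
w = np.linalg.solve(A, b)   # true values are (s+1)/6 for s = 0..4
```

With a richer but imperfect basis, A acquires the conditioning issues the paper analyses; the tabular case isolates the fixed-point structure itself.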

  20. Reduced-cost linear-response CC2 method based on natural orbitals and natural auxiliary functions

    PubMed Central

    Mester, Dávid

    2017-01-01

    A reduced-cost density fitting (DF) linear-response second-order coupled-cluster (CC2) method has been developed for the evaluation of excitation energies. The method is based on the simultaneous truncation of the molecular orbital (MO) basis and the auxiliary basis set used for the DF approximation. For the reduction of the size of the MO basis, state-specific natural orbitals (NOs) are constructed for each excited state using the average of the second-order Møller–Plesset (MP2) and the corresponding configuration interaction singles with perturbative doubles [CIS(D)] density matrices. After removing the NOs of low occupation number, natural auxiliary functions (NAFs) are constructed [M. Kállay, J. Chem. Phys. 141, 244113 (2014)], and the NAF basis is also truncated. Our results show that, for a triple-zeta basis set, about 60% of the virtual MOs can be dropped, while the size of the fitting basis can be reduced by a factor of five. This results in a dramatic reduction of the computational costs of the solution of the CC2 equations, which are in our approach about as expensive as the evaluation of the MP2 and CIS(D) density matrices. All in all, an average speedup of more than an order of magnitude can be achieved at the expense of a mean absolute error of 0.02 eV in the calculated excitation energies compared to the canonical CC2 results. Our benchmark calculations demonstrate that the new approach enables the efficient computation of CC2 excitation energies for excited states of all types of medium-sized molecules composed of up to 100 atoms with triple-zeta quality basis sets. PMID:28527453

  1. General contraction of Gaussian basis sets. Part 2: Atomic natural orbitals and the calculation of atomic and molecular properties

    NASA Technical Reports Server (NTRS)

    Almloef, Jan; Taylor, Peter R.

    1989-01-01

    A recently proposed scheme for using natural orbitals from atomic configuration interaction (CI) wave functions as a basis set for linear combination of atomic orbitals (LCAO) calculations is extended for the calculation of molecular properties. For one-electron properties like multipole moments, which are determined largely by the outermost regions of the molecular wave function, it is necessary to increase the flexibility of the basis in these regions. This is most easily done by uncontracting the outermost Gaussian primitives, and/or by adding diffuse primitives. A similar approach can be employed for the calculation of polarizabilities. Properties which are not dominated by the long-range part of the wave function, such as spectroscopic constants or electric field gradients at the nucleus, can generally be treated satisfactorily with the original atomic natural orbital (ANO) sets.

  2. Comparing success levels of different neural network structures in extracting discriminative information from the response patterns of a temperature-modulated resistive gas sensor

    NASA Astrophysics Data System (ADS)

    Hosseini-Golgoo, S. M.; Bozorgi, H.; Saberkari, A.

    2015-06-01

    Performances of three neural networks, consisting of a multi-layer perceptron, a radial basis function network, and a neuro-fuzzy network with a local linear model tree training algorithm, in modeling and extracting discriminative features from the response patterns of a temperature-modulated resistive gas sensor are quantitatively compared. For response pattern recording, a voltage staircase containing five steps, each with a 20 s plateau, is applied to the micro-heater of the sensor, when 12 different target gases, each at 11 concentration levels, are present. In each test, the hidden layer neuron weights are taken as the discriminatory feature vector of the target gas. These vectors are then mapped to a 3D feature space using linear discriminant analysis. The discriminative information content of the feature vectors is determined by calculation of the Fisher’s discriminant ratio, affording quantitative comparison among the success rates achieved by the different neural network structures. The results demonstrate a superior discrimination ratio for features extracted from the local linear neuro-fuzzy and radial-basis-function networks, with recognition rates of 96.27% and 90.74%, respectively.

  3. The use of Galerkin finite-element methods to solve mass-transport equations

    USGS Publications Warehouse

    Grove, David B.

    1977-01-01

    The partial differential equation that describes the transport and reaction of chemical solutes in porous media was solved using the Galerkin finite-element technique. These finite elements were superimposed over finite-difference cells used to solve the flow equation. Both convection and flow due to hydraulic dispersion were considered. Linear and Hermite cubic approximations (basis functions) provided satisfactory results; however, the linear functions were computationally more efficient for two-dimensional problems. Successive over-relaxation (SOR) and iteration techniques using Tchebyschef polynomials were used to solve the sparse matrices generated using the linear and Hermite cubic functions, respectively. Comparisons of the finite-element methods to the finite-difference methods, and to analytical results, indicated that a high degree of accuracy may be obtained using the method outlined. The technique was applied to a field problem involving an aquifer contaminated with chloride, tritium, and strontium-90. (Woodard-USGS)
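A minimal sketch of the Galerkin approach with linear (hat) basis functions, applied here to the 1-D model problem -u'' = f with homogeneous Dirichlet conditions rather than to the solute-transport equation of the paper:

```python
import numpy as np

def fem_poisson_1d(n, f):
    """Galerkin finite elements with piecewise-linear (hat) basis functions
    for -u'' = f on [0, 1], u(0) = u(1) = 0 (illustrative sketch)."""
    x = np.linspace(0, 1, n + 1)
    h = x[1] - x[0]
    # Stiffness matrix for linear elements: tridiag(-1, 2, -1) / h.
    K = (np.diag(2 * np.ones(n - 1))
         - np.diag(np.ones(n - 2), 1)
         - np.diag(np.ones(n - 2), -1)) / h
    # Lumped load vector: h * f(x_i) at the interior nodes.
    F = h * f(x[1:-1])
    # Solve for interior values; boundary nodes are pinned to zero.
    return x, np.concatenate(([0.0], np.linalg.solve(K, F), [0.0]))

# f = 1 gives the exact solution u(x) = x(1-x)/2, which linear elements
# reproduce exactly at the nodes.
x, u = fem_poisson_1d(50, lambda x: np.ones_like(x))
```

In practice the sparse tridiagonal system would be solved with a banded or iterative solver (the SOR of the paper) rather than a dense solve.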

  4. Velocity-gauge real-time TDDFT within a numerical atomic orbital basis set

    NASA Astrophysics Data System (ADS)

    Pemmaraju, C. D.; Vila, F. D.; Kas, J. J.; Sato, S. A.; Rehr, J. J.; Yabana, K.; Prendergast, David

    2018-05-01

    The interaction of laser fields with solid-state systems can be modeled efficiently within the velocity-gauge formalism of real-time time dependent density functional theory (RT-TDDFT). In this article, we discuss the implementation of the velocity-gauge RT-TDDFT equations for electron dynamics within a linear combination of atomic orbitals (LCAO) basis set framework. Numerical results obtained from our LCAO implementation, for the electronic response of periodic systems to both weak and intense laser fields, are compared to those obtained from established real-space grid and Full-Potential Linearized Augmented Planewave approaches. Potential applications of the LCAO based scheme in the context of extreme ultra-violet and soft X-ray spectroscopies involving core-electronic excitations are discussed.

  5. Student Learning of Basis, Span and Linear Independence in Linear Algebra

    ERIC Educational Resources Information Center

    Stewart, Sepideh; Thomas, Michael O. J.

    2010-01-01

    One of the earlier, more challenging concepts in linear algebra at university is that of basis. Students are often taught procedurally how to find a basis for a subspace using matrix manipulation, but may struggle with understanding the construct of basis, making further progress harder. We believe one reason for this is because students have…

  6. An Efficient Multiscale Finite-Element Method for Frequency-Domain Seismic Wave Propagation

    DOE PAGES

    Gao, Kai; Fu, Shubin; Chung, Eric T.

    2018-02-13

    The frequency-domain seismic-wave equation, that is, the Helmholtz equation, has many important applications in seismological studies, yet is very challenging to solve, particularly for large geological models. Iterative solvers, domain decomposition, or parallel strategies can partially alleviate the computational burden, but these approaches may still encounter nontrivial difficulties in complex geological models where a sufficiently fine mesh is required to represent the fine-scale heterogeneities. We develop a novel numerical method to solve the frequency-domain acoustic wave equation on the basis of the multiscale finite-element theory. We discretize a heterogeneous model with a coarse mesh and employ carefully constructed high-order multiscale basis functions to form the basis space for the coarse mesh. Solved from medium- and frequency-dependent local problems, these multiscale basis functions can effectively capture the medium’s fine-scale heterogeneity and the source’s frequency information, leading to a discrete system matrix with a much smaller dimension compared with those from conventional methods. We then obtain an accurate solution to the acoustic Helmholtz equation by solving only a small linear system instead of a large linear system constructed on the fine mesh in conventional methods. We verify our new method using several models of complicated heterogeneities, and the results show that our new multiscale method can solve the Helmholtz equation in complex models with high accuracy and extremely low computational costs.

  7. An Efficient Multiscale Finite-Element Method for Frequency-Domain Seismic Wave Propagation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gao, Kai; Fu, Shubin; Chung, Eric T.

    The frequency-domain seismic-wave equation, that is, the Helmholtz equation, has many important applications in seismological studies, yet is very challenging to solve, particularly for large geological models. Iterative solvers, domain decomposition, or parallel strategies can partially alleviate the computational burden, but these approaches may still encounter nontrivial difficulties in complex geological models where a sufficiently fine mesh is required to represent the fine-scale heterogeneities. We develop a novel numerical method to solve the frequency-domain acoustic wave equation on the basis of the multiscale finite-element theory. We discretize a heterogeneous model with a coarse mesh and employ carefully constructed high-order multiscale basis functions to form the basis space for the coarse mesh. Solved from medium- and frequency-dependent local problems, these multiscale basis functions can effectively capture the medium’s fine-scale heterogeneity and the source’s frequency information, leading to a discrete system matrix with a much smaller dimension compared with those from conventional methods. We then obtain an accurate solution to the acoustic Helmholtz equation by solving only a small linear system instead of a large linear system constructed on the fine mesh in conventional methods. We verify our new method using several models of complicated heterogeneities, and the results show that our new multiscale method can solve the Helmholtz equation in complex models with high accuracy and extremely low computational costs.

  8. Common spatial pattern combined with kernel linear discriminate and generalized radial basis function for motor imagery-based brain computer interface applications

    NASA Astrophysics Data System (ADS)

    Hekmatmanesh, Amin; Jamaloo, Fatemeh; Wu, Huapeng; Handroos, Heikki; Kilpeläinen, Asko

    2018-04-01

    Brain-computer interfaces (BCIs) present a challenge for the development of robotic, prosthetic, and other human-controlled systems. This work focuses on the implementation of a common spatial pattern (CSP) based algorithm to detect event-related desynchronization patterns. Following well-known previous work in this area, features are extracted with the filter-bank common spatial pattern (FBCSP) method and then weighted by a sensitive learning vector quantization (SLVQ) algorithm. In the current work, applying the radial basis function (RBF) as the mapping kernel of kernel linear discriminant analysis (KLDA) to the weighted features transfers the data into a higher-dimensional space, where the RBF kernel scatters them in a more discriminable way. Afterwards, a support vector machine (SVM) with a generalized radial basis function (GRBF) kernel is employed to improve the efficiency and robustness of the classification. On average, 89.60% accuracy and 74.19% robustness are achieved. The BCI Competition III data set IVa is used to evaluate the algorithm for detecting right-hand and foot imagery movement patterns. Results show that the combination of KLDA with the SVM-GRBF classifier yields improvements of 8.9% in accuracy and 14.19% in robustness. For all subjects, it is concluded that mapping the CSP features into a higher dimension by the RBF and using the GRBF as the kernel of the SVM improve the accuracy and reliability of the proposed method.
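The CSP step underlying the pipeline can be sketched as a whitening of the composite covariance followed by an eigendecomposition of one class covariance; the two-channel synthetic data below are illustrative, not EEG from the competition data set.

```python
import numpy as np

def csp_filters(X1, X2):
    """Common spatial pattern filters from two classes of multichannel
    trials (each trial: channels x samples); illustrative sketch."""
    cov = lambda X: np.mean([x @ x.T / np.trace(x @ x.T) for x in X], axis=0)
    C1, C2 = cov(X1), cov(X2)
    # Whiten the composite covariance C1 + C2.
    d, U = np.linalg.eigh(C1 + C2)
    P = U @ np.diag(d ** -0.5) @ U.T
    # Eigenvectors of the whitened class-1 covariance give the filters.
    dd, V = np.linalg.eigh(P @ C1 @ P)
    W = V.T @ P                          # rows are spatial filters
    order = np.argsort(dd)[::-1]         # most class-1-specific first
    return W[order], C1, C2

rng = np.random.default_rng(0)
# Class 1: high variance on channel 0; class 2: high variance on channel 1.
X1 = [np.diag([2.0, 0.5]) @ rng.normal(size=(2, 200)) for _ in range(30)]
X2 = [np.diag([0.5, 2.0]) @ rng.normal(size=(2, 200)) for _ in range(30)]
W, C1, C2 = csp_filters(X1, X2)
w0 = W[0]   # filter maximizing class-1 variance relative to class 2
```

The log-variances of the filtered signals are the features that the kernel discriminant and SVM stages then classify.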

  9. Structured functional additive regression in reproducing kernel Hilbert spaces

    PubMed Central

    Zhu, Hongxiao; Yao, Fang; Zhang, Hao Helen

    2013-01-01

    Summary Functional additive models (FAMs) provide a flexible yet simple framework for regressions involving functional predictors. The utilization of data-driven basis in an additive rather than linear structure naturally extends the classical functional linear model. However, the critical issue of selecting nonlinear additive components has been less studied. In this work, we propose a new regularization framework for the structure estimation in the context of Reproducing Kernel Hilbert Spaces. The proposed approach takes advantage of the functional principal components which greatly facilitates the implementation and the theoretical analysis. The selection and estimation are achieved by penalized least squares using a penalty which encourages the sparse structure of the additive components. Theoretical properties such as the rate of convergence are investigated. The empirical performance is demonstrated through simulation studies and a real data application. PMID:25013362

  10. The physical basis for estimating wave energy spectra from SAR imagery

    NASA Technical Reports Server (NTRS)

    Lyzenga, David R.

    1987-01-01

    Ocean surface waves are imaged by synthetic aperture radar (SAR) through a combination of the effects of changes in the surface slope, surface roughness, and surface motion. Over a limited range of conditions, each of these effects can be described in terms of a linear modulation-transfer function. In such cases, the wave-height spectrum can be estimated in a straightforward manner from the SAR image-intensity spectrum. The range of conditions over which this assumption of linearity is valid is investigated using a numerical simulation model, and the implications of various departures from linearity are discussed.
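Within the range of validity described, the inversion is a simple spectral division by the squared modulation-transfer function. Everything in this sketch (spectrum shape, MTF form, wavenumber grid) is synthetic and for illustration only.

```python
import numpy as np

# Under the linear-MTF assumption the SAR image-intensity spectrum is
#   S_img(k) = |M(k)|^2 * S_wave(k),
# so the wave-height spectrum follows by spectral division.
k = np.linspace(0.01, 0.5, 100)          # wavenumber grid (rad/m)
S_wave = k ** -3 * np.exp(-0.1 / k ** 2) # toy wave-height spectrum
M = 1.0 / (1.0 + (k / 0.2) ** 2)         # toy modulation-transfer function
S_img = np.abs(M) ** 2 * S_wave          # simulated image spectrum

# Regularized division guards against wavenumbers where M vanishes,
# which is also where the linearity assumption breaks down.
S_est = S_img / np.maximum(np.abs(M) ** 2, 1e-12)
```

Outside the linear regime (e.g. strong velocity bunching), no such transfer function exists and the division above is no longer meaningful.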

  11. The adequate stimulus for mammalian linear vestibular evoked potentials (VsEPs)

    PubMed Central

    Jones, Timothy A.; Jones, Sherri M.; Vijayakumar, Sarath; Brugeaud, Aurore; Bothwell, Marcella; Chabbert, Christian

    2013-01-01

    Short latency linear vestibular sensory evoked potentials (VsEPs) provide a means to objectively and directly assess the function of gravity receptors in mammals and birds. The importance of this functional measure is illustrated by its use in studies of the genetic basis of vestibular function and disease. Head motion is the stimulus for the VsEP. In the bird, it has been established that neurons mediating the linear VsEP respond collectively to the rate of change in linear acceleration during head movement (i.e. jerk) rather than peak acceleration. The kinematic element of motion responsible for triggering mammalian VsEPs has not been characterized in detail. Here we tested the hypothesis that jerk is the kinematic component of head motion responsible for VsEP characteristics. VsEP amplitudes and latencies changed systematically when peak acceleration level was held constant and jerk level was varied from ~0.9 to 4.6 g/ms. In contrast, responses remained relatively constant when kinematic jerk was held constant and peak acceleration was varied from ~0.9 to 5.5g in mice and ~0.44 to 2.75g in rats. Thus the mammalian VsEP depends on jerk levels and not peak acceleration. We conclude that kinematic jerk is the adequate stimulus for the mammalian VsEP. This sheds light on the behavior of neurons generating the response. The results also provide the basis for standardizing the reporting of stimulus levels, which is key to ensuring that response characteristics reported in the literature by many laboratories can be effectively compared and interpreted. PMID:21664446

  12. Novel two-way artificial boundary condition for 2D vertical water wave propagation modelled with Radial-Basis-Function Collocation Method

    NASA Astrophysics Data System (ADS)

    Mueller, A.

    2018-04-01

    A new transparent artificial boundary condition is derived for two-dimensional vertical (2DV) free-surface water wave propagation modelled with the meshless Radial-Basis-Function Collocation Method (RBFCM) as a boundary-only solution. The two-way artificial boundary condition (2wABC) works as a pure incidence, a pure radiation, or a combined incidence/radiation BC. In this work the 2wABC is applied to harmonic linear water waves; its performance is tested against analytical solutions for wave propagation over a horizontal sea bottom, for standing and partially standing waves, and for the interference of waves with different periods.
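
    As a minimal illustration of RBF collocation (a 1D Poisson problem, not the paper's 2DV free-surface solver), Gaussian basis functions centred at the collocation points can be used to solve a two-point boundary value problem:

```python
import numpy as np

# RBF-collocation sketch: solve u''(x) = -pi^2 sin(pi x) on [0, 1] with
# u(0) = u(1) = 0, whose exact solution is u(x) = sin(pi x), using Gaussian
# radial basis functions centred at the collocation points.
n, eps = 15, 4.0                      # number of centres, shape parameter
x = np.linspace(0.0, 1.0, n)          # collocation points = RBF centres

def phi(x, c):
    return np.exp(-(eps * (x - c)) ** 2)

def phi_xx(x, c):                     # second derivative of phi w.r.t. x
    r = x - c
    return (4 * eps**4 * r**2 - 2 * eps**2) * np.exp(-(eps * r) ** 2)

X, C = np.meshgrid(x, x, indexing="ij")
A = phi_xx(X, C)                      # interior rows enforce the ODE
A[0, :] = phi(x[0], x)                # first/last rows enforce the BCs
A[-1, :] = phi(x[-1], x)
rhs = -np.pi**2 * np.sin(np.pi * x)
rhs[0] = rhs[-1] = 0.0

w = np.linalg.solve(A, rhs)           # RBF expansion coefficients
u = phi(X, C) @ w                     # solution at the collocation points
err = np.max(np.abs(u - np.sin(np.pi * x)))
print(err)
```

    The shape parameter eps is tuned by hand here; in practice it trades accuracy against conditioning of the collocation matrix.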

  13. Matrix form of Legendre polynomials for solving linear integro-differential equations of high order

    NASA Astrophysics Data System (ADS)

    Kammuji, M.; Eshkuvatov, Z. K.; Yunus, Arif A. M.

    2017-04-01

    This paper presents an effective approximate solution of high-order Fredholm-Volterra integro-differential equations (FVIDEs) with boundary conditions. A truncated Legendre series is used as the basis to approximate the unknown function. Matrix operations on Legendre polynomials transform the FVIDEs with boundary conditions into a matrix equation of Fredholm-Volterra type. The Gauss-Legendre quadrature formula and the collocation method are then applied to convert the matrix equation into a system of linear algebraic equations, which is solved by the Gauss elimination method. The accuracy and validity of this method are demonstrated by solving two numerical examples and comparing with wavelet and other methods.
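
    The ingredients of this scheme (Legendre basis, Gauss-Legendre quadrature, collocation, Gauss elimination) can be sketched on a simpler Fredholm integral equation of the second kind; the kernel and right-hand side below are chosen so that the exact solution is u(x) = x:

```python
import numpy as np

# Legendre collocation for the Fredholm integral equation of the second kind
#     u(x) - int_{-1}^{1} (x t / 2) u(t) dt = 2x/3,
# whose exact solution is u(x) = x. The integral is evaluated by
# Gauss-Legendre quadrature and the equation is collocated at the nodes.
N = 8                                           # truncation order
nodes, weights = np.polynomial.legendre.leggauss(N)

def legendre_basis(j, x):
    """Evaluate the Legendre polynomial P_j at x."""
    c = np.zeros(j + 1)
    c[j] = 1.0
    return np.polynomial.legendre.legval(x, c)

# A[i, j] = P_j(x_i) - sum_q w_q K(x_i, t_q) P_j(t_q), with K(x, t) = x t / 2
A = np.empty((N, N))
for j in range(N):
    Pj = legendre_basis(j, nodes)
    A[:, j] = Pj - (nodes / 2.0) * np.sum(weights * nodes * Pj)
rhs = 2.0 * nodes / 3.0

coef = np.linalg.solve(A, rhs)                  # Legendre coefficients of u
xs = np.linspace(-1.0, 1.0, 201)
u = np.polynomial.legendre.legval(xs, coef)
err = np.max(np.abs(u - xs))                    # compare with exact u(x) = x
print(err)
```

    Because the exact solution lies in the span of the basis and the quadrature is exact for the integrands involved, the recovered coefficients are exact up to rounding.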

  14. Approaching the theoretical limit in periodic local MP2 calculations with atomic-orbital basis sets: the case of LiH.

    PubMed

    Usvyat, Denis; Civalleri, Bartolomeo; Maschio, Lorenzo; Dovesi, Roberto; Pisani, Cesare; Schütz, Martin

    2011-06-07

    The atomic orbital basis set limit is approached in periodic correlated calculations for solid LiH. The valence correlation energy is evaluated at the level of the local periodic second order Møller-Plesset perturbation theory (MP2), using basis sets of progressively increasing size, and also employing "bond"-centered basis functions in addition to the standard atom-centered ones. Extended basis sets, which contain linear dependencies, are processed only at the MP2 stage via a dual basis set scheme. The local approximation (domain) error has been consistently eliminated by expanding the orbital excitation domains. As a final result, it is demonstrated that the complete basis set limit can be reached for both HF and local MP2 periodic calculations, and a general scheme is outlined for the definition of high-quality atomic-orbital basis sets for solids. © 2011 American Institute of Physics

  15. MR-guided dynamic PET reconstruction with the kernel method and spectral temporal basis functions

    NASA Astrophysics Data System (ADS)

    Novosad, Philip; Reader, Andrew J.

    2016-06-01

    Recent advances in dynamic positron emission tomography (PET) reconstruction have demonstrated that it is possible to achieve markedly improved end-point kinetic parameter maps by incorporating a temporal model of the radiotracer directly into the reconstruction algorithm. In this work we have developed a highly constrained, fully dynamic PET reconstruction algorithm incorporating both spectral analysis temporal basis functions and spatial basis functions derived from the kernel method applied to a co-registered T1-weighted magnetic resonance (MR) image. The dynamic PET image is modelled as a linear combination of spatial and temporal basis functions, and a maximum likelihood estimate for the coefficients can be found using the expectation-maximization (EM) algorithm. Following reconstruction, kinetic fitting using any temporal model of interest can be applied. Based on a BrainWeb T1-weighted MR phantom, we performed a realistic dynamic [18F]FDG simulation study with two noise levels, and investigated the quantitative performance of the proposed reconstruction algorithm, comparing it with reconstructions incorporating either spectral analysis temporal basis functions alone or kernel spatial basis functions alone, as well as with conventional frame-independent reconstruction. Compared to the other reconstruction algorithms, the proposed algorithm achieved superior performance, offering a decrease in spatially averaged pixel-level root-mean-square-error on post-reconstruction kinetic parametric maps in the grey/white matter, as well as in the tumours when they were present on the co-registered MR image. When the tumours were not visible in the MR image, reconstruction with the proposed algorithm performed similarly to reconstruction with spectral temporal basis functions and was superior to both conventional frame-independent reconstruction and frame-independent reconstruction with kernel spatial basis functions. 
Furthermore, we demonstrate that a joint spectral/kernel model can also be used for effective post-reconstruction denoising, through the use of an EM-like image-space algorithm. Finally, we applied the proposed algorithm to reconstruction of real high-resolution dynamic [11C]SCH23390 data, showing promising results.
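
    The linear-in-coefficients structure that the reconstruction exploits can be illustrated outside the PET context: a time-activity curve modelled as a linear combination of exponential temporal basis functions, with coefficients recovered by linear least squares. This is a toy stand-in for the spectral-analysis basis, with hypothetical decay rates:

```python
import numpy as np

# Toy illustration of spectral temporal basis functions (not the authors'
# full EM reconstruction): a noiseless time-activity curve is modelled as a
# linear combination of decaying exponentials, and the coefficients are
# recovered by linear least squares.
t = np.linspace(0.0, 60.0, 120)             # minutes
thetas = np.array([0.05, 0.2, 1.0])         # hypothetical decay rates, 1/min
B = np.exp(-np.outer(t, thetas))            # temporal basis, (nt, nbasis)

coef_true = np.array([1.0, 0.5, 2.0])
tac = B @ coef_true                         # synthetic time-activity curve

coef_est, *_ = np.linalg.lstsq(B, tac, rcond=None)
print(np.allclose(coef_est, coef_true))
```

    In the paper this linear model sits inside the reconstruction itself, with the coefficients estimated by EM from the raw PET data rather than by least squares from a clean curve.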

  17. Evaluating and interpreting the chemical relevance of the linear response kernel for atoms II: open shell.

    PubMed

    Boisdenghien, Zino; Fias, Stijn; Van Alsenoy, Christian; De Proft, Frank; Geerlings, Paul

    2014-07-28

    Most of the work done on the linear response kernel χ(r,r') has focussed on its atom-atom condensed form χAB. Our previous work [Boisdenghien et al., J. Chem. Theory Comput., 2013, 9, 1007] was the first effort to truly focus on the non-condensed form of this function for closed-(sub)shell atoms in a systematic fashion. In this work, we extend our method to the open shell case. To simplify plotting, we average our results to a symmetrical quantity χ(r,r'). This allows us to plot the linear response kernel for all elements up to and including argon and to investigate the periodicity throughout the first three rows of the periodic table and in the different representations of χ(r,r'). Within the context of Spin Polarized Conceptual Density Functional Theory, the first two-dimensional plots of spin polarized linear response functions are presented and commented on for selected cases on the basis of the atomic ground state electronic configurations. Using the relation between the linear response kernel and the polarizability, we compare the values of the polarizability tensor calculated using our method to high-level values.

  18. Adaptive local basis set for Kohn–Sham density functional theory in a discontinuous Galerkin framework II: Force, vibration, and molecular dynamics calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Gaigong; Lin, Lin, E-mail: linlin@math.berkeley.edu; Computational Research Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720

    Recently, we have proposed the adaptive local basis set for electronic structure calculations based on Kohn–Sham density functional theory in a pseudopotential framework. The adaptive local basis set is efficient and systematically improvable for total energy calculations. In this paper, we present the calculation of atomic forces, which can be used for a range of applications such as geometry optimization and molecular dynamics simulation. We demonstrate that, under mild assumptions, the computation of atomic forces can scale nearly linearly with the number of atoms in the system using the adaptive local basis set. We quantify the accuracy of the Hellmann–Feynman forces for a range of physical systems, benchmarked against converged planewave calculations, and find that the adaptive local basis set is efficient for both force and energy calculations, requiring at most a few tens of basis functions per atom to attain accuracies required in practice. Since the adaptive local basis set has implicit dependence on atomic positions, Pulay forces are in general nonzero. However, we find that the Pulay force is numerically small and systematically decreasing with increasing basis completeness, so that the Hellmann–Feynman force is sufficient for basis sizes of a few tens of basis functions per atom. We verify the accuracy of the computed forces in static calculations of quasi-1D and 3D disordered Si systems, vibration calculation of a quasi-1D Si system, and molecular dynamics calculations of H2 and liquid Al–Si alloy systems, where we show systematic convergence to benchmark planewave results and results from the literature.

  2. Generalised Transfer Functions of Neural Networks

    NASA Astrophysics Data System (ADS)

    Fung, C. F.; Billings, S. A.; Zhang, H.

    1997-11-01

    When artificial neural networks are used to model non-linear dynamical systems, the system structure, which can be extremely useful for analysis and design, is buried within the network architecture. In this paper, explicit expressions for the frequency response or generalised transfer functions of both feedforward and recurrent neural networks are derived in terms of the network weights. The derivation of the algorithm is established on the basis of the Taylor series expansion of the activation functions used in a particular neural network. This leads to a representation which is equivalent to the non-linear recursive polynomial model and enables the derivation of the transfer functions to be based on the harmonic expansion method. By mapping the neural network into the frequency domain, information about the structure of the underlying non-linear system can be recovered. Numerical examples are included to demonstrate the application of the new algorithm. These examples show that the frequency response functions appear to be highly sensitive to the network topology and training, and that the time domain properties fail to reveal deficiencies in the trained network structure.
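
    The first-order (linear) term of such a generalised transfer function is easy to write down for a small recurrent network: linearising tanh about the origin (tanh'(0) = 1) turns the state recursion into a linear system whose frequency response follows directly from the weights. The weights below are arbitrary illustrative values, not a trained network:

```python
import numpy as np

# For a recurrent network h_t = tanh(W h_{t-1} + b x_t), y_t = c . h_t,
# linearisation about the origin (tanh'(0) = 1) gives the frequency response
#     H(e^{iw}) = e^{iw} * c (e^{iw} I - W)^{-1} b,
# i.e. the first-order term of the generalised transfer function.
W = np.array([[0.30, -0.20,  0.10],
              [0.00,  0.25, -0.10],
              [0.10,  0.00,  0.20]])   # recurrent weights (kept stable)
b = np.array([1.0, 0.5, -0.3])         # input weights
c = np.array([0.8, -0.4, 0.6])         # output weights
n = 3

def H(w):
    z = np.exp(1j * w)
    return z * (c @ np.linalg.solve(z * np.eye(n) - W, b))

# Cross-check against the full tanh network driven by a small sinusoid
# (small amplitude keeps the network in its linear regime):
w0, amp, T = 0.7, 1e-3, 4000
x = amp * np.cos(w0 * np.arange(T))
h, y = np.zeros(n), np.empty(T)
for t in range(T):
    h = np.tanh(W @ h + b * x[t])
    y[t] = c @ h
tt = np.arange(T // 2, T)              # steady-state window
gain = 2.0 * np.abs(np.mean(y[tt] * np.exp(-1j * w0 * tt))) / amp
print(abs(H(w0)), gain)                # the two gains should agree closely
```

    Higher-order terms of the generalised transfer function capture harmonic generation, which this first-order sketch deliberately ignores.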

  3. Unsteady Solution of Non-Linear Differential Equations Using Walsh Function Series

    NASA Technical Reports Server (NTRS)

    Gnoffo, Peter A.

    2015-01-01

    Walsh functions form an orthonormal basis set consisting of square waves. The discontinuous nature of square waves makes the system well suited for representing functions with discontinuities. The product of any two Walsh functions is another Walsh function - a feature that can radically change an algorithm for solving non-linear partial differential equations (PDEs). The solution algorithm of non-linear differential equations using Walsh function series is unique in that integrals and derivatives may be computed using simple matrix multiplication of series representations of functions. Solutions to PDEs are derived as functions of wave component amplitude. Three sample problems are presented to illustrate the Walsh function series approach to solving unsteady PDEs: an advection equation, a Burgers equation, and a Riemann problem. The sample problems demonstrate the use of the Walsh function solution algorithms, exploiting Fast Walsh Transforms in multi-dimensions (O(N log N)). Details of a Fast Walsh Reciprocal, defined here for the first time, enable inversion of a Walsh Symmetric Matrix in O(N log N) operations. Walsh functions have been derived using a fractal recursion algorithm, and these fractal patterns are observed in the progression of pairs of wave number amplitudes in the solutions. These patterns are most easily observed in a remapping defined as a fractal fingerprint (FFP). A prolongation of existing solutions to the next highest order exploits these patterns. The algorithms presented here are considered a work in progress that provides new alternatives and new insights into the solution of non-linear PDEs.
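
    The closure property (the product of two Walsh functions is another Walsh function) is easy to verify with the Sylvester/Hadamard construction, where the elementwise product of rows i and j equals row i XOR j:

```python
import numpy as np

# Walsh functions in Hadamard (natural) ordering, built by the Sylvester
# recursion. Row i sampled at point k equals (-1)^popcount(i & k), so the
# elementwise product of rows i and j is row (i XOR j).
def hadamard(m):
    """Return the 2^m x 2^m Sylvester Hadamard matrix of +1/-1 square waves."""
    H = np.array([[1]])
    for _ in range(m):
        H = np.block([[H, H], [H, -H]])
    return H

H = hadamard(4)                # 16 Walsh functions sampled on 16 points
i, j = 5, 12
print(np.array_equal(H[i] * H[j], H[i ^ j]))   # closure under multiplication
print(np.array_equal(H @ H.T, 16 * np.eye(16)))  # orthogonality
```

    This closure is what lets products of series representations stay inside the Walsh basis, the feature the record highlights for non-linear PDE algorithms.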

  4. A systematic way for the cost reduction of density fitting methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kállay, Mihály, E-mail: kallay@mail.bme.hu

    2014-12-28

    We present a simple approach for the reduction of the size of auxiliary basis sets used in methods exploiting the density fitting (resolution of identity) approximation for electron repulsion integrals. Starting from the singular value decomposition of three-center two-electron integrals, new auxiliary functions are constructed as linear combinations of the original fitting functions. The new functions, which we term natural auxiliary functions (NAFs), are analogous to the natural orbitals widely used for the cost reduction of correlation methods. The use of the NAF basis enables the systematic truncation of the fitting basis, and thereby potentially the reduction of the computational expenses of the methods, though the scaling with the system size is not altered. The performance of the new approach has been tested for several quantum chemical methods. It is demonstrated that the most pronounced gain in computational efficiency can be expected for iterative models which scale quadratically with the size of the fitting basis set, such as the direct random phase approximation. The approach also has the promise of accelerating local correlation methods, for which the processing of three-center Coulomb integrals is a bottleneck.
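
    The construction can be sketched in a few lines: treating the three-center integrals as a matrix J (auxiliary functions by orbital pairs), the leading left singular vectors of J define a compressed fitting basis, with a truncation error controlled by the first discarded singular value. Random data with a decaying spectrum stands in for real integrals here:

```python
import numpy as np

# Schematic of the natural-auxiliary-function (NAF) idea: SVD of the
# three-center integral matrix, then truncation to the leading left singular
# vectors. Synthetic data with decaying singular values plays the role of
# the real integrals.
rng = np.random.default_rng(1)
naux, npair = 120, 400
U = np.linalg.qr(rng.standard_normal((naux, naux)))[0]
V = np.linalg.qr(rng.standard_normal((npair, naux)))[0]
s = 0.5 ** np.arange(naux)                   # rapidly decaying spectrum
J = (U * s) @ V.T                            # stand-in integral matrix

u, sv, vt = np.linalg.svd(J, full_matrices=False)
k = 15                                       # number of retained NAFs
J_naf = (u[:, :k] * sv[:k]) @ vt[:k]         # J represented in the NAF basis
err = np.linalg.norm(J - J_naf, 2)           # equals first dropped value
print(err, sv[k])
```

    By the Eckart-Young theorem the 2-norm truncation error is exactly the first discarded singular value, which is what makes the truncation systematic.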

  5. Structured penalties for functional linear models-partially empirical eigenvectors for regression.

    PubMed

    Randolph, Timothy W; Harezlak, Jaroslaw; Feng, Ziding

    2012-01-01

    One of the challenges with functional data is incorporating geometric structure, or local correlation, into the analysis. This structure is inherent in the output from an increasing number of biomedical technologies, and a functional linear model is often used to estimate the relationship between the predictor functions and scalar responses. Common approaches to the problem of estimating a coefficient function typically involve two stages: regularization and estimation. Regularization is usually done via dimension reduction, projecting onto a predefined span of basis functions or a reduced set of eigenvectors (principal components). In contrast, we present a unified approach that directly incorporates geometric structure into the estimation process by exploiting the joint eigenproperties of the predictors and a linear penalty operator. In this sense, the components in the regression are 'partially empirical' and the framework is provided by the generalized singular value decomposition (GSVD). The form of the penalized estimation is not new, but the GSVD clarifies the process and informs the choice of penalty by making explicit the joint influence of the penalty and predictors on the bias, variance and performance of the estimated coefficient function. Laboratory spectroscopy data and simulations are used to illustrate the concepts.

  6. Velocity-gauge real-time TDDFT within a numerical atomic orbital basis set

    DOE PAGES

    Pemmaraju, C. D.; Vila, F. D.; Kas, J. J.; ...

    2018-02-07

    The interaction of laser fields with solid-state systems can be modeled efficiently within the velocity-gauge formalism of real-time time-dependent density functional theory (RT-TDDFT). In this article, we discuss the implementation of the velocity-gauge RT-TDDFT equations for electron dynamics within a linear combination of atomic orbitals (LCAO) basis set framework. Numerical results obtained from our LCAO implementation, for the electronic response of periodic systems to both weak and intense laser fields, are compared to those obtained from established real-space grid and Full-Potential Linearized Augmented Planewave approaches. Finally, potential applications of the LCAO based scheme in the context of extreme ultraviolet and soft X-ray spectroscopies involving core-electronic excitations are discussed.

  8. Hybrid density-functional calculations of phonons in LaCoO3

    NASA Astrophysics Data System (ADS)

    Gryaznov, Denis; Evarestov, Robert A.; Maier, Joachim

    2010-12-01

    Phonon frequencies at the Γ point in the nonmagnetic rhombohedral phase of LaCoO3 were calculated using density-functional theory with the hybrid exchange-correlation functional PBE0. The calculations compared two types of basis functions commonly used in ab initio calculations, namely the plane-wave approach and the linear combination of atomic orbitals, as implemented in the VASP and CRYSTAL computer codes, respectively. Good qualitative agreement, and quantitative agreement within an error margin of less than 30%, was observed not only between the two formalisms but also between theoretical predictions and experimental phonon frequencies. Moreover, the correlation between the phonon symmetries in the cubic and rhombohedral phases is discussed in detail on the basis of group-theoretical analysis. It is concluded that the hybrid PBE0 functional correctly predicts the phonon properties of LaCoO3.

  9. Wavelet based free-form deformations for nonrigid registration

    NASA Astrophysics Data System (ADS)

    Sun, Wei; Niessen, Wiro J.; Klein, Stefan

    2014-03-01

    In nonrigid registration, deformations may take place on the coarse and fine scales. For the conventional B-splines based free-form deformation (FFD) registration, these coarse- and fine-scale deformations are all represented by basis functions of a single scale. Meanwhile, wavelets have been proposed as a signal representation suitable for multi-scale problems. Wavelet analysis leads to a unique decomposition of a signal into its coarse- and fine-scale components. Potentially, this could therefore be useful for image registration. In this work, we investigate whether a wavelet-based FFD model has advantages for nonrigid image registration. We use a B-splines based wavelet, as defined by Cai and Wang.1 This wavelet is expressed as a linear combination of B-spline basis functions. Derived from the original B-spline function, this wavelet is smooth, differentiable, and compactly supported. The basis functions of this wavelet are orthogonal across scales in Sobolev space. This wavelet was previously used for registration in computer vision, in 2D optical flow problems,2 but it was not compared with the conventional B-spline FFD in medical image registration problems. An advantage of choosing this B-splines based wavelet model is that the space of allowable deformation is exactly equivalent to that of the traditional B-spline. The wavelet transformation is essentially a (linear) reparameterization of the B-spline transformation model. Experiments on 10 CT lung and 18 T1-weighted MRI brain datasets show that wavelet based registration leads to smoother deformation fields than traditional B-splines based registration, while achieving better accuracy.

  10. Collective Human Mobility Pattern from Taxi Trips in Urban Area

    PubMed Central

    Peng, Chengbin; Jin, Xiaogang; Wong, Ka-Chun; Shi, Meixia; Liò, Pietro

    2012-01-01

    We analyze the passenger traffic patterns of 1.58 million taxi trips in Shanghai, China. By employing non-negative matrix factorization and optimization methods, we find that people travel on workdays mainly for three purposes: commuting between home and workplace, traveling from workplace to workplace, and others such as leisure activities. Traffic flow in one area or between any pair of locations can therefore be approximated by a linear combination of three basis flows, corresponding to these three purposes. We name the coefficients in the linear combination traffic powers, each of which indicates the strength of its basis flow. The traffic powers on different days are typically different, even for the same location, due to the uncertainty of human motion. We therefore provide a probability distribution function for the relative deviation of the traffic power, expressed in terms of a series of normalized binomial distribution functions. It can be well explained by statistical theories and is verified by empirical data. These findings are applicable to predicting road traffic, tracing traffic patterns and diagnosing traffic-related abnormal events. These results can also be used to infer land uses of urban areas quite parsimoniously. PMID:22529917
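
    The decomposition step can be imitated on synthetic data with a few lines of multiplicative-update NMF (Lee-Seung updates); a random low-rank nonnegative matrix stands in for the taxi-trip counts:

```python
import numpy as np

# Toy version of the decomposition described above: a nonnegative
# location x day traffic matrix V is factored into three basis flows (W)
# and their day-specific "traffic powers" (H) by Lee-Seung multiplicative
# updates. Synthetic data replaces the real trip counts.
rng = np.random.default_rng(3)
n_loc, n_day, k = 30, 14, 3
V = rng.random((n_loc, k)) @ rng.random((k, n_day))   # synthetic volumes

W = rng.random((n_loc, k)) + 0.1          # nonnegative initial factors
H = rng.random((k, n_day)) + 0.1
for _ in range(1000):                     # multiplicative updates
    H *= (W.T @ V) / (W.T @ W @ H + 1e-12)
    W *= (V @ H.T) / (W @ H @ H.T + 1e-12)

rel_err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
print(rel_err)
```

    Multiplicative updates preserve nonnegativity of both factors, which is what makes the recovered basis flows interpretable as traffic volumes.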

  11. Simultaneous determination of penicillin G salts by infrared spectroscopy: Evaluation of combining orthogonal signal correction with radial basis function-partial least squares regression

    NASA Astrophysics Data System (ADS)

    Talebpour, Zahra; Tavallaie, Roya; Ahmadi, Seyyed Hamid; Abdollahpour, Assem

    2010-09-01

    In this study, a new method for the simultaneous determination of penicillin G salts in a pharmaceutical mixture via FT-IR spectroscopy combined with chemometrics was investigated. The mixture of penicillin G salts is a complex system due to the similar analytical characteristics of its components. Partial least squares (PLS) and radial basis function-partial least squares (RBF-PLS) were used to develop the linear and nonlinear relations between spectra and components, respectively. The orthogonal signal correction (OSC) preprocessing method was used to remove unexpected information, such as spectral overlapping and scattering effects. In order to compare the influence of OSC on the PLS and RBF-PLS models, the optimal linear (PLS) and nonlinear (RBF-PLS) models based on conventional and OSC-preprocessed spectra were established and compared. The obtained results demonstrated that OSC clearly enhanced the performance of both the RBF-PLS and PLS calibration models. Also, in the case of a nonlinear relation between spectra and components, the OSC-RBF-PLS model gave more satisfactory results than the OSC-PLS model, which indicated that OSC helped remove extrinsic deviations from linearity without eliminating nonlinear information related to the components. The chemometric models were tested on an external dataset and finally applied to the analysis of a commercialized injection product of penicillin G salts.

  12. Progress in calculating the potential energy surface of H3+.

    PubMed

    Adamowicz, Ludwik; Pavanello, Michele

    2012-11-13

    The most accurate electronic structure calculations are performed using wave function expansions in terms of basis functions explicitly dependent on the inter-electron distances. In our recent work, we use such basis functions to calculate a highly accurate potential energy surface (PES) for the H3+ ion. The functions are explicitly correlated Gaussians, which include inter-electron distances in the exponent. Key to obtaining the high accuracy in the calculations has been the use of the analytical energy gradient determined with respect to the Gaussian exponential parameters in the minimization of the Rayleigh-Ritz variational energy functional. The effective elimination of linear dependences between the basis functions and the automatic adjustment of the positions of the Gaussian centres to the changing molecular geometry of the system are the keys to the success of the computational procedure. After adiabatic and relativistic corrections are added to the PES and with an effective accounting of the non-adiabatic effects in the calculation of the rotational/vibrational states, the experimental H3+ rovibrational spectrum is reproduced at the 0.1 cm^-1 accuracy level up to 16,600 cm^-1 above the ground state.

  13. Reduced Order Methods for Prediction of Thermal-Acoustic Fatigue

    NASA Technical Reports Server (NTRS)

    Przekop, A.; Rizzi, S. A.

    2004-01-01

    The goal of this investigation is to assess the quality of high-cycle-fatigue life estimation via a reduced order method, for structures undergoing random nonlinear vibrations in the presence of thermal loading. Modal reduction is performed with several different suites of basis functions. After numerically solving the reduced order system equations of motion, the physical displacement time history is obtained by an inverse transformation and stresses are recovered. Stress ranges obtained through the rainflow counting procedure are used in a linear damage accumulation method to yield fatigue estimates. Fatigue life estimates obtained using various basis functions in the reduced order method are compared with those obtained from numerical simulation in physical degrees-of-freedom.

  14. A Nonlinear Reduced Order Method for Prediction of Acoustic Fatigue

    NASA Technical Reports Server (NTRS)

    Przekop, Adam; Rizzi, Stephen A.

    2006-01-01

    The goal of this investigation is to assess the quality of high-cycle-fatigue life estimation via a reduced order method, for structures undergoing geometrically nonlinear random vibrations. Modal reduction is performed with several different suites of basis functions. After numerically solving the reduced order system equations of motion, the physical displacement time history is obtained by an inverse transformation and stresses are recovered. Stress ranges obtained through the rainflow counting procedure are used in a linear damage accumulation method to yield fatigue estimates. Fatigue life estimates obtained using various basis functions in the reduced order method are compared with those obtained from numerical simulation in physical degrees-of-freedom.
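
    The linear damage accumulation step described above (the Palmgren-Miner rule applied to rainflow-counted stress ranges) reduces to a short computation. The S-N curve N(S) = C/S**m and its constants below are illustrative assumptions, not values from the paper.

```python
# Palmgren-Miner linear damage accumulation for a set of stress ranges, with
# an assumed S-N curve N(S) = C / S**m. C and m are illustrative constants,
# not values from the paper; the ranges would come from rainflow counting.
def miner_damage(stress_ranges, C=1e12, m=3.0):
    """Damage sum n_i / N_i for unit cycle counts; failure predicted at D >= 1."""
    return sum(1.0 / (C / S**m) for S in stress_ranges)

ranges = [200.0, 350.0, 500.0]     # MPa, one counted cycle each
D = miner_damage(ranges)
blocks_to_failure = 1.0 / D        # repetitions of this loading block
print(D, round(blocks_to_failure, 1))
```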

  15. Sparse maps—A systematic infrastructure for reduced-scaling electronic structure methods. I. An efficient and simple linear scaling local MP2 method that uses an intermediate basis of pair natural orbitals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pinski, Peter; Riplinger, Christoph; Neese, Frank, E-mail: evaleev@vt.edu, E-mail: frank.neese@cec.mpg.de

    2015-07-21

    In this work, a systematic infrastructure is described that formalizes concepts implicit in previous work and greatly simplifies computer implementation of reduced-scaling electronic structure methods. The key concept is sparse representation of tensors using chains of sparse maps between two index sets. Sparse map representation can be viewed as a generalization of compressed sparse row, a common representation of a sparse matrix, to tensor data. By combining a few elementary operations on sparse maps (inversion, chaining, intersection, etc.), complex algorithms can be developed, illustrated here by a linear-scaling transformation of three-center Coulomb integrals based on our compact code library that implements sparse maps and operations on them. The sparsity of the three-center integrals arises from the spatial locality of the basis functions and the domain density fitting approximation. A novel feature of our approach is the use of differential overlap integrals computed in linear-scaling fashion for screening products of basis functions. Finally, a robust linear scaling domain based local pair natural orbital second-order Møller-Plesset (DLPNO-MP2) method is described based on the sparse map infrastructure that only depends on a minimal number of cutoff parameters that can be systematically tightened to approach 100% of the canonical MP2 correlation energy. With default truncation thresholds, DLPNO-MP2 recovers more than 99.9% of the canonical resolution of the identity MP2 (RI-MP2) energy while still showing a very early crossover with respect to the computational effort. Based on extensive benchmark calculations, relative energies are reproduced with an error of typically <0.2 kcal/mol. The efficiency of the local MP2 (LMP2) method can be drastically improved by carrying out the LMP2 iterations in a basis of pair natural orbitals. While the present work focuses on local electron correlation, it is of much broader applicability to computation with sparse tensors in quantum chemistry and beyond.

  16. Sparse maps—A systematic infrastructure for reduced-scaling electronic structure methods. I. An efficient and simple linear scaling local MP2 method that uses an intermediate basis of pair natural orbitals.

    PubMed

    Pinski, Peter; Riplinger, Christoph; Valeev, Edward F; Neese, Frank

    2015-07-21

    In this work, a systematic infrastructure is described that formalizes concepts implicit in previous work and greatly simplifies computer implementation of reduced-scaling electronic structure methods. The key concept is sparse representation of tensors using chains of sparse maps between two index sets. Sparse map representation can be viewed as a generalization of compressed sparse row, a common representation of a sparse matrix, to tensor data. By combining a few elementary operations on sparse maps (inversion, chaining, intersection, etc.), complex algorithms can be developed, illustrated here by a linear-scaling transformation of three-center Coulomb integrals based on our compact code library that implements sparse maps and operations on them. The sparsity of the three-center integrals arises from the spatial locality of the basis functions and the domain density fitting approximation. A novel feature of our approach is the use of differential overlap integrals computed in linear-scaling fashion for screening products of basis functions. Finally, a robust linear scaling domain based local pair natural orbital second-order Møller-Plesset (DLPNO-MP2) method is described based on the sparse map infrastructure that only depends on a minimal number of cutoff parameters that can be systematically tightened to approach 100% of the canonical MP2 correlation energy. With default truncation thresholds, DLPNO-MP2 recovers more than 99.9% of the canonical resolution of the identity MP2 (RI-MP2) energy while still showing a very early crossover with respect to the computational effort. Based on extensive benchmark calculations, relative energies are reproduced with an error of typically <0.2 kcal/mol. The efficiency of the local MP2 (LMP2) method can be drastically improved by carrying out the LMP2 iterations in a basis of pair natural orbitals. While the present work focuses on local electron correlation, it is of much broader applicability to computation with sparse tensors in quantum chemistry and beyond.
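
    For readers unfamiliar with compressed sparse row (CSR), which the sparse-map representation generalizes to tensor data, a minimal pure-Python illustration of the format:

```python
# Minimal compressed sparse row (CSR) encoding of a dense matrix: nonzero
# values, their column indices, and row pointers delimiting each row's slice.
def to_csr(dense):
    """Return (values, col_indices, row_ptr) for a dense 2D list."""
    values, cols, row_ptr = [], [], [0]
    for row in dense:
        for j, v in enumerate(row):
            if v != 0:
                values.append(v)
                cols.append(j)
        row_ptr.append(len(values))
    return values, cols, row_ptr

A = [[5, 0, 0],
     [0, 0, 3],
     [2, 0, 1]]
print(to_csr(A))   # ([5, 3, 2, 1], [0, 2, 0, 2], [0, 1, 2, 4])
```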

  17. Macrocell path loss prediction using artificial intelligence techniques

    NASA Astrophysics Data System (ADS)

    Usman, Abraham U.; Okereke, Okpo U.; Omizegba, Elijah E.

    2014-04-01

    The prediction of propagation loss is a practical non-linear function approximation problem which linear regression or auto-regression models are limited in their ability to handle. However, computational intelligence techniques such as artificial neural networks (ANNs) and adaptive neuro-fuzzy inference systems (ANFISs) have been shown to have a great ability to handle non-linear function approximation and prediction problems. In this study, a multilayer perceptron neural network (MLP-NN), a radial basis function neural network (RBF-NN) and an ANFIS network were trained using actual signal strength measurements taken in suburban areas of the Bauchi metropolis, Nigeria. The trained networks were then used to predict propagation losses in the stated areas under differing conditions, and the prediction accuracy was compared with that of the popular Hata model. It was observed that the ANFIS model gave a better fit in all cases, with higher R2 values, and on average was more robust than the MLP and RBF models, as it generalises better to unseen data.
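
    The Hata model used as the comparison baseline is a closed-form empirical formula. Below is a sketch of the standard Okumura-Hata median path loss for small/medium cities (carrier frequency f in MHz, antenna heights in metres, distance d in km); the example inputs are arbitrary, not the study's measurement conditions.

```python
import math

# Standard Okumura-Hata median path loss (small/medium city): f in MHz,
# base-station height h_b and mobile height h_m in metres, distance d in km.
# The example inputs below are arbitrary.
def hata_loss(f, h_b, h_m, d):
    a_hm = (1.1 * math.log10(f) - 0.7) * h_m - (1.56 * math.log10(f) - 0.8)
    return (69.55 + 26.16 * math.log10(f) - 13.82 * math.log10(h_b)
            - a_hm + (44.9 - 6.55 * math.log10(h_b)) * math.log10(d))

print(round(hata_loss(900.0, 30.0, 1.5, 5.0), 2))   # path loss in dB
```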

  18. Comparison Between Linear and Non-parametric Regression Models for Genome-Enabled Prediction in Wheat

    PubMed Central

    Pérez-Rodríguez, Paulino; Gianola, Daniel; González-Camacho, Juan Manuel; Crossa, José; Manès, Yann; Dreisigacker, Susanne

    2012-01-01

    In genome-enabled prediction, parametric, semi-parametric, and non-parametric regression models have been used. This study assessed the predictive ability of linear and non-linear models using dense molecular markers. The linear models were linear on marker effects and included the Bayesian LASSO, Bayesian ridge regression, Bayes A, and Bayes B. The non-linear models (this refers to non-linearity on markers) were reproducing kernel Hilbert space (RKHS) regression, Bayesian regularized neural networks (BRNN), and radial basis function neural networks (RBFNN). These statistical models were compared using 306 elite wheat lines from CIMMYT genotyped with 1717 diversity array technology (DArT) markers and two traits, days to heading (DTH) and grain yield (GY), measured in each of 12 environments. It was found that the three non-linear models had better overall prediction accuracy than the linear regression specification. Results showed a consistent superiority of RKHS and RBFNN over the Bayesian LASSO, Bayesian ridge regression, Bayes A, and Bayes B models. PMID:23275882

  19. Comparison between linear and non-parametric regression models for genome-enabled prediction in wheat.

    PubMed

    Pérez-Rodríguez, Paulino; Gianola, Daniel; González-Camacho, Juan Manuel; Crossa, José; Manès, Yann; Dreisigacker, Susanne

    2012-12-01

    In genome-enabled prediction, parametric, semi-parametric, and non-parametric regression models have been used. This study assessed the predictive ability of linear and non-linear models using dense molecular markers. The linear models were linear on marker effects and included the Bayesian LASSO, Bayesian ridge regression, Bayes A, and Bayes B. The non-linear models (this refers to non-linearity on markers) were reproducing kernel Hilbert space (RKHS) regression, Bayesian regularized neural networks (BRNN), and radial basis function neural networks (RBFNN). These statistical models were compared using 306 elite wheat lines from CIMMYT genotyped with 1717 diversity array technology (DArT) markers and two traits, days to heading (DTH) and grain yield (GY), measured in each of 12 environments. It was found that the three non-linear models had better overall prediction accuracy than the linear regression specification. Results showed a consistent superiority of RKHS and RBFNN over the Bayesian LASSO, Bayesian ridge regression, Bayes A, and Bayes B models.

  20. Approximate formulas for elasticity of the Tornquist functions and some their advantages

    NASA Astrophysics Data System (ADS)

    Issin, Meyram

    2017-09-01

    In this article, functions of demand for prime necessity, second necessity and luxury goods depending on income are considered. These functions are called Tornquist functions. By means of the return model, the demand for prime necessity goods and second necessity goods is approximately described. Then, on the basis of the least squares method, approximate formulas for the elasticity of these Tornquist functions are obtained. To obtain an approximate formula for the elasticity of the demand function for luxury goods, a linear asymptotic formula is constructed for this function. Some benefits of the approximate formulas for the elasticity of Tornquist functions are indicated.
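
    As a concrete example of the elasticity being approximated, the type-I Tornquist function f(x) = ax/(x + b) has exact elasticity E(x) = x f'(x)/f(x) = b/(x + b), which a numerical derivative reproduces. The parameters a and b below are illustrative, not values from the article.

```python
# Type-I Tornquist demand function f(x) = a*x/(x + b) (x = income) has exact
# elasticity E(x) = x*f'(x)/f(x) = b/(x + b); a central difference reproduces
# it. Parameters a and b are illustrative, not taken from the article.
def tornquist1(x, a=10.0, b=2.0):
    return a * x / (x + b)

def elasticity_numeric(f, x, h=1e-6):
    df = (f(x + h) - f(x - h)) / (2 * h)
    return x * df / f(x)

E = elasticity_numeric(tornquist1, 3.0)
print(round(E, 6))    # exact value is b/(x+b) = 2/5 = 0.4
```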

  1. Study of Equatorial Ionospheric irregularities and Mapping of Electron Density Profiles and Ionograms

    DTIC Science & Technology

    2012-03-09

    equation is a product of a complex basis vector in Jackson and a linear combination of plane wave functions. We convert both the amplitudes and the...wave function arguments from complex scalars to complex vectors. This conversion allows us to separate the electric field vector and the imaginary...magnetic field vector, because exponentials of imaginary scalars convert vectors to imaginary vectors and vice versa, while exponentials of imaginary

  2. Agouti signaling protein stimulates cell division in "viable yellow" (A vy/a) mouse liver

    USDA-ARS?s Scientific Manuscript database

    Enhanced linear growth, hyperplasia, and tumorigenesis are well-known characteristics of "viable yellow" agouti Avy/- mice (1); however, the functional basis for this aspect of the phenotype is unknown. In the present study, we ascertained whether agouti signaling protein (ASIP) levels in Avy/a or a...

  3. The numerical study and comparison of radial basis functions in applications of the dual reciprocity boundary element method to convection-diffusion problems

    NASA Astrophysics Data System (ADS)

    Chanthawara, Krittidej; Kaennakham, Sayan; Toutip, Wattana

    2016-02-01

    The methodology of the Dual Reciprocity Boundary Element Method (DRBEM) is applied to convection-diffusion problems, and investigating its performance is the first objective of this work. Seven types of Radial Basis Functions (RBFs), Linear, Thin-plate Spline, Cubic, Compactly Supported, Inverse Multiquadric, Quadratic, and that proposed by [12], were closely investigated in order to numerically compare their effectiveness, drawbacks, etc., and this is taken as the second objective. A sufficient number of simulations were performed, covering as many aspects as possible. Validated against both exact solutions and other numerical works, the final results strongly imply that the Thin-plate Spline and Linear types of RBF are superior to the others in terms of both solution quality and CPU time spent, while the Inverse Multiquadric yields comparatively poor results. It is also found that the DRBEM can perform relatively well at a moderate level of convective force and, as anticipated, becomes unstable when the problem becomes more convection-dominated, as is normally found in all classical mesh-dependent methods.
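
    A few of the RBF types compared above, written as functions of the radial distance r = ||x - x_j||. The shape parameter c applies only to the Inverse Multiquadric, and its value here is an arbitrary illustration.

```python
import math

# A few of the RBF types compared above, as functions of r = ||x - x_j||.
# The shape parameter c applies only to the Inverse Multiquadric; its value
# here is an arbitrary illustration.
rbfs = {
    "linear":               lambda r: r,
    "cubic":                lambda r: r**3,
    "thin_plate_spline":    lambda r: r**2 * math.log(r) if r > 0 else 0.0,
    "inverse_multiquadric": lambda r, c=1.0: 1.0 / math.sqrt(r**2 + c**2),
}

for name, phi in rbfs.items():
    print(name, round(phi(2.0), 4))
```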

  4. Numerical study of the shape parameter dependence of the local radial point interpolation method in linear elasticity.

    PubMed

    Moussaoui, Ahmed; Bouziane, Touria

    2016-01-01

    The LRPIM is a meshless method featuring simple implementation of the essential boundary conditions, and it is less costly than the moving least squares (MLS) methods. The method overcomes the singularity associated with a polynomial basis by using radial basis functions. In this paper, we present a study of a 2D problem of an elastic homogeneous rectangular plate using the LRPIM. Our numerical investigation concerns the influence of different shape parameters on the convergence domain and accuracy, using the thin plate spline (TPS) radial basis function. We also present a comparison between numerical results for different materials, and the convergence domain is characterized by its maximum and minimum values as a function of the number of distributed nodes. The analytical solution of the deflection confirms the numerical results. The essential points in the method are: •The LRPIM is derived from the local weak form of the equilibrium equations for solving a thin elastic plate.•The convergence of the LRPIM method depends on a number of parameters derived from the local weak form and the sub-domains.•The effect of the number of distributed nodes is examined by varying the nature of the material and the radial basis function (TPS).

  5. Open-Ended Recursive Approach for the Calculation of Multiphoton Absorption Matrix Elements

    PubMed Central

    2015-01-01

    We present an implementation of single residues for response functions to arbitrary order using a recursive approach. Explicit expressions in terms of density-matrix-based response theory for the single residues of the linear, quadratic, cubic, and quartic response functions are also presented. These residues correspond to one-, two-, three- and four-photon transition matrix elements. The newly developed code is used to calculate the one-, two-, three- and four-photon absorption cross sections of para-nitroaniline and para-nitroaminostilbene, making this the first treatment of four-photon absorption in the framework of response theory. We find that the calculated multiphoton absorption cross sections are not very sensitive to the size of the basis set as long as a reasonably large basis set with diffuse functions is used. The choice of exchange–correlation functional, however, significantly affects the calculated cross sections of both charge-transfer transitions and other transitions, in particular, for the larger para-nitroaminostilbene molecule. We therefore recommend the use of a range-separated exchange–correlation functional in combination with the augmented correlation-consistent double-ζ basis set aug-cc-pVDZ for the calculation of multiphoton absorption properties. PMID:25821415

  6. An SVM model with hybrid kernels for hydrological time series

    NASA Astrophysics Data System (ADS)

    Wang, C.; Wang, H.; Zhao, X.; Xie, Q.

    2017-12-01

    Support Vector Machine (SVM) models have been widely applied to the forecast of climate/weather and its impact on other environmental variables such as hydrologic response to climate/weather. When using SVM, the choice of the kernel function plays the key role. Conventional SVM models mostly use one single type of kernel function, e.g., radial basis kernel function. Provided that there are several featured kernel functions available, each having its own advantages and drawbacks, a combination of these kernel functions may give more flexibility and robustness to SVM approach, making it suitable for a wide range of application scenarios. This paper presents such a linear combination of radial basis kernel and polynomial kernel for the forecast of monthly flowrate in two gaging stations using SVM approach. The results indicate significant improvement in the accuracy of predicted series compared to the approach with either individual kernel function, thus demonstrating the feasibility and advantages of such hybrid kernel approach for SVM applications.
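
    The hybrid kernel described above, a weighted combination of a Gaussian radial basis kernel and a polynomial kernel, can be sketched directly. The weight, gamma and polynomial degree below are illustrative, not the paper's fitted values.

```python
import numpy as np

# Sketch of a hybrid kernel: K = w*K_rbf + (1-w)*K_poly, combining a Gaussian
# radial basis kernel with a polynomial kernel. The weight w, gamma and the
# polynomial degree are illustrative, not the paper's fitted values.
def mixed_kernel(X, Y, w=0.7, gamma=0.1, degree=2):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    k_rbf = np.exp(-gamma * d2)
    k_poly = (X @ Y.T + 1.0) ** degree
    return w * k_rbf + (1.0 - w) * k_poly

X = np.array([[0.0, 1.0], [1.0, 0.0]])
K = mixed_kernel(X, X)
print(K.shape, round(float(K[0, 0]), 3))   # diagonal entry: 0.7*1 + 0.3*4 = 1.9
```

    A valid kernel matrix of this form can be supplied to SVR implementations that accept precomputed or user-defined kernels.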

  7. Reciprocity principle in duct acoustics

    NASA Technical Reports Server (NTRS)

    Cho, Y.-C.

    1979-01-01

    Various reciprocity relations in duct acoustics have been derived on the basis of the spatial reciprocity principle implied in Green's functions for linear waves. The derivation includes the reciprocity relations between mode conversion coefficients for reflection and transmission in nonuniform ducts, and the relation between the radiation of a mode from an arbitrarily terminated duct and the absorption of an externally incident plane wave by the duct. Such relations are well defined as long as the systems remain linear, regardless of acoustic properties of duct nonuniformities which cause the mode conversions.

  8. The solitary wave solution of coupled Klein-Gordon-Zakharov equations via two different numerical methods

    NASA Astrophysics Data System (ADS)

    Dehghan, Mehdi; Nikpour, Ahmad

    2013-09-01

    In this research, we propose two different methods to solve the coupled Klein-Gordon-Zakharov (KGZ) equations: the Differential Quadrature (DQ) and Globally Radial Basis Functions (GRBFs) methods. In the DQ method, the derivative value of a function with respect to a point is directly approximated by a linear combination of all functional values in the global domain. The principal work in this method is the determination of weight coefficients. We use two ways for obtaining these coefficients: cosine expansion (CDQ) and radial basis functions (RBFs-DQ), the former is a mesh-based method and the latter categorizes in the set of meshless methods. Unlike the DQ method, the GRBF method directly substitutes the expression of the function approximation by RBFs into the partial differential equation. The main problem in the GRBFs method is ill-conditioning of the interpolation matrix. Avoiding this problem, we study the bases introduced in Pazouki and Schaback (2011) [44]. Some examples are presented to compare the accuracy and easy implementation of the proposed methods. In numerical examples, we concentrate on Inverse Multiquadric (IMQ) and second-order Thin Plate Spline (TPS) radial basis functions. The variable shape parameter (exponentially and random) strategies are applied in the IMQ function and the results are compared with the constant shape parameter.

  9. Functional linear models for zero-inflated count data with application to modeling hospitalizations in patients on dialysis.

    PubMed

    Sentürk, Damla; Dalrymple, Lorien S; Nguyen, Danh V

    2014-11-30

    We propose functional linear models for zero-inflated count data with a focus on the functional hurdle and functional zero-inflated Poisson (ZIP) models. While the hurdle model assumes the counts come from a mixture of a degenerate distribution at zero and a zero-truncated Poisson distribution, the ZIP model considers a mixture of a degenerate distribution at zero and a standard Poisson distribution. We extend the generalized functional linear model framework with a functional predictor and multiple cross-sectional predictors to model counts generated by a mixture distribution. We propose an estimation procedure for functional hurdle and ZIP models, called penalized reconstruction, geared towards error-prone and sparsely observed longitudinal functional predictors. The approach relies on dimension reduction and pooling of information across subjects, involving basis expansions and penalized maximum likelihood techniques. The developed functional hurdle model is applied to modeling hospitalizations within the first 2 years from initiation of dialysis, with a high percentage of zeros, in the Comprehensive Dialysis Study participants. Hospitalization counts are modeled as a function of sparse longitudinal measurements of serum albumin concentrations, patient demographics, and comorbidities. Simulation studies are used to study finite sample properties of the proposed method and include comparisons with an adaptation of standard principal components regression. Copyright © 2014 John Wiley & Sons, Ltd.
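
    The ZIP mixture described above has a simple probability mass function: a point mass at zero with weight pi plus a standard Poisson with weight 1 - pi. A minimal sketch with illustrative parameters:

```python
import math

# Zero-inflated Poisson pmf: a point mass at zero with weight pi mixed with a
# standard Poisson(lam) with weight 1 - pi. Parameter values are illustrative.
def zip_pmf(k, pi, lam):
    pois = math.exp(-lam) * lam**k / math.factorial(k)
    return pi * (k == 0) + (1.0 - pi) * pois

p0 = zip_pmf(0, 0.3, 2.0)                     # inflated zero probability
total = sum(zip_pmf(k, 0.3, 2.0) for k in range(50))
print(round(p0, 4), round(total, 6))          # total ≈ 1 (valid distribution)
```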

  10. Two-dimensional mesh embedding for Galerkin B-spline methods

    NASA Technical Reports Server (NTRS)

    Shariff, Karim; Moser, Robert D.

    1995-01-01

    A number of advantages result from using B-splines as basis functions in a Galerkin method for solving partial differential equations. Among them are arbitrary order of accuracy and high resolution similar to that of compact schemes but without the aliasing error. This work develops another property, namely, the ability to treat semi-structured embedded or zonal meshes for two-dimensional geometries. This can drastically reduce the number of grid points in many applications. Both integer and non-integer refinement ratios are allowed. The report begins by developing an algorithm for choosing basis functions that yield the desired mesh resolution. These functions are suitable products of one-dimensional B-splines. Finally, test cases for linear scalar equations such as the Poisson and advection equation are presented. The scheme is conservative and has uniformly high order of accuracy throughout the domain.
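
    The B-spline basis functions underlying such a Galerkin method can be evaluated with the Cox-de Boor recursion. The sketch below is a generic textbook construction, not this report's embedding scheme; it also checks the partition-of-unity property on a clamped knot vector.

```python
# Cox-de Boor recursion for one B-spline basis function N_{i,p}(x) on a knot
# vector (a generic textbook construction, not this report's embedding
# scheme). The check below verifies the partition-of-unity property.
def bspline_basis(i, p, knots, x):
    if p == 0:
        return 1.0 if knots[i] <= x < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + p] != knots[i]:
        left = ((x - knots[i]) / (knots[i + p] - knots[i])
                * bspline_basis(i, p - 1, knots, x))
    if knots[i + p + 1] != knots[i + 1]:
        right = ((knots[i + p + 1] - x) / (knots[i + p + 1] - knots[i + 1])
                 * bspline_basis(i + 1, p - 1, knots, x))
    return left + right

knots = [0, 0, 0, 1, 2, 3, 3, 3]     # clamped knot vector, degree p = 2
s = sum(bspline_basis(i, 2, knots, 1.5) for i in range(len(knots) - 3))
print(round(s, 6))                   # 1.0: the basis sums to one in the domain
```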

  11. Linear-scaling density-functional simulations of charged point defects in Al2O3 using hierarchical sparse matrix algebra.

    PubMed

    Hine, N D M; Haynes, P D; Mostofi, A A; Payne, M C

    2010-09-21

    We present calculations of formation energies of defects in an ionic solid (Al(2)O(3)) extrapolated to the dilute limit, corresponding to a simulation cell of infinite size. The large-scale calculations required for this extrapolation are enabled by developments in the approach to parallel sparse matrix algebra operations, which are central to linear-scaling density-functional theory calculations. The computational cost of manipulating sparse matrices, whose sizes are determined by the large number of basis functions present, is greatly improved with this new approach. We present details of the sparse algebra scheme implemented in the ONETEP code using hierarchical sparsity patterns, and demonstrate its use in calculations on a wide range of systems, involving thousands of atoms on hundreds to thousands of parallel processes.

  12. A hybrid linear/nonlinear training algorithm for feedforward neural networks.

    PubMed

    McLoone, S; Brown, M D; Irwin, G; Lightbody, A

    1998-01-01

    This paper presents a new hybrid optimization strategy for training feedforward neural networks. The algorithm combines gradient-based optimization of nonlinear weights with singular value decomposition (SVD) computation of linear weights in one integrated routine. It is described for the multilayer perceptron (MLP) and radial basis function (RBF) networks and then extended to the local model network (LMN), a new feedforward structure in which a global nonlinear model is constructed from a set of locally valid submodels. Simulation results are presented demonstrating the superiority of the new hybrid training scheme compared to second-order gradient methods. It is particularly effective for the LMN architecture where the linear to nonlinear parameter ratio is large.
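
    The hybrid training split, gradient-based optimization of the nonlinear parameters interleaved with a direct SVD-based solve for the linear weights, can be illustrated for an RBF network. In this sketch the nonlinear parameters (centres and width) are simply held fixed and only the linear solve is shown; all values are illustrative.

```python
import numpy as np

# Illustration of the hybrid training split for an RBF network: the nonlinear
# parameters (centres, width) are held at current values while the linear
# output weights are obtained by an SVD-based least-squares solve (lstsq).
# All values are illustrative; a full trainer would alternate this with
# gradient steps on the centres and width.
x = np.linspace(-1.0, 1.0, 40)
y = np.sin(2.0 * x)                                # target mapping

centres = np.linspace(-1.0, 1.0, 7)                # "nonlinear" parameters
width = 0.4
Phi = np.exp(-((x[:, None] - centres[None, :]) ** 2) / (2.0 * width**2))

w, *_ = np.linalg.lstsq(Phi, y, rcond=None)        # SVD under the hood
err = float(np.abs(Phi @ w - y).max())
print(round(err, 4))
```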

  13. The Oscillating Circular Airfoil on the Basis of Potential Theory

    NASA Technical Reports Server (NTRS)

    Schade, T.; Krienes, K.

    1947-01-01

    Proceeding from the thesis by W. Kinner the present report treats the problem of the circular airfoil in uniform airflow executing small oscillations, the amplitudes of which correspond to whole functions of the second degree in x and y. The pressure distribution is secured by means of Prandtl's acceleration potential. It results in a system of linear equations the coefficients of which can be calculated exactly with the aid of exponential functions and Hankel's functions. The equations necessary are derived in part I; the numerical calculation follows in part II.

  14. Cerebellar-inspired algorithm for adaptive control of nonlinear dielectric elastomer-based artificial muscle

    PubMed Central

    Assaf, Tareq; Rossiter, Jonathan M.; Porrill, John

    2016-01-01

    Electroactive polymer actuators are important for soft robotics, but can be difficult to control because of compliance, creep and nonlinearities. Because biological control mechanisms have evolved to deal with such problems, we investigated whether a control scheme based on the cerebellum would be useful for controlling a nonlinear dielectric elastomer actuator, a class of artificial muscle. The cerebellum was represented by the adaptive filter model, and acted in parallel with a brainstem, an approximate inverse plant model. The recurrent connections between the two allowed for direct use of sensory error to adjust motor commands. Accurate tracking of a displacement command in the actuator's nonlinear range was achieved by either semi-linear basis functions in the cerebellar model or semi-linear functions in the brainstem corresponding to recruitment in biological muscle. In addition, allowing transfer of training between cerebellum and brainstem as has been observed in the vestibulo-ocular reflex prevented the steady increase in cerebellar output otherwise required to deal with creep. The extensibility and relative simplicity of the cerebellar-based adaptive-inverse control scheme suggests that it is a plausible candidate for controlling this type of actuator. Moreover, its performance highlights important features of biological control, particularly nonlinear basis functions, recruitment and transfer of training. PMID:27655667
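
    The adaptive filter model of the cerebellum mentioned above is, at its core, a linear combination of basis signals whose weights are adjusted by the sensory error (an LMS-style covariance learning rule). A minimal sketch with a made-up target mapping and semi-linear (rectified) basis functions; this is not the paper's actuator or plant model.

```python
import math

# Minimal LMS adaptive-filter sketch of the cerebellar model: a linear
# combination of semi-linear (rectified) basis signals, with weights adjusted
# by the sensory error. The target mapping and basis are made up for
# illustration; this is not the paper's actuator or plant model.
def run(steps=2000, beta=0.05):
    w = [0.0, 0.0, 0.0]
    err = 0.0
    for t in range(steps):
        x = math.sin(0.01 * t)                     # command signal
        basis = [x, max(x, 0.0), 1.0]              # semi-linear bases + bias
        target = 1.5 * x + 0.3                     # desired motor output
        out = sum(wi * bi for wi, bi in zip(w, basis))
        err = target - out
        w = [wi + beta * err * bi for wi, bi in zip(w, basis)]  # LMS update
    return w, err

w, final_err = run()
print([round(wi, 2) for wi in w], round(final_err, 4))
```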

  15. An ab initio study of the C3(+) cation using multireference methods

    NASA Technical Reports Server (NTRS)

    Taylor, Peter R.; Martin, J. M. L.; Francois, J. P.; Gijbels, R.

    1991-01-01

    The energy difference between the linear 2 sigma(sup +, sub u) and cyclic 2B(sub 2) structures of C3(+) has been investigated using large (5s3p2d1f) basis sets and multireference electron correlation treatments, including complete active space self-consistent field (CASSCF), multireference configuration interaction (MRCI), and averaged coupled-pair functional (ACPF) methods, as well as the single-reference quadratic configuration interaction (QCISD(T)) method. Our best estimate, including a correction for basis set incompleteness, is that the linear form lies above the cyclic form by 5.2(+1.5 to -1.0) kcal/mol. The 2 sigma(sup +, sub u) state is probably not a transition state, but a local minimum. Reliable computation of the cyclic/linear energy difference in C3(+) is extremely demanding of the electron correlation treatment used: of the single-reference methods previously considered, CCSD(T) and QCISD(T) perform best. The MRCI + Q(0.01)/(4s2p1d) energy separation of 1.68 kcal/mol should provide a comparison standard for other electron correlation methods applied to this system.

  16. Application of Statistic Experimental Design to Assess the Effect of Gammairradiation Pre-Treatment on the Drying Characteristics and Qualities of Wheat

    NASA Astrophysics Data System (ADS)

    Yu, Yong; Wang, Jun

    Wheat, pretreated by 60Co gamma irradiation, was dried by hot air with irradiation dosages of 0-3 kGy, drying temperatures of 40-60 °C, and initial moisture contents of 19-25% (dry basis). The drying characteristics and dried qualities of wheat were evaluated based on drying time, average dehydration rate, wet gluten content (WGC), moisture content of wet gluten (MCWG) and titratable acidity (TA). A quadratic rotation-orthogonal composite experimental design, with three variables (at five levels) and five response functions, and an analysis method were employed to study the effect of the three variables on the individual response functions. The five response functions (drying time, average dehydration rate, WGC, MCWG, TA) were correlated with these variables by second order polynomials consisting of linear, quadratic and interaction terms. A high correlation coefficient indicated the suitability of the second order polynomial to predict these response functions. The linear, interaction and quadratic effects of the three variables on the five response functions were all studied.
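
    The second order polynomial response surface used above (linear + interaction + quadratic terms) can be fitted by least squares. The data below are synthetic, constructed so the known coefficients are recovered exactly; nothing here reproduces the paper's measurements.

```python
import numpy as np

# Least-squares fit of a second order polynomial response surface (intercept,
# linear, interaction and quadratic terms). The data are synthetic and
# noiseless, so the known coefficients are recovered; nothing here reproduces
# the paper's measurements.
rng = np.random.default_rng(2)
X = rng.uniform(-1.0, 1.0, size=(30, 2))
x1, x2 = X[:, 0], X[:, 1]
y = 2.0 + x1 - 0.5 * x2 + 0.3 * x1 * x2 + 0.8 * x1**2

A = np.column_stack([np.ones(30), x1, x2, x1 * x2, x1**2, x2**2])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print(np.round(coef, 3))    # recovers [2, 1, -0.5, 0.3, 0.8, 0]
```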

  17. Resolution of identity approximation for the Coulomb term in molecular and periodic systems.

    PubMed

    Burow, Asbjörn M; Sierka, Marek; Mohamed, Fawzi

    2009-12-07

    A new formulation of the resolution of identity approximation for the Coulomb term is presented, which uses atom-centered basis and auxiliary basis functions and treats molecular and periodic systems of any dimensionality on an equal footing. It relies on the decomposition of an auxiliary charge density into charged and chargeless components. Applying the Coulomb metric under periodic boundary conditions constrains the explicit form of the charged part. The chargeless component is determined variationally and converged Coulomb lattice sums needed for its determination are obtained using chargeless linear combinations of auxiliary basis functions. The lattice sums are partitioned into near- and far-field portions which are treated through an analytical integration scheme employing two- and three-center electron repulsion integrals and multipole expansions, respectively, operating exclusively in real space. Our preliminary implementation within the TURBOMOLE program package demonstrates consistent accuracy of the method across molecular and periodic systems. Using common auxiliary basis sets, the errors of the approximation are small, on average about 20 μhartree per atom, for both molecular and periodic systems.

  18. Resolution of identity approximation for the Coulomb term in molecular and periodic systems

    NASA Astrophysics Data System (ADS)

    Burow, Asbjörn M.; Sierka, Marek; Mohamed, Fawzi

    2009-12-01

    A new formulation of the resolution of identity approximation for the Coulomb term is presented, which uses atom-centered basis and auxiliary basis functions and treats molecular and periodic systems of any dimensionality on an equal footing. It relies on the decomposition of an auxiliary charge density into charged and chargeless components. Applying the Coulomb metric under periodic boundary conditions constrains the explicit form of the charged part. The chargeless component is determined variationally and converged Coulomb lattice sums needed for its determination are obtained using chargeless linear combinations of auxiliary basis functions. The lattice sums are partitioned into near- and far-field portions which are treated through an analytical integration scheme employing two- and three-center electron repulsion integrals and multipole expansions, respectively, operating exclusively in real space. Our preliminary implementation within the TURBOMOLE program package demonstrates consistent accuracy of the method across molecular and periodic systems. Using common auxiliary basis sets, the errors of the approximation are small, on average about 20 μhartree per atom, for both molecular and periodic systems.

  19. Comparison of l₁-Norm SVR and Sparse Coding Algorithms for Linear Regression.

    PubMed

    Zhang, Qingtian; Hu, Xiaolin; Zhang, Bo

    2015-08-01

    Support vector regression (SVR) is a popular function estimation technique based on Vapnik's concept of support vector machine. Among many variants, the l1-norm SVR is known to be good at selecting useful features when the features are redundant. Sparse coding (SC) is a technique widely used in many areas and a number of efficient algorithms are available. Both l1-norm SVR and SC can be used for linear regression. In this brief, the close connection between the l1-norm SVR and SC is revealed and some typical algorithms are compared for linear regression. The results show that the SC algorithms outperform the Newton linear programming algorithm, an efficient l1-norm SVR algorithm, in efficiency. The algorithms are then used to design the radial basis function (RBF) neural networks. Experiments on some benchmark data sets demonstrate the high efficiency of the SC algorithms. In particular, one of the SC algorithms, the orthogonal matching pursuit is two orders of magnitude faster than a well-known RBF network designing algorithm, the orthogonal least squares algorithm.
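The orthogonal matching pursuit algorithm highlighted in this comparison is a short greedy procedure; the following is a generic textbook sketch on synthetic data, not the authors' implementation:

```python
import numpy as np

def omp(A, y, n_nonzero):
    """Orthogonal matching pursuit: greedily build a sparse solution of A x ~= y."""
    m, n = A.shape
    residual = y.copy()
    support = []
    x = np.zeros(n)
    for _ in range(n_nonzero):
        j = int(np.argmax(np.abs(A.T @ residual)))   # column most correlated with residual
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef          # re-fit on the support, update residual
    x[support] = coef
    return x

rng = np.random.default_rng(1)
A = rng.normal(size=(50, 20))                        # random dictionary
x_true = np.zeros(20)
x_true[[3, 7]] = [2.0, -1.5]                         # 2-sparse ground truth
y = A @ x_true                                       # noiseless observations
x_hat = omp(A, y, 2)
```

In the noiseless, well-conditioned case above, the greedy support selection recovers the true sparse coefficients.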

  20. Mixed kernel function support vector regression for global sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Cheng, Kai; Lu, Zhenzhou; Wei, Yuhao; Shi, Yan; Zhou, Yicheng

    2017-11-01

    Global sensitivity analysis (GSA) plays an important role in exploring the respective effects of input variables on an assigned output response. Among the many sensitivity analysis methods in the literature, the Sobol indices have attracted much attention since they can provide accurate information for most models. In this paper, a mixed kernel function (MKF) based support vector regression (SVR) model is employed to evaluate the Sobol indices at low computational cost. By the proposed derivation, the estimation of the Sobol indices can be obtained by post-processing the coefficients of the SVR meta-model. The MKF is constituted by the orthogonal polynomial kernel function and the Gaussian radial basis kernel function; thus the MKF possesses both the global characteristic advantage of the polynomial kernel function and the local characteristic advantage of the Gaussian radial basis kernel function. The proposed approach is suitable for high-dimensional and non-linear problems. Performance of the proposed approach is validated by various analytical functions and compared with the popular polynomial chaos expansion (PCE). Results demonstrate that the proposed approach is an efficient method for global sensitivity analysis.
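For orientation, the quantity being estimated here can be illustrated with a plain Monte Carlo "pick-freeze" estimator of first-order Sobol indices on a toy additive model f(x) = x0 + 2·x1 with x ~ U(-1,1)², whose exact indices are S0 = 1/5 and S1 = 4/5. (The paper instead extracts the indices from SVR meta-model coefficients; this sketch only shows the target quantities.)

```python
import numpy as np

def f(X):
    return X[:, 0] + 2.0 * X[:, 1]          # toy additive model

rng = np.random.default_rng(2)
N = 200_000
A = rng.uniform(-1, 1, size=(N, 2))         # two independent input samples
B = rng.uniform(-1, 1, size=(N, 2))

fA = f(A)
var = fA.var()                              # total output variance
S = []
for i in range(2):
    Ci = B.copy()
    Ci[:, i] = A[:, i]                      # freeze coordinate i, resample the rest
    # First-order index: S_i = Cov(f(A), f(C_i)) / Var(f)
    S.append(np.cov(fA, f(Ci))[0, 1] / var)
```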

  1. On the basis set convergence of electron–electron entanglement measures: helium-like systems

    PubMed Central

    Hofer, Thomas S.

    2013-01-01

    A systematic investigation of three different electron–electron entanglement measures, namely the von Neumann, the linear and the occupation number entropy at full configuration interaction level has been performed for the four helium-like systems hydride, helium, Li+ and Be2+ using a large number of different basis sets. The convergence behavior of the resulting energies and entropies revealed that the latter do not, in general, show the expected strictly monotonic increase upon increase of the one–electron basis. Overall, the three different entanglement measures show good agreement among each other, the largest deviations being observed for small basis sets. The data clearly demonstrates that it is important to consider the nature of the chemical system when investigating entanglement phenomena in the framework of Gaussian type basis sets: while in case of hydride the use of augmentation functions is crucial, the application of core functions greatly improves the accuracy in case of cationic systems such as Li+ and Be2+. In addition, numerical derivatives of the entanglement measures with respect to the nuclear charge have been determined, which proved to be a very sensitive probe of the convergence leading to qualitatively wrong results (i.e., the wrong sign) if too small basis sets are used. PMID:24790952
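The two spectrum-based measures compared above are simple functions of the reduced-density-matrix eigenvalues. A minimal sketch with a toy occupation spectrum (illustrative numbers, not data from the paper):

```python
import numpy as np

# Von Neumann and linear entropies from a toy natural-orbital occupation
# spectrum (eigenvalues of a reduced density matrix, normalized to 1).
occ = np.array([0.96, 0.03, 0.01])              # toy spectrum, sums to 1

S_vn = -np.sum(occ * np.log(occ))               # von Neumann entropy
S_lin = 1.0 - np.sum(occ ** 2)                  # linear entropy

# Maximally mixed reference spectrum: upper bounds for both measures.
uniform = np.full(3, 1.0 / 3.0)
S_vn_max = -np.sum(uniform * np.log(uniform))   # = ln 3
S_lin_max = 1.0 - np.sum(uniform ** 2)          # = 2/3
```

Both entropies vanish for a pure (idempotent) spectrum and grow toward their bounds as the spectrum spreads out, which is why they track each other closely.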

  2. On the basis set convergence of electron-electron entanglement measures: helium-like systems.

    PubMed

    Hofer, Thomas S

    2013-01-01

    A systematic investigation of three different electron-electron entanglement measures, namely the von Neumann, the linear and the occupation number entropy at full configuration interaction level has been performed for the four helium-like systems hydride, helium, Li+ and Be2+ using a large number of different basis sets. The convergence behavior of the resulting energies and entropies revealed that the latter do not, in general, show the expected strictly monotonic increase upon increase of the one-electron basis. Overall, the three different entanglement measures show good agreement among each other, the largest deviations being observed for small basis sets. The data clearly demonstrates that it is important to consider the nature of the chemical system when investigating entanglement phenomena in the framework of Gaussian type basis sets: while in case of hydride the use of augmentation functions is crucial, the application of core functions greatly improves the accuracy in case of cationic systems such as Li+ and Be2+. In addition, numerical derivatives of the entanglement measures with respect to the nuclear charge have been determined, which proved to be a very sensitive probe of the convergence leading to qualitatively wrong results (i.e., the wrong sign) if too small basis sets are used.

  3. Optimization of selected molecular orbitals in group basis sets.

    PubMed

    Ferenczy, György G; Adams, William H

    2009-04-07

    We derive a local basis equation which may be used to determine the orbitals of a group of electrons in a system when the orbitals of that group are represented by a group basis set, i.e., not the basis set one would normally use but a subset suited to a specific electronic group. The group orbitals determined by the local basis equation minimize the energy of a system when a group basis set is used and the orbitals of other groups are frozen. In contrast, under the constraint of a group basis set, the group orbitals satisfying the Huzinaga equation do not minimize the energy. In a test of the local basis equation on HCl, the group basis set included only 12 of the 21 functions in a basis set one might ordinarily use, but the calculated active orbital energies were within 0.001 hartree of the values obtained by solving the Hartree-Fock-Roothaan (HFR) equation using all 21 basis functions. The total energy found was just 0.003 hartree higher than the HFR value. The errors with the group basis set approximation to the Huzinaga equation were larger by over two orders of magnitude. Similar results were obtained for PCl3 with the group basis approximation. Retaining more basis functions allows even higher accuracy, as shown by the perfect reproduction of the HFR energy of HCl with 16 out of 21 basis functions in the valence basis set. When the core basis set was also truncated, no additional error was introduced in the calculations performed for HCl with various basis sets. The same calculations with fixed core orbitals taken from isolated heavy atoms added a small error of about 10^-4 hartree. This offers a practical way to calculate wave functions with predetermined fixed core and reduced base valence orbitals at reduced computational costs. The local basis equation can also be used to combine the above approximations with the assignment of local basis sets to groups of localized valence molecular orbitals and to derive a priori localized orbitals. An appropriately chosen localization and basis set assignment allowed a reproduction of the energy of n-hexane with an error of 10^-5 hartree, while the energy difference between its two conformers was reproduced with a similar accuracy for several combinations of localizations and basis set assignments. These calculations include localized orbitals extending to 4-5 heavy atoms and thus they require solving reduced-dimension secular equations. The dimensions are not expected to increase with increasing system size and thus the local basis equation may find use in linear scaling electronic structure calculations.

  4. Mapping the genome of meta-generalized gradient approximation density functionals: The search for B97M-V

    NASA Astrophysics Data System (ADS)

    Mardirossian, Narbe; Head-Gordon, Martin

    2015-02-01

    A meta-generalized gradient approximation density functional paired with the VV10 nonlocal correlation functional is presented. The functional form is selected from more than 10^10 choices carved out of a functional space of almost 10^40 possibilities. Raw data come from training a vast number of candidate functional forms on a comprehensive training set of 1095 data points and testing the resulting fits on a comprehensive primary test set of 1153 data points. Functional forms are ranked based on their ability to reproduce the data in both the training and primary test sets with minimum empiricism, and filtered based on a set of physical constraints and an often-overlooked condition of satisfactory numerical precision with medium-sized integration grids. The resulting optimal functional form has 4 linear exchange parameters, 4 linear same-spin correlation parameters, and 4 linear opposite-spin correlation parameters, for a total of 12 fitted parameters. The final density functional, B97M-V, is further assessed on a secondary test set of 212 data points, applied to several large systems including the coronene dimer and water clusters, tested for the accurate prediction of intramolecular and intermolecular geometries, verified to have a readily attainable basis set limit, and checked for grid sensitivity. Compared to existing density functionals, B97M-V is remarkably accurate for non-bonded interactions and very satisfactory for thermochemical quantities such as atomization energies, but inherits the demonstrable limitations of existing local density functionals for barrier heights.

  5. Enhancing sparsity of Hermite polynomial expansions by iterative rotations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Xiu; Lei, Huan; Baker, Nathan A.

    2016-02-01

    Compressive sensing has become a powerful addition to uncertainty quantification in recent years. This paper identifies new bases for random variables through linear mappings such that the representation of the quantity of interest in the new basis functions, associated with the new random variables, is sparser. This sparsity increases both the efficiency and accuracy of the compressive sensing-based uncertainty quantification method. Specifically, we consider rotation-based linear mappings which are determined iteratively for Hermite polynomial expansions. We demonstrate the effectiveness of the new method with applications in solving stochastic partial differential equations and high-dimensional (O(100)) problems.

  6. Precision Interval Estimation of the Response Surface by Means of an Integrated Algorithm of Neural Network and Linear Regression

    NASA Technical Reports Server (NTRS)

    Lo, Ching F.

    1999-01-01

    The integration of Radial Basis Function Networks and Back Propagation Neural Networks with Multiple Linear Regression has been accomplished to map nonlinear response surfaces over a wide range of independent variables in the process of the Modern Design of Experiments. The integrated method is capable of estimating precision intervals, including confidence and prediction intervals. The power of the innovative method has been demonstrated by applying it to a set of wind tunnel test data in the construction of a response surface and the estimation of precision intervals.
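The linear-regression half of such an approach yields prediction intervals in closed form. A minimal sketch on synthetic data, using the large-sample z = 1.96 in place of the exact t quantile (a generic illustration, not the paper's integrated algorithm):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
x = rng.uniform(0, 10, n)
y = 3.0 + 0.5 * x + rng.normal(0.0, 1.0, n)      # true line plus unit noise

X = np.column_stack([np.ones(n), x])             # design matrix with intercept
beta, *_ = np.linalg.lstsq(X, y, rcond=None)     # ordinary least squares
resid = y - X @ beta
s2 = resid @ resid / (n - 2)                     # residual variance estimate

x_star = np.array([1.0, 5.0])                    # predict at x = 5
y_hat = x_star @ beta
# Prediction s.e. includes both parameter and observation uncertainty.
se_pred = np.sqrt(s2 * (1.0 + x_star @ np.linalg.inv(X.T @ X) @ x_star))
lo, hi = y_hat - 1.96 * se_pred, y_hat + 1.96 * se_pred
```

Dropping the `1.0 +` term inside the square root gives the narrower confidence interval for the mean response instead of the prediction interval for a new observation.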

  7. Ultrasonic characterization of the nonlinear elastic properties of unidirectional graphite/epoxy composites

    NASA Technical Reports Server (NTRS)

    Prosser, William H.

    1987-01-01

    The theoretical treatment of linear and nonlinear elasticity in a unidirectionally fiber reinforced composite as well as measurements for a unidirectional graphite/epoxy composite (T300/5208) are presented. Linear elastic properties were measured by both ultrasonic and strain gage measurements. The nonlinear properties were determined by measuring changes in ultrasonic natural phase velocity with a pulsed phase locked loop interferometer as a function of stress and temperature. These measurements provide the basis for further investigations into the relationship between nonlinear elastic properties and other important properties such as strength and fiber-matrix interfacial strength in graphite/epoxy composites.

  8. Carcinogenesis: alterations in reciprocal interactions of normal functional structure of biologic systems.

    PubMed

    Davydyan, Garri

    2015-12-01

    The evolution of biologic systems (BS) includes functional mechanisms that in some conditions may lead to the development of cancer. Using mathematical group theory and matrix analysis, previously, it was shown that normally functioning BS are steady functional structures regulated by three basis regulatory components: reciprocal links (RL), negative feedback (NFB) and positive feedback (PFB). Together, they form an integrative unit maintaining the system's autonomy and functional stability. It is proposed that phylogenetic development of different species is implemented by the splitting of "rudimentary" characters into two relatively independent functional parts that become encoded in chromosomes. The functional correlate of splitting mechanisms is RL. Inversion of phylogenetic mechanisms during ontogenetic development drives cell differentiation until cells reach mature states. Deterioration of reciprocal structure in the genome during ontogenesis gives rise to pathological conditions characterized by unsteadiness of the system. Uncontrollable cell proliferation and invasive cell growth are the leading features of the functional outcomes of malfunctioning systems. The regulatory element responsible for these changes is RL. In matrix language, pathological regulation is represented by matrices having positive values of diagonal elements (TrA > 0) and also positive values of matrix determinant (detA > 0). Regulatory structures of that kind can be obtained if the negative entry of the matrix corresponding to RL is replaced with the positive one. To describe not only normal but also pathological states of BS, a unit matrix should be added to the basis matrices representing RL, NFB and PFB. A mathematical structure corresponding to the set of these four basis functional patterns (matrices) is a split quaternion (coquaternion). The structure and specific role of basis elements comprising the four-dimensional linear space of split quaternions help to understand what changes in the mechanism of cell differentiation may lead to cancer development.

  9. Multiuser receiver for DS-CDMA signals in multipath channels: an enhanced multisurface method.

    PubMed

    Mahendra, Chetan; Puthusserypady, Sadasivan

    2006-11-01

    This paper deals with the problem of multiuser detection in direct-sequence code-division multiple-access (DS-CDMA) systems in multipath environments. The existing multiuser detectors can be divided into two categories: (1) low-complexity poor-performance linear detectors and (2) high-complexity good-performance nonlinear detectors. In particular, in channels where the orthogonality of the code sequences is destroyed by multipath, detectors with linear complexity perform much worse than the nonlinear detectors. In this paper, we propose an enhanced multisurface method (EMSM) for multiuser detection in multipath channels. EMSM is an intermediate piecewise linear detection scheme with a run-time complexity linear in the number of users. Its bit error rate performance is compared with existing linear detectors, a nonlinear radial basis function detector trained by the new support vector learning algorithm, and Verdu's optimal detector. Simulations in multipath channels, for both synchronous and asynchronous cases, indicate that it always outperforms all other linear detectors, performing nearly as well as nonlinear detectors.

  10. Blending Velocities In Task Space In Computing Robot Motions

    NASA Technical Reports Server (NTRS)

    Volpe, Richard A.

    1995-01-01

    Blending of linear and angular velocities between sequential specified points in task space constitutes theoretical basis of improved method of computing trajectories followed by robotic manipulators. In method, generalized velocity-vector-blending technique provides relatively simple, common conceptual framework for blending linear, angular, and other parametric velocities. Velocity vectors originate from straight-line segments connecting specified task-space points, called "via frames" and represent specified robot poses. Linear-velocity-blending functions chosen from among first-order, third-order-polynomial, and cycloidal options. Angular velocities blended by use of first-order approximation of previous orientation-matrix-blending formulation. Angular-velocity approximation yields small residual error, quantified and corrected. Method offers both relative simplicity and speed needed for generation of robot-manipulator trajectories in real time.
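The velocity-vector-blending idea can be sketched with the cycloidal option named above, whose weight s(t) = t - sin(2πt)/(2π) has zero slope at both ends of the blend window. The numbers here are illustrative, not from the paper:

```python
import numpy as np

def cycloidal(t):
    """Cycloidal blend weight: s(0) = 0, s(1) = 1, zero slope at both ends."""
    return t - np.sin(2.0 * np.pi * t) / (2.0 * np.pi)

def blend_velocity(v1, v2, t):
    """t in [0, 1] across the blend window; returns the blended velocity vector."""
    s = cycloidal(np.clip(t, 0.0, 1.0))
    return (1.0 - s) * np.asarray(v1) + s * np.asarray(v2)

v1 = np.array([0.2, 0.0, 0.0])   # incoming segment linear velocity (m/s)
v2 = np.array([0.0, 0.1, 0.0])   # outgoing segment linear velocity (m/s)
v_mid = blend_velocity(v1, v2, 0.5)
```

Swapping `cycloidal` for the identity gives the first-order (linear) blend; a third-order polynomial weight 3t² - 2t³ is another common choice.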

  11. Benchmarking of density functionals for a soft but accurate prediction and assignment of 1H and 13C NMR chemical shifts in organic and biological molecules.

    PubMed

    Benassi, Enrico

    2017-01-15

    A number of programs and tools that simulate 1H and 13C nuclear magnetic resonance (NMR) chemical shifts using empirical approaches are available. These tools are user-friendly, but they provide a very rough (and sometimes misleading) estimation of the NMR properties, especially for complex systems. Rigorous and reliable ways to predict and interpret NMR properties of simple and complex systems are available in many popular computational program packages. Nevertheless, experimentalists keep relying on these "unreliable" tools in their daily work because, to have a sufficiently high accuracy, these rigorous quantum mechanical methods need high levels of theory. An alternative, efficient, semi-empirical approach has been proposed by Bally, Rablen, Tantillo, and coworkers. This idea consists of creating linear calibration models, on the basis of the application of different combinations of functionals and basis sets. Following this approach, the predictive capability of a wider range of popular functionals was systematically investigated and tested. The NMR chemical shifts were computed in solvated phase at density functional theory level, using 30 different functionals coupled with three different triple-ζ basis sets. © 2016 Wiley Periodicals, Inc.

  12. Big geo data surface approximation using radial basis functions: A comparative study

    NASA Astrophysics Data System (ADS)

    Majdisova, Zuzana; Skala, Vaclav

    2017-12-01

    Approximation of scattered data is often a task in many engineering problems. The Radial Basis Function (RBF) approximation is appropriate for big scattered datasets in n-dimensional space. It is a non-separable approximation, as it is based on the distance between two points. This method leads to the solution of an overdetermined linear system of equations. In this paper the RBF approximation methods are briefly described, a new approach to the RBF approximation of big datasets is presented, and a comparison for different Compactly Supported RBFs (CS-RBFs) is made with respect to the accuracy of the computation. The proposed approach uses symmetry of a matrix, partitioning the matrix into blocks and data structures for storage of the sparse matrix. The experiments are performed for synthetic and real datasets.
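The overdetermined least-squares formulation described above can be sketched with one example CS-RBF, the Wendland C2 kernel φ(r) = (1-r)⁴₊(4r+1); the dataset, center placement, and support radius below are illustrative choices, not those of the study:

```python
import numpy as np

def wendland_c2(r):
    """Compactly supported Wendland C2 RBF: zero for r >= 1."""
    return np.clip(1.0 - r, 0.0, None) ** 4 * (4.0 * r + 1.0)

rng = np.random.default_rng(4)
pts = rng.uniform(0, 1, size=(400, 2))            # scattered data sites
f = np.sin(2 * np.pi * pts[:, 0]) * pts[:, 1]     # sampled surface values

centers = rng.uniform(0, 1, size=(60, 2))         # fewer centers than data points
support = 0.5                                     # CS-RBF support radius
r = np.linalg.norm(pts[:, None, :] - centers[None, :, :], axis=2) / support
B = wendland_c2(r)                                # 400 x 60 collocation matrix

coef, *_ = np.linalg.lstsq(B, f, rcond=None)      # overdetermined linear system
rms = np.sqrt(np.mean((B @ coef - f) ** 2))       # approximation error at the sites
```

Because each basis function vanishes beyond its support radius, `B` is sparse in effect, which is what the paper's block partitioning and sparse storage exploit at scale.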

  13. Computational prediction of the pKas of small peptides through Conceptual DFT descriptors

    NASA Astrophysics Data System (ADS)

    Frau, Juan; Hernández-Haro, Noemí; Glossman-Mitnik, Daniel

    2017-03-01

    The experimental pKa values of a group of simple amines have been plotted against several Conceptual DFT descriptors calculated by means of different density functionals, basis sets and solvation schemes. It was found that the best fits are those that relate the pKa of the amines with the global hardness η through the MN12SX density functional in connection with the Def2TZVP basis set and the SMD solvation model, using water as a solvent. The parameterized equation resulting from the linear regression analysis has then been used for the prediction of the pKa of small peptides of interest in the study of diabetes and Alzheimer's disease. The accuracy of the results is relatively good, with a MAD of 0.36 units of pKa.
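The calibration step amounts to a one-descriptor linear fit. A minimal sketch with synthetic descriptor/pKa pairs (the values below are invented for illustration, not the paper's data):

```python
import numpy as np

# Hypothetical calibration: linear fit of experimental pKa against a computed
# global hardness eta. All numbers are synthetic toy values.
eta = np.array([5.1, 5.4, 5.8, 6.0, 6.3, 6.7])    # toy descriptor values
pka = np.array([10.6, 10.2, 9.7, 9.4, 9.0, 8.5])  # toy experimental pKa

slope, intercept = np.polyfit(eta, pka, 1)        # parameterized equation
pred = slope * eta + intercept                    # back-predict the training set
mad = np.mean(np.abs(pred - pka))                 # mean absolute deviation
```

Once calibrated, the same `slope`/`intercept` pair is applied to descriptors computed for new systems (here, the small peptides of interest).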

  14. Stochastic theory of polarized light in nonlinear birefringent media: An application to optical rotation

    NASA Astrophysics Data System (ADS)

    Tsuchida, Satoshi; Kuratsuji, Hiroshi

    2018-05-01

    A stochastic theory is developed for light transmitted through optical media exhibiting linear and nonlinear birefringence. The starting point is the two-component nonlinear Schrödinger equation (NLSE). On the basis of the ansatz of a “soliton” solution for the NLSE, the evolution equation for the Stokes parameters is derived, which turns out to be a Langevin equation once the randomness and dissipation inherent in the birefringent media are taken into account. The Langevin equation is converted to the Fokker-Planck (FP) equation for the probability distribution by employing the technique of functional integration under the assumption of Gaussian white noise for the random fluctuation. The specific application is considered for optical rotation, which is described by the ellipticity (the third component of the Stokes parameters) alone: (i) an asymptotic analysis is given for the functional integral, which leads to the transition rate on the Poincaré sphere; (ii) the FP equation is analyzed in the strong coupling approximation, by which diffusive behavior is obtained for the linear and nonlinear birefringence. These would provide a basis for statistical analysis of polarization phenomena in nonlinear birefringent media.
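The Langevin picture for a single Stokes component can be illustrated with a linear toy model: ds₃ = -γ s₃ dt + σ dW, an Ornstein-Uhlenbeck process standing in for the dissipative and random birefringence terms (γ and σ are assumed constants chosen for the demo, not quantities from the paper):

```python
import numpy as np

# Euler-Maruyama simulation of a toy Langevin equation for the ellipticity s3.
rng = np.random.default_rng(5)
gamma, sigma = 1.0, 0.3            # assumed damping and noise strengths
dt, nsteps, npaths = 1e-3, 5000, 2000

s3 = np.ones(npaths)               # start fully circularly polarized (s3 = 1)
for _ in range(nsteps):
    s3 += -gamma * s3 * dt + sigma * np.sqrt(dt) * rng.normal(size=npaths)

# The stationary variance of this OU process is sigma^2 / (2 * gamma) = 0.045,
# the kind of diffusive equilibrium the Fokker-Planck analysis characterizes.
var_est = s3.var()
```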

  15. Wave scattering from random sets of closely spaced objects through linear embedding via Green's operators

    NASA Astrophysics Data System (ADS)

    Lancellotti, V.; de Hon, B. P.; Tijhuis, A. G.

    2011-08-01

    In this paper we present the application of linear embedding via Green's operators (LEGO) to the solution of the electromagnetic scattering from clusters of arbitrary (both conducting and penetrable) bodies randomly placed in a homogeneous background medium. In the LEGO method the objects are enclosed within simple-shaped bricks described in turn via scattering operators of equivalent surface current densities. Such operators have to be computed only once for a given frequency, and hence they can be re-used to perform the study of many distributions comprising the same objects located in different positions. The surface integral equations of LEGO are solved via the Method of Moments combined with the Adaptive Cross Approximation (to save memory) and Arnoldi basis functions (to compress the system). By means of purposefully selected numerical experiments we discuss the time requirements with respect to the geometry of a given distribution. In addition, we derive an approximate relationship between the (near-field) accuracy of the computed solution and the number of Arnoldi basis functions used to obtain it. This result endows LEGO with a handy practical criterion for both estimating the error and keeping it in check.

  16. Electroencephalography (EEG) forward modeling via H(div) finite element sources with focal interpolation.

    PubMed

    Pursiainen, S; Vorwerk, J; Wolters, C H

    2016-12-21

    The goal of this study is to develop focal, accurate and robust finite element method (FEM) based approaches which can predict the electric potential on the surface of the computational domain given its structure and internal primary source current distribution. While conducting an EEG evaluation, the placement of source currents to the geometrically complex grey matter compartment is a challenging but necessary task to avoid forward errors attributable to tissue conductivity jumps. Here, this task is approached via a mathematically rigorous formulation, in which the current field is modeled via divergence conforming H(div) basis functions. Both linear and quadratic functions are used while the potential field is discretized via the standard linear Lagrangian (nodal) basis. The resulting model includes dipolar sources which are interpolated into a random set of positions and orientations utilizing two alternative approaches: the position based optimization (PBO) and the mean position/orientation (MPO) method. These results demonstrate that the present dipolar approach can reach or even surpass, at least in some respects, the accuracy of two classical reference methods, the partial integration (PI) and St. Venant (SV) approach which utilize monopolar loads instead of dipolar currents.

  17. Fermionic Approach to Weighted Hurwitz Numbers and Topological Recursion

    NASA Astrophysics Data System (ADS)

    Alexandrov, A.; Chapuy, G.; Eynard, B.; Harnad, J.

    2017-12-01

    A fermionic representation is given for all the quantities entering in the generating function approach to weighted Hurwitz numbers and topological recursion. This includes: KP and 2D Toda τ-functions of hypergeometric type, which serve as generating functions for weighted single and double Hurwitz numbers; the Baker function, which is expanded in an adapted basis obtained by applying the same dressing transformation to all vacuum basis elements; the multipair correlators and the multicurrent correlators. Multiplicative recursion relations and a linear differential system are deduced for the adapted bases and their duals, and a Christoffel-Darboux type formula is derived for the pair correlator. The quantum and classical spectral curves linking this theory with the topological recursion program are derived, as well as the generalized cut-and-join equations. The results are detailed for four special cases: the simple single and double Hurwitz numbers, the weakly monotone case, corresponding to signed enumeration of coverings, the strongly monotone case, corresponding to Belyi curves and the simplest version of quantum weighted Hurwitz numbers.

  18. Fermionic Approach to Weighted Hurwitz Numbers and Topological Recursion

    NASA Astrophysics Data System (ADS)

    Alexandrov, A.; Chapuy, G.; Eynard, B.; Harnad, J.

    2018-06-01

    A fermionic representation is given for all the quantities entering in the generating function approach to weighted Hurwitz numbers and topological recursion. This includes: KP and 2D Toda τ-functions of hypergeometric type, which serve as generating functions for weighted single and double Hurwitz numbers; the Baker function, which is expanded in an adapted basis obtained by applying the same dressing transformation to all vacuum basis elements; the multipair correlators and the multicurrent correlators. Multiplicative recursion relations and a linear differential system are deduced for the adapted bases and their duals, and a Christoffel-Darboux type formula is derived for the pair correlator. The quantum and classical spectral curves linking this theory with the topological recursion program are derived, as well as the generalized cut-and-join equations. The results are detailed for four special cases: the simple single and double Hurwitz numbers, the weakly monotone case, corresponding to signed enumeration of coverings, the strongly monotone case, corresponding to Belyi curves and the simplest version of quantum weighted Hurwitz numbers.

  19. Estimation of parameters in Shot-Noise-Driven Doubly Stochastic Poisson processes using the EM algorithm--modeling of pre- and postsynaptic spike trains.

    PubMed

    Mino, H

    2007-01-01

    The aim is to estimate the parameters, namely the impulse response (IR) functions of the linear time-invariant systems generating the intensity processes, in Shot-Noise-Driven Doubly Stochastic Poisson Processes (SND-DSPPs), under the assumption that multivariate presynaptic spike trains and postsynaptic spike trains can be modeled by SND-DSPPs. An explicit formula for estimating the IR functions from observations of the multivariate input processes of the linear systems and the corresponding counting process (output process) is derived utilizing the expectation maximization (EM) algorithm. The validity of the estimation formula was verified through Monte Carlo simulations in which two presynaptic spike trains and one postsynaptic spike train were assumed to be observable. The IR functions estimated on the basis of the proposed identification method were close to the true IR functions. The proposed method will play an important role in identifying the input-output relationship of pre- and postsynaptic neural spike trains in practical situations.

  20. Light scattering by lunar-like particle size distributions

    NASA Technical Reports Server (NTRS)

    Goguen, Jay D.

    1991-01-01

    A fundamental input to models of light scattering from planetary regoliths is the mean phase function of the regolith particles. Using the known size distribution for typical lunar soils, the mean phase function and mean linear polarization for a regolith volume element of spherical particles of any composition were calculated from Mie theory. The two contour plots given here summarize the changes in the mean phase function and linear polarization with changes in the real part of the complex index of refraction, n - ik, for k equals 0.01, the visible wavelength 0.55 micrometers, and the particle size distribution of the typical mature lunar soil 72141. A second figure is a similar index-phase surface, except with k equals 0.1. The index-phase surfaces from this survey are a first order description of scattering by lunar-like regoliths of spherical particles of arbitrary composition. They form the basis of functions that span a large range of parameter-space.

  1. Weighted functional linear regression models for gene-based association analysis.

    PubMed

    Belonogova, Nadezhda M; Svishcheva, Gulnara R; Wilson, James F; Campbell, Harry; Axenovich, Tatiana I

    2018-01-01

    Functional linear regression models are effectively used in gene-based association analysis of complex traits. These models combine information about individual genetic variants, taking into account their positions and reducing the influence of noise and/or observation errors. To increase the power of methods, where several differently informative components are combined, weights are introduced to give the advantage to more informative components. Allele-specific weights have been introduced to collapsing and kernel-based approaches to gene-based association analysis. Here we have for the first time introduced weights to functional linear regression models adapted for both independent and family samples. Using data simulated on the basis of GAW17 genotypes and weights defined by allele frequencies via the beta distribution, we demonstrated that type I errors correspond to declared values and that increasing the weights of causal variants allows the power of functional linear models to be increased. We applied the new method to real data on blood pressure from the ORCADES sample. Five of the six known genes with P < 0.1 in at least one analysis had lower P values with weighted models. Moreover, we found an association between diastolic blood pressure and the VMP1 gene (P = 8.18×10^-6), when we used a weighted functional model. For this gene, the unweighted functional and weighted kernel-based models had P = 0.004 and 0.006, respectively. The new method has been implemented in the program package FREGAT, which is freely available at https://cran.r-project.org/web/packages/FREGAT/index.html.
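The "weights defined by allele frequencies via the beta distribution" step can be sketched directly; the shape parameters a = 1, b = 25 below are the common defaults from kernel-based association tests, used here as an illustrative assumption rather than the paper's exact choice:

```python
import math

def beta_weight(maf, a=1.0, b=25.0):
    """Weight a variant by the beta density evaluated at its minor allele
    frequency; with a=1, b=25 (assumed defaults) rare variants are up-weighted."""
    c = math.gamma(a + b) / (math.gamma(a) * math.gamma(b))
    return c * maf ** (a - 1) * (1.0 - maf) ** (b - 1)

mafs = [0.001, 0.01, 0.05, 0.20]          # rare to common variants
weights = [beta_weight(p) for p in mafs]  # strictly decreasing in frequency
```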

  2. A Note on a Sampling Theorem for Functions over GF(q)n Domain

    NASA Astrophysics Data System (ADS)

    Ukita, Yoshifumi; Saito, Tomohiko; Matsushima, Toshiyasu; Hirasawa, Shigeichi

    In digital signal processing, the sampling theorem states that any real-valued function ƒ can be reconstructed from a sequence of values of ƒ that are discretely sampled with a frequency at least twice as high as the maximum frequency of the spectrum of ƒ. This theorem can also be applied to functions over a finite domain. Then, the range of frequencies of ƒ can be expressed in more detail by using a bounded set instead of the maximum frequency. A function whose range of frequencies is confined to a bounded set is referred to as a bandlimited function, and a sampling theorem for bandlimited functions over the Boolean domain has been obtained. Here, it is important to obtain a sampling theorem for bandlimited functions not only over the Boolean (GF(2)n) domain but also over the GF(q)n domain, where q is a prime power and GF(q) is the Galois field of order q. For example, in experimental designs, although the model can be expressed as a linear combination of the Fourier basis functions and the levels of each factor can be represented by GF(q)n, the number of levels often takes a value greater than two. However, a sampling theorem for bandlimited functions over the GF(q)n domain has not been obtained. On the other hand, the sampling points are closely related to the codewords of a linear code. However, the relation between the parity check matrix of a linear code and distinct error vectors, which is necessary for understanding the meaning of the sampling theorem for bandlimited functions, has not been established. In this paper, we generalize the sampling theorem for bandlimited functions over the Boolean domain to a sampling theorem for bandlimited functions over the GF(q)n domain. We also present a theorem for the relation between the parity check matrix of a linear code and distinct error vectors. Lastly, we clarify the relation between the sampling theorem for functions over the GF(q)n domain and linear codes.
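    The parity-check/error-vector relation can be illustrated over GF(2) with the (7,4) Hamming code, whose parity check matrix has the numbers 1–7 as binary columns: distinct single-bit error vectors then yield distinct nonzero syndromes. This is a hedged toy example over GF(2), not the paper's GF(q)n construction.

    ```python
    # Parity check matrix of the (7,4) Hamming code: column c is c in binary.
    H = [[(c >> r) & 1 for c in range(1, 8)] for r in range(3)]

    def syndrome(H, e):
        """Syndrome H e^T over GF(2)."""
        return tuple(sum(H[r][c] * e[c] for c in range(len(e))) % 2
                     for r in range(len(H)))

    # All seven single-bit error vectors.
    errors = [[1 if i == j else 0 for i in range(7)] for j in range(7)]
    syndromes = [syndrome(H, e) for e in errors]

    # Distinct correctable errors produce distinct (nonzero) syndromes,
    # while the zero error vector produces the zero syndrome.
    assert len(set(syndromes)) == 7
    assert syndrome(H, [0] * 7) == (0, 0, 0)
    ```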

  3. Compressive Detection of Highly Overlapped Spectra Using Walsh-Hadamard-Based Filter Functions.

    PubMed

    Corcoran, Timothy C

    2018-03-01

    In the chemometric context in which the spectral loadings of the analytes are already known, spectral filter functions may be constructed that allow the scores of mixtures of analytes to be determined directly, on the fly, by applying a compressive detection strategy. Rather than collecting the entire spectrum over the relevant region for the mixture, a filter function may be applied within the spectrometer itself so that only the scores are recorded. Consequently, compressive detection shrinks data sets tremendously. The Walsh functions, the binary basis used in Walsh-Hadamard transform spectroscopy, form a complete orthonormal set well suited to compressive detection. A method for constructing filter functions from binary fourfold linear combinations of Walsh functions is detailed, using mathematics borrowed from genetic algorithm work as a means of optimizing the functions for a specific set of analytes. These filter functions can be constructed to automatically strip the baseline from the analysis. Monte Carlo simulations were performed with a mixture of four highly overlapped Raman loadings and with ten excitation-emission matrix loadings; both sets showed a very high degree of spectral overlap. Reasonable estimates of the true scores were obtained in both simulations using noisy data sets, demonstrating the linearity of the method.
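    The role of the Walsh basis can be sketched with a Sylvester-type Hadamard construction: because the Walsh functions are orthogonal and binary-valued, the score of each component of a mixture is recovered by a single inner product, which is all a filter function needs to record. A hedged sketch with a synthetic spectrum; the fourfold combinations and genetic-algorithm optimization of the paper are omitted.

    ```python
    def hadamard(n):
        """Sylvester construction of an n x n Hadamard matrix (n a power of 2)."""
        H = [[1]]
        while len(H) < n:
            H = [row + row for row in H] + \
                [row + [-x for x in row] for row in H]
        return H

    N = 8
    H = hadamard(N)
    # Synthetic "spectrum": a mixture of Walsh functions 1 and 3 with scores 2.0, 0.5.
    spectrum = [2.0 * H[1][i] + 0.5 * H[3][i] for i in range(N)]
    # Compressive detection: record only inner products with the filter functions.
    scores = [sum(H[k][i] * spectrum[i] for i in range(N)) / N for k in range(N)]
    assert abs(scores[1] - 2.0) < 1e-12 and abs(scores[3] - 0.5) < 1e-12
    ```

    Orthogonality is what makes the recovered scores exact here; with noisy, non-orthogonal loadings the scores become least-squares estimates, as in the paper's simulations.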

  4. Cost drivers and resource allocation in military health care systems.

    PubMed

    Fulton, Larry; Lasdon, Leon S; McDaniel, Reuben R

    2007-03-01

    This study illustrates the feasibility of incorporating technical efficiency considerations in the funding of military hospitals and identifies the primary drivers for hospital costs. Secondary data collected for 24 U.S.-based Army hospitals and medical centers for the years 2001 to 2003 are the basis for this analysis. Technical efficiency was measured by using data envelopment analysis; subsequently, efficiency estimates were included in logarithmic-linear cost models that specified cost as a function of volume, complexity, efficiency, time, and facility type. These logarithmic-linear models were compared against stochastic frontier analysis models. A parsimonious, three-variable, logarithmic-linear model composed of volume, complexity, and efficiency variables exhibited a strong linear relationship with observed costs (R^2 = 0.98). This model also proved reliable in forecasting (R^2 = 0.96). Based on our analysis, as much as $120 million might be reallocated to improve the United States-based Army hospital performance evaluated in this study.
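    The flavor of a logarithmic-linear cost model can be sketched as ordinary least squares on log-transformed variables: a power-law relation cost = a·volume^b becomes linear after taking logs. The single-predictor data below are synthetic, not the Army hospital data.

    ```python
    import math

    # Synthetic (volume, cost) pairs generated from cost = 3 * volume^0.9.
    data = [(v, 3.0 * v**0.9) for v in (10, 50, 100, 500, 1000)]
    xs = [math.log(v) for v, _ in data]
    ys = [math.log(c) for _, c in data]

    # Ordinary least squares on the logs: ln cost = ln a + b * ln volume.
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
        sum((x - xbar)**2 for x in xs)
    a = math.exp(ybar - b * xbar)

    assert abs(b - 0.9) < 1e-9   # recovered elasticity of cost w.r.t. volume
    assert abs(a - 3.0) < 1e-9
    ```

    In the study's model, additional log-transformed regressors (complexity, efficiency, time, facility type) would simply add columns to the same least-squares problem.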

  5. Mapping the genome of meta-generalized gradient approximation density functionals: The search for B97M-V

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mardirossian, Narbe; Head-Gordon, Martin, E-mail: mhg@cchem.berkeley.edu; Chemical Sciences Division, Lawrence Berkeley National Laboratory, Berkeley, California 94720

    2015-02-21

    A meta-generalized gradient approximation density functional paired with the VV10 nonlocal correlation functional is presented. The functional form is selected from more than 10^10 choices carved out of a functional space of almost 10^40 possibilities. Raw data come from training a vast number of candidate functional forms on a comprehensive training set of 1095 data points and testing the resulting fits on a comprehensive primary test set of 1153 data points. Functional forms are ranked based on their ability to reproduce the data in both the training and primary test sets with minimum empiricism, and filtered based on a set of physical constraints and an often-overlooked condition of satisfactory numerical precision with medium-sized integration grids. The resulting optimal functional form has 4 linear exchange parameters, 4 linear same-spin correlation parameters, and 4 linear opposite-spin correlation parameters, for a total of 12 fitted parameters. The final density functional, B97M-V, is further assessed on a secondary test set of 212 data points, applied to several large systems including the coronene dimer and water clusters, tested for the accurate prediction of intramolecular and intermolecular geometries, verified to have a readily attainable basis set limit, and checked for grid sensitivity. Compared to existing density functionals, B97M-V is remarkably accurate for non-bonded interactions and very satisfactory for thermochemical quantities such as atomization energies, but inherits the demonstrable limitations of existing local density functionals for barrier heights.

  6. Mapping the genome of meta-generalized gradient approximation density functionals: The search for B97M-V

    DOE PAGES

    Mardirossian, Narbe; Head-Gordon, Martin

    2015-02-20

    We present a meta-generalized gradient approximation density functional paired with the VV10 nonlocal correlation functional. The functional form is selected from more than 10^10 choices carved out of a functional space of almost 10^40 possibilities. This raw data comes from training a vast number of candidate functional forms on a comprehensive training set of 1095 data points and testing the resulting fits on a comprehensive primary test set of 1153 data points. Functional forms are ranked based on their ability to reproduce the data in both the training and primary test sets with minimum empiricism, and filtered based on a set of physical constraints and an often-overlooked condition of satisfactory numerical precision with medium-sized integration grids. The resulting optimal functional form has 4 linear exchange parameters, 4 linear same-spin correlation parameters, and 4 linear opposite-spin correlation parameters, for a total of 12 fitted parameters. The final density functional, B97M-V, is further assessed on a secondary test set of 212 data points, applied to several large systems including the coronene dimer and water clusters, tested for the accurate prediction of intramolecular and intermolecular geometries, verified to have a readily attainable basis set limit, and checked for grid sensitivity. Compared to existing density functionals, B97M-V is remarkably accurate for non-bonded interactions and very satisfactory for thermochemical quantities such as atomization energies, but inherits the demonstrable limitations of existing local density functionals for barrier heights.

  7. A function approximation approach to anomaly detection in propulsion system test data

    NASA Technical Reports Server (NTRS)

    Whitehead, Bruce A.; Hoyt, W. A.

    1993-01-01

    Ground test data from propulsion systems such as the Space Shuttle Main Engine (SSME) can be automatically screened for anomalies by a neural network. The neural network screens data after being trained with nominal data only. Given the values of 14 measurements reflecting external influences on the SSME at a given time, the neural network predicts the expected nominal value of a desired engine parameter at that time. We compared the ability of three different function-approximation techniques to perform this nominal value prediction: a novel neural network architecture based on Gaussian bar basis functions, a conventional back propagation neural network, and linear regression. These three techniques were tested with real data from six SSME ground tests containing two anomalies. The basis function network trained more rapidly than back propagation. It yielded nominal predictions with a tight enough confidence interval to distinguish anomalous deviations from the nominal fluctuations in an engine parameter. Since the function-approximation approach requires nominal training data only, it is capable of detecting unknown classes of anomalies for which training data is not available.
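    A minimal sketch of the function-approximation idea, using Gaussian radial basis functions as a stand-in for the paper's Gaussian bar architecture: the network is fitted to nominal data only, and a measurement far from the nominal prediction is flagged as anomalous. All values below are made up.

    ```python
    import math

    def gauss_solve(A, b):
        """Naive Gaussian elimination with partial pivoting (tiny systems only)."""
        n = len(A)
        M = [A[i][:] + [b[i]] for i in range(n)]
        for k in range(n):
            p = max(range(k, n), key=lambda r: abs(M[r][k]))
            M[k], M[p] = M[p], M[k]
            for r in range(k + 1, n):
                f = M[r][k] / M[k][k]
                for c in range(k, n + 1):
                    M[r][c] -= f * M[k][c]
        x = [0.0] * n
        for k in range(n - 1, -1, -1):
            x[k] = (M[k][n] - sum(M[k][c] * x[c] for c in range(k + 1, n))) / M[k][k]
        return x

    def rbf_fit(xs, ys, width=1.0):
        """Interpolating Gaussian RBF network with centers at the data points."""
        phi = lambda r: math.exp(-(r / width)**2)
        A = [[phi(abs(xi - xj)) for xj in xs] for xi in xs]
        return gauss_solve(A, ys)

    def rbf_predict(xs, w, x, width=1.0):
        return sum(wi * math.exp(-((x - xi) / width)**2) for wi, xi in zip(w, xs))

    # Nominal (input, parameter) pairs; the network learns the nominal map only.
    xs, ys = [0.0, 1.0, 2.0], [1.0, 3.0, 2.0]
    w = rbf_fit(xs, ys)
    assert all(abs(rbf_predict(xs, w, x) - y) < 1e-9 for x, y in zip(xs, ys))
    # A measurement of 10.0 where ~3.0 is nominal deviates far from prediction.
    assert abs(10.0 - rbf_predict(xs, w, 1.0)) > 3.0
    ```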

  8. A bifunctional amorphous polymer exhibiting equal linear and circular photoinduced birefringences.

    PubMed

    Royes, Jorge; Provenzano, Clementina; Pagliusi, Pasquale; Tejedor, Rosa M; Piñol, Milagros; Oriol, Luis

    2014-11-01

    The large and reversible photoinduced linear and circular birefringences of azo-compounds underlie the interest in these materials, which are potentially useful for several applications. Since the onset of the linear and circular anisotropies relies on orientational processes, which typically occur on the molecular and supramolecular length scale, respectively, a circular birefringence at least one order of magnitude lower than the linear one is usually observed. Here, the synthesis and characterization of an amorphous polymer with a dimeric repeating unit containing a cyanoazobenzene and a cyanobiphenyl moiety are reported, in which identical optical linear and circular birefringences are induced for proper light dose and ellipticity. A pump-probe technique and an analytical method based on the Stokes-Mueller formalism are used to investigate the photoinduced effects and to evaluate the anisotropies. The peculiar photoresponse of the polymer makes it a good candidate for applications in smart functional devices. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  9. Decoupling control of a five-phase fault-tolerant permanent magnet motor by radial basis function neural network inverse

    NASA Astrophysics Data System (ADS)

    Chen, Qian; Liu, Guohai; Xu, Dezhi; Xu, Liang; Xu, Gaohong; Aamir, Nazir

    2018-05-01

    This paper proposes a new decoupling control for a five-phase in-wheel fault-tolerant permanent magnet (IW-FTPM) motor drive, in which a radial basis function neural network inverse (RBF-NNI) and internal model control (IMC) are combined. The RBF-NNI system is introduced into the original system to construct a pseudo-linear system, and IMC is used as a robust controller. Hence, the newly proposed control system incorporates the merits of the IMC and RBF-NNI methods. In order to verify the proposed strategy, an IW-FTPM motor drive is designed based on a dSPACE real-time control platform. The experimental results verify that the d-axis current and the rotor speed are successfully decoupled. Moreover, the proposed motor drive exhibits strong robustness even under load torque disturbance.

  10. Numerical Technique for Analyzing Rotating Rake Mode Measurements in a Duct With Passive Treatment and Shear Flow

    NASA Technical Reports Server (NTRS)

    Dahl, Milo D.; Sutliff, Daniel L.

    2007-01-01

    A technique is presented for the analysis of measured data obtained from a rotating microphone rake system. The system is designed to measure the interaction modes of ducted fans. A Fourier analysis of the data from the rotating system results in a set of circumferential mode levels at each radial location of a microphone inside the duct. Radial basis functions are then least-squares fit to this data to obtain the radial mode amplitudes. For ducts with soft walls and mean flow, the radial basis functions must be numerically computed. The linear companion matrix method is used to obtain both the eigenvalues of interest, without an initial guess, and the radial basis functions. The governing equations allow for the mean flow to have a boundary layer at the wall. In addition, a nonlinear least-squares method is used to adjust the wall impedance to best fit the data in an attempt to use the rotating system as an in-duct wall impedance measurement tool. Simulated and measured data are used to show the effects of wall impedance and mean flow on the computed results.
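    The first step of the rake analysis, extracting circumferential mode levels by Fourier analysis of the pressure sampled around the duct, can be sketched as a plain discrete Fourier transform. The mode orders and amplitudes below are hypothetical.

    ```python
    import cmath
    import math

    # Hypothetical circumferential modes: order -> amplitude.
    N = 16
    modes = {2: 1.5, 5: 0.75}
    theta = [2 * math.pi * i / N for i in range(N)]
    # Pressure samples around the duct circumference at one radius.
    p = [sum(a * math.cos(m * t) for m, a in modes.items()) for t in theta]

    # DFT: the magnitude of each circumferential harmonic gives the mode level.
    levels = [abs(sum(p[i] * cmath.exp(-1j * m * theta[i]) for i in range(N))) * 2 / N
              for m in range(N // 2)]

    assert abs(levels[2] - 1.5) < 1e-9
    assert abs(levels[5] - 0.75) < 1e-9
    ```

    In the actual technique this step is repeated at each microphone radius, after which the numerically computed radial basis functions are least-squares fitted across radii to recover the radial mode amplitudes.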

  11. Functional helicoidal model of DNA molecule with elastic nonlinearity

    NASA Astrophysics Data System (ADS)

    Tseytlin, Y. M.

    2013-06-01

    We constructed a functional DNA molecule model on the basis of a flexible helicoidal sensor, specifically, a pretwisted hollow nano-strip. We study in this article the helicoidal nano- sensor model with a pretwisted strip axial extension corresponding to the overstretching transition of DNA from dsDNA to ssDNA. Our model and the DNA molecule have similar geometrical and nonlinear mechanical features unlike models based on an elastic rod, accordion bellows, or an imaginary combination of "multiple soft and hard linear springs", presented in some recent publications.

  12. Particle-based and meshless methods with Aboria

    NASA Astrophysics Data System (ADS)

    Robinson, Martin; Bruna, Maria

    Aboria is a powerful and flexible C++ library for the implementation of particle-based numerical methods. The particles in such methods can represent actual particles (e.g. Molecular Dynamics) or abstract particles used to discretise a continuous function over a domain (e.g. Radial Basis Functions). Aboria provides a particle container, compatible with the Standard Template Library, spatial search data structures, and a Domain Specific Language to specify non-linear operators on the particle set. This paper gives an overview of Aboria's design, an example of use, and a performance benchmark.

  13. NMR shieldings from density functional perturbation theory: GIPAW versus all-electron calculations

    NASA Astrophysics Data System (ADS)

    de Wijs, G. A.; Laskowski, R.; Blaha, P.; Havenith, R. W. A.; Kresse, G.; Marsman, M.

    2017-02-01

    We present a benchmark of the density functional linear response calculation of NMR shieldings within the gauge-including projector-augmented-wave method against all-electron augmented-plane-wave+local-orbital and uncontracted Gaussian basis set results for NMR shieldings in molecular and solid state systems. In general, excellent agreement between the aforementioned methods is obtained. Scalar relativistic effects are shown to be quite large for nuclei in molecules in the deshielded limit. The small component makes up a substantial part of the relativistic corrections.

  14. NMR shieldings from density functional perturbation theory: GIPAW versus all-electron calculations.

    PubMed

    de Wijs, G A; Laskowski, R; Blaha, P; Havenith, R W A; Kresse, G; Marsman, M

    2017-02-14

    We present a benchmark of the density functional linear response calculation of NMR shieldings within the gauge-including projector-augmented-wave method against all-electron augmented-plane-wave+local-orbital and uncontracted Gaussian basis set results for NMR shieldings in molecular and solid state systems. In general, excellent agreement between the aforementioned methods is obtained. Scalar relativistic effects are shown to be quite large for nuclei in molecules in the deshielded limit. The small component makes up a substantial part of the relativistic corrections.

  15. Robust estimation for ordinary differential equation models.

    PubMed

    Cao, J; Wang, L; Xu, J

    2011-12-01

    Applied scientists often like to use ordinary differential equations (ODEs) to model complex dynamic processes that arise in biology, engineering, medicine, and many other areas. It is interesting but challenging to estimate ODE parameters from noisy data, especially when the data have some outliers. We propose a robust method to address this problem. The dynamic process is represented with a nonparametric function, which is a linear combination of basis functions. The nonparametric function is estimated by a robust penalized smoothing method. The penalty term is defined with the parametric ODE model, which controls the roughness of the nonparametric function and maintains the fidelity of the nonparametric function to the ODE model. The basis coefficients and ODE parameters are estimated in two nested levels of optimization. The coefficient estimates are treated as an implicit function of ODE parameters, which enables one to derive the analytic gradients for optimization using the implicit function theorem. Simulation studies show that the robust method gives satisfactory estimates for the ODE parameters from noisy data with outliers. The robust method is demonstrated by estimating a predator-prey ODE model from real ecological data. © 2011, The International Biometric Society.
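    The two nested levels can be sketched for the toy model dx/dt = -θx: for fixed θ, the penalized basis-expansion fit is linear least squares in the coefficients (inner level), and θ is then chosen to minimize the data misfit (outer level; here a grid scan stands in for the gradient-based optimization via the implicit function theorem). A hedged sketch with noise-free synthetic data, a monomial basis, and an arbitrary penalty weight.

    ```python
    import math

    def gauss_solve(A, b):
        """Naive Gaussian elimination with partial pivoting (tiny systems only)."""
        n = len(A)
        M = [A[i][:] + [b[i]] for i in range(n)]
        for k in range(n):
            p = max(range(k, n), key=lambda r: abs(M[r][k]))
            M[k], M[p] = M[p], M[k]
            for r in range(k + 1, n):
                f = M[r][k] / M[k][k]
                for c in range(k, n + 1):
                    M[r][c] -= f * M[k][c]
        x = [0.0] * n
        for k in range(n - 1, -1, -1):
            x[k] = (M[k][n] - sum(M[k][c] * x[c] for c in range(k + 1, n))) / M[k][k]
        return x

    # Basis functions: monomials 1, t, t^2, t^3, t^4, and their derivatives.
    def phi(t): return [t**j for j in range(5)]
    def dphi(t): return [0.0] + [j * t**(j - 1) for j in range(1, 5)]

    tobs = [i / 10 for i in range(11)]
    y = [math.exp(-2 * t) for t in tobs]   # noise-free data, true theta = 2
    quad = [i / 50 for i in range(51)]     # quadrature points for the ODE penalty
    lam = 1.0                              # arbitrary penalty weight

    def data_sse(theta):
        # Inner level: for fixed theta the penalized fit is linear in c.
        rows = [phi(t) for t in tobs] + \
               [[math.sqrt(lam) * (dp + theta * p) for p, dp in zip(phi(s), dphi(s))]
                for s in quad]
        rhs = y + [0.0] * len(quad)
        AtA = [[sum(r[i] * r[j] for r in rows) for j in range(5)] for i in range(5)]
        Atb = [sum(r[i] * v for r, v in zip(rows, rhs)) for i in range(5)]
        c = gauss_solve(AtA, Atb)
        return sum((sum(ci * pi for ci, pi in zip(c, phi(t))) - yi)**2
                   for t, yi in zip(tobs, y))

    # Outer level: profile the data misfit over theta.
    best = min((data_sse(th / 10), th / 10) for th in range(1, 41))[1]
    assert abs(best - 2.0) < 0.2
    ```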

  16. Weak solution concept and Galerkin's matrix for the exterior of an oblate ellipsoid of revolution in the representation of the Earth's gravity potential by buried masses

    NASA Astrophysics Data System (ADS)

    Holota, Petr; Nesvadba, Otakar

    2017-04-01

    The paper is motivated by the role of boundary value problems in Earth's gravity field studies. The discussion focuses on Neumann's problem formulated for the exterior of an oblate ellipsoid of revolution as this is considered a basis for an iteration solution of the linear gravimetric boundary value problem in the determination of the disturbing potential. The approach follows the concept of the weak solution and Galerkin's approximations are applied. This means that the solution of the problem is approximated by linear combinations of basis functions with scalar coefficients. The construction of Galerkin's matrix for basis functions generated by elementary potentials (point masses) is discussed. Ellipsoidal harmonics are used as a natural tool and the elementary potentials are expressed by means of series of ellipsoidal harmonics. The problem, however, is the summation of the series that represent the entries of Galerkin's matrix. It is difficult to reduce the number of summation indices since in the ellipsoidal case there is no analogue to the addition theorem known for spherical harmonics. Therefore, the straightforward application of series of ellipsoidal harmonics is complemented by deeper relations contained in the theory of ordinary differential equations of second order and in the theory of Legendre's functions. Subsequently, also hypergeometric functions and series are used. Moreover, within some approximations the entries are split into parts. Some of the resulting series may be summed relatively easily, apart from technical tricks. For the remaining series the summation was converted to elliptic integrals. The approach made it possible to deduce a closed (though approximate) form representation of the entries in Galerkin's matrix. The result rests on concepts and methods of mathematical analysis. In the paper it is confronted with a direct numerical approach applied for the implementation of Legendre's functions. 
The computation of the entries is more demanding in this case, but conceptually it avoids approximations. Finally, some specific features associated with function bases generated by elementary potentials in the case of an ellipsoidal solution domain are illustrated and discussed.
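    The structure of a Galerkin approximation with closed-form matrix entries can be shown on a much simpler analogue: the weak solution of -u'' = 1 on (0, 1) with u(0) = u(1) = 0 and basis functions sin(kπx). Here the Galerkin matrix is diagonal and its entries are available in closed form, which is the spirit (in a far easier setting) of the closed-form entries derived in the paper.

    ```python
    import math

    K = 7
    # Stiffness entries: integral of phi_k' phi_j' dx = (k*pi)^2 / 2 * delta_kj.
    A = [(k * math.pi)**2 / 2 for k in range(1, K + 1)]
    # Load entries: integral of phi_k dx = (1 - cos(k*pi)) / (k*pi).
    b = [(1 - math.cos(k * math.pi)) / (k * math.pi) for k in range(1, K + 1)]
    # Diagonal system: the Galerkin coefficients are simply b_k / A_kk.
    c = [bk / ak for ak, bk in zip(A, b)]

    def u_h(x):
        return sum(ck * math.sin((k + 1) * math.pi * x) for k, ck in enumerate(c))

    # Exact solution is u(x) = x(1 - x)/2, so u(0.5) = 0.125.
    assert abs(u_h(0.5) - 0.125) < 1e-2
    ```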

  17. Interpolation problem for the solutions of linear elasticity equations based on monogenic functions

    NASA Astrophysics Data System (ADS)

    Grigor'ev, Yuri; Gürlebeck, Klaus; Legatiuk, Dmitrii

    2017-11-01

    Interpolation is an important tool for many practical applications, and very often it is beneficial to interpolate not only with a simple basis system, but rather with solutions of a certain differential equation, e.g. the elasticity equation. A typical example of such interpolation are the collocation methods widely used in practice. It is known that interpolation theory is fully developed in the framework of classical complex analysis. However, in quaternionic analysis, which shows many analogies to complex analysis, the situation is more complicated due to the non-commutative multiplication. Thus, a fundamental theorem of algebra is not available, and standard tools from linear algebra cannot be applied in the usual way. To overcome these problems, a special system of monogenic polynomials, the so-called Pseudo Complex Polynomials, sharing some properties of complex powers, is used. In this paper, we present an approach to deal with the interpolation problem, where solutions of elasticity equations in three dimensions are used as an interpolation basis.

  18. Blind compressive sensing dynamic MRI

    PubMed Central

    Lingala, Sajan Goud; Jacob, Mathews

    2013-01-01

    We propose a novel blind compressive sensing (BCS) framework to recover dynamic magnetic resonance images from undersampled measurements. This scheme models the dynamic signal as a sparse linear combination of temporal basis functions, chosen from a large dictionary. In contrast to classical compressed sensing, the BCS scheme simultaneously estimates the dictionary and the sparse coefficients from the undersampled measurements. Apart from the sparsity of the coefficients, the key difference of the BCS scheme from current low rank methods is the non-orthogonal nature of the dictionary basis functions. Since the number of degrees of freedom of the BCS model is smaller than that of the low-rank methods, it provides improved reconstructions at high acceleration rates. We formulate the reconstruction as a constrained optimization problem; the objective function is the linear combination of a data consistency term and a sparsity promoting ℓ1 prior of the coefficients. The Frobenius norm dictionary constraint is used to avoid scale ambiguity. We introduce a simple and efficient majorize-minimize algorithm, which decouples the original criterion into three simpler subproblems. An alternating minimization strategy is used, where we cycle through the minimization of the three simpler problems. This algorithm is seen to be considerably faster than approaches that alternate between sparse coding and dictionary estimation, as well as the extension of the K-SVD dictionary learning scheme. The use of the ℓ1 penalty and Frobenius norm dictionary constraint enables the attenuation of insignificant basis functions compared to the ℓ0 norm and column norm constraint assumed in most dictionary learning algorithms; this is especially important since the number of basis functions that can be reliably estimated is restricted by the available measurements. We also observe that the proposed scheme is more robust to local minima compared to the K-SVD method, which relies on greedy sparse coding. 
Our phase transition experiments demonstrate that the BCS scheme provides much better recovery rates than classical Fourier-based CS schemes, while being only marginally worse than the dictionary aware setting. Since the overhead in additionally estimating the dictionary is low, this method can be very useful in dynamic MRI applications, where the signal is not sparse in known dictionaries. We demonstrate the utility of the BCS scheme in accelerating contrast enhanced dynamic data. We observe superior reconstruction performance with the BCS scheme in comparison to existing low rank and compressed sensing schemes. PMID:23542951
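    Two of the simple subproblems in such alternating schemes have closed-form updates: soft-thresholding for the ℓ1-penalized coefficients, and norm-constrained rescaling for the dictionary. A hedged sketch of just these two proximal steps; the full majorize-minimize alternation and the data-consistency update are omitted.

    ```python
    def soft_threshold(x, t):
        """Proximal operator of the l1 penalty: attenuates insignificant
        coefficients, shrinking small values exactly to zero."""
        return 0.0 if abs(x) <= t else (x - t if x > 0 else x + t)

    def normalize_rows(V):
        """Rescale each dictionary atom to unit norm (a simple way to enforce
        a Frobenius-type dictionary constraint and avoid scale ambiguity)."""
        out = []
        for row in V:
            n = sum(v * v for v in row) ** 0.5
            out.append([v / n for v in row] if n > 0 else row)
        return out

    assert soft_threshold(3.0, 1.0) == 2.0     # large coefficient shrunk
    assert soft_threshold(-0.5, 1.0) == 0.0    # small coefficient zeroed
    V = normalize_rows([[3.0, 4.0]])
    assert abs(V[0][0] - 0.6) < 1e-12 and abs(V[0][1] - 0.8) < 1e-12
    ```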

  19. Generic Kalman Filter Software

    NASA Technical Reports Server (NTRS)

    Lisano, Michael E., II; Crues, Edwin Z.

    2005-01-01

    The Generic Kalman Filter (GKF) software provides a standard basis for the development of application-specific Kalman-filter programs. Historically, Kalman filters have been implemented by customized programs that must be written, coded, and debugged anew for each unique application, then tested and tuned with simulated or actual measurement data. Total development times for typical Kalman-filter application programs have ranged from weeks to months. The GKF software can simplify the development process and reduce the development time by eliminating the need to re-create the fundamental implementation of the Kalman filter for each new application. The GKF software is written in the ANSI C programming language. It contains a generic Kalman-filter-development directory that, in turn, contains a code for a generic Kalman filter function; more specifically, it contains a generically designed and generically coded implementation of linear, linearized, and extended Kalman filtering algorithms, including algorithms for state- and covariance-update and -propagation functions. The mathematical theory that underlies the algorithms is well known and has been reported extensively in the open technical literature. Also contained in the directory are a header file that defines generic Kalman-filter data structures and prototype functions and template versions of application-specific subfunction and calling navigation/estimation routine code and headers. Once the user has provided a calling routine and the required application-specific subfunctions, the application-specific Kalman-filter software can be compiled and executed immediately. During execution, the generic Kalman-filter function is called from a higher-level navigation or estimation routine that preprocesses measurement data and post-processes output data. 
The generic Kalman-filter function uses the aforementioned data structures and five implementation- specific subfunctions, which have been developed by the user on the basis of the aforementioned templates. The GKF software can be used to develop many different types of unfactorized Kalman filters. A developer can choose to implement either a linearized or an extended Kalman filter algorithm, without having to modify the GKF software. Control dynamics can be taken into account or neglected in the filter-dynamics model. Filter programs developed by use of the GKF software can be made to propagate equations of motion for linear or nonlinear dynamical systems that are deterministic or stochastic. In addition, filter programs can be made to operate in user-selectable "covariance analysis" and "propagation-only" modes that are useful in design and development stages.
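    The core state- and covariance-update and -propagation functions reduce, in the scalar case, to a few lines. This is a generic textbook sketch, not the GKF code (which is in ANSI C and handles matrix-valued states):

    ```python
    def predict(x, P, F, Q):
        """Propagate state estimate x and covariance P through the dynamics."""
        return F * x, F * P * F + Q

    def update(x, P, H, R, z):
        """Fold in measurement z with model H and measurement noise R."""
        S = H * P * H + R        # innovation covariance
        K = P * H / S            # Kalman gain
        x = x + K * (z - H * x)  # state update
        P = (1 - K * H) * P      # covariance update
        return x, P

    # Estimate a constant from repeated unit measurements.
    x, P = 0.0, 1.0
    for z in [1.0, 1.0, 1.0]:
        x, P = predict(x, P, F=1.0, Q=0.0)
        x, P = update(x, P, H=1.0, R=1.0, z=z)

    assert abs(x - 0.75) < 1e-12   # estimate converging toward the measurement
    assert P < 1.0                 # uncertainty shrinks with each update
    ```

    A "propagation-only" mode, as mentioned above, would simply call predict without update; a "covariance analysis" mode tracks P alone.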

  20. Propagation of uncertainty by Monte Carlo simulations in case of basic geodetic computations

    NASA Astrophysics Data System (ADS)

    Wyszkowska, Patrycja

    2017-12-01

    The determination of the accuracy of functions of measured or adjusted values may be a problem in geodetic computations. The general law of covariance propagation or, in the case of uncorrelated observations, the propagation of variance (the Gaussian formula) is commonly used for that purpose. That approach is theoretically justified for linear functions. In the case of non-linear functions, the first-order Taylor series expansion is usually used, but that solution is affected by the expansion error. The aim of the study is to determine the applicability of the general variance propagation law in the case of the non-linear functions used in basic geodetic computations. The paper presents the errors which result from neglecting the higher-order terms and determines the range of validity of such a simplification. The basis of the analysis is the comparison of the results obtained by the law of propagation of variance and by a probabilistic approach, namely Monte Carlo simulations. Both methods are used to determine the accuracy of the following geodetic computations: the Cartesian coordinates of an unknown point in the three-point resection problem, azimuths and distances derived from Cartesian coordinates, and height differences in trigonometric and geometric levelling. These simulations and the analysis of the results confirm the possibility of applying the general law of variance propagation in basic geodetic computations even if the functions are non-linear. The only condition is that the accuracy of the observations cannot be too low. Generally, this is not a problem with present geodetic instruments.
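    The comparison can be sketched for the non-linear distance function d = sqrt(x² + y²): first-order propagation gives σ_d in closed form, and a Monte Carlo simulation agrees closely when the observation errors are small. The coordinates and standard deviation below are illustrative, not the paper's test cases.

    ```python
    import math
    import random

    random.seed(42)
    x0, y0, sigma = 100.0, 200.0, 0.01
    d0 = math.hypot(x0, y0)

    # First-order (Gaussian) propagation: sigma_d^2 = (x/d)^2 s^2 + (y/d)^2 s^2.
    sigma_lin = math.sqrt((x0 / d0)**2 + (y0 / d0)**2) * sigma

    # Monte Carlo propagation: perturb the coordinates, recompute the distance.
    ds = [math.hypot(random.gauss(x0, sigma), random.gauss(y0, sigma))
          for _ in range(20000)]
    mean = sum(ds) / len(ds)
    sigma_mc = math.sqrt(sum((d - mean)**2 for d in ds) / (len(ds) - 1))

    # For well-measured inputs the two approaches agree closely.
    assert abs(sigma_mc - sigma_lin) / sigma_lin < 0.1
    ```

    Increasing sigma (lower-accuracy observations) widens the gap between the two estimates, which is exactly the condition on observation accuracy noted in the abstract.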

  1. Polarized atomic orbitals for self-consistent field electronic structure calculations

    NASA Astrophysics Data System (ADS)

    Lee, Michael S.; Head-Gordon, Martin

    1997-12-01

    We present a new self-consistent field approach which, given a large "secondary" basis set of atomic orbitals, variationally optimizes molecular orbitals in terms of a small "primary" basis set of distorted atomic orbitals, which are simultaneously optimized. If the primary basis is taken as a minimal basis, the resulting functions are termed polarized atomic orbitals (PAO's) because they are valence (or core) atomic orbitals which have been distorted or polarized in an optimal way for their molecular environment. The PAO's derive their flexibility from the fact that they are formed from atom-centered linear combinations of the larger set of secondary atomic orbitals. The variational conditions satisfied by PAO's are defined, and an iterative method for performing a PAO-SCF calculation is introduced. We compare the PAO-SCF approach against full SCF calculations for the energies, dipoles, and molecular geometries of various molecules. The PAO's are potentially useful for studying large systems that are currently intractable with larger than minimal basis sets, as well as offering potential interpretative benefits relative to calculations in extended basis sets.

  2. Vertical spatial coherence model for a transient signal forward-scattered from the sea surface

    USGS Publications Warehouse

    Yoerger, E.J.; McDaniel, S.T.

    1996-01-01

    The treatment of acoustic energy forward scattered from the sea surface, which is modeled as a random communications scatter channel, is the basis for developing an expression for the time-dependent coherence function across a vertical receiving array. The derivation of this model uses linear filter theory applied to the Fresnel-corrected Kirchhoff approximation in obtaining an equation for the covariance function for the forward-scattered problem. The resulting formulation is used to study the dependence of the covariance on experimental and environmental factors. The modeled coherence functions are then formed for various geometrical and environmental parameters and compared to experimental data.

  3. The large-scale three-point correlation function of the SDSS BOSS DR12 CMASS galaxies

    NASA Astrophysics Data System (ADS)

    Slepian, Zachary; Eisenstein, Daniel J.; Beutler, Florian; Chuang, Chia-Hsun; Cuesta, Antonio J.; Ge, Jian; Gil-Marín, Héctor; Ho, Shirley; Kitaura, Francisco-Shu; McBride, Cameron K.; Nichol, Robert C.; Percival, Will J.; Rodríguez-Torres, Sergio; Ross, Ashley J.; Scoccimarro, Román; Seo, Hee-Jong; Tinker, Jeremy; Tojeiro, Rita; Vargas-Magaña, Mariana

    2017-06-01

    We report a measurement of the large-scale three-point correlation function of galaxies using the largest data set for this purpose to date, 777 202 luminous red galaxies in the Sloan Digital Sky Survey Baryon Acoustic Oscillation Spectroscopic Survey (SDSS BOSS) DR12 CMASS sample. This work exploits the novel algorithm of Slepian & Eisenstein to compute the multipole moments of the 3PCF in O(N^2) time, with N the number of galaxies. Leading-order perturbation theory models the data well in a compressed basis where one triangle side is integrated out. We also present an accurate and computationally efficient means of estimating the covariance matrix. With these techniques, the redshift-space linear and non-linear bias are measured, with 2.6 per cent precision on the former if σ8 is fixed. The data also indicate a 2.8σ preference for the BAO, confirming the presence of BAO in the three-point function.
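    The O(N^2) bookkeeping that the Slepian & Eisenstein algorithm exploits can be illustrated for the low multipoles with synthetic data: summing Legendre polynomials over all pairs of unit vectors naively costs one term per pair, but the same sums follow from moments accumulated in a single pass over the neighbours. A minimal sketch (random directions, not survey data, and not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 400
rhat = rng.normal(size=(N, 3))
rhat /= np.linalg.norm(rhat, axis=1, keepdims=True)   # unit vectors to neighbours

# naive sums over all pairs of Legendre polynomials P_l(cos theta_jk)
cos = rhat @ rhat.T
naive_l1 = cos.sum()
naive_l2 = ((3.0 * cos**2 - 1.0) / 2.0).sum()

# accumulated moments: one pass over neighbours, no explicit pair loop
v = rhat.sum(axis=0)                 # first moment (vector)
M = rhat.T @ rhat                    # second moment (3 x 3 tensor)
fast_l1 = v @ v
fast_l2 = (3.0 * (M * M).sum() - N**2) / 2.0

assert np.isclose(naive_l1, fast_l1)
assert np.isclose(naive_l2, fast_l2)
```

    The same idea, carried out with spherical-harmonic coefficients accumulated per radial bin, underlies the full multipole estimator of the 3PCF.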

  4. Plausibility assessment of a 2-state self-paced mental task-based BCI using the no-control performance analysis.

    PubMed

    Faradji, Farhad; Ward, Rabab K; Birch, Gary E

    2009-06-15

    The feasibility of having a self-paced brain-computer interface (BCI) based on mental tasks is investigated. The EEG signals of four subjects performing five mental tasks each are used in the design of a 2-state self-paced BCI. The output of the BCI should only be activated when the subject performs a specific mental task and should remain inactive otherwise. For each subject and each task, the feature coefficient and the classifier that yield the best performance are selected, using the autoregressive coefficients as the features. The classifier with a zero false positive rate and the highest true positive rate is selected as the best classifier. The classifiers tested include: linear discriminant analysis, quadratic discriminant analysis, Mahalanobis discriminant analysis, support vector machine, and radial basis function neural network. The results show that: (1) some classifiers obtained the desired zero false positive rate; (2) the linear discriminant analysis classifier does not yield acceptable performance; (3) the quadratic discriminant analysis classifier outperforms the Mahalanobis discriminant analysis classifier and performs almost as well as the radial basis function neural network; and (4) the support vector machine classifier has the highest true positive rates but unfortunately has nonzero false positive rates in most cases.
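    The selection rule described here, discard any classifier with a nonzero false positive rate and then maximize the true positive rate, can be sketched with a synthetic one-dimensional feature standing in for the autoregressive coefficients; the data and the threshold "classifiers" below are illustrative assumptions, not the study's pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)
# synthetic 1-D feature: task periods score higher than no-control periods
pos = rng.normal(2.0, 1.0, 200)    # intentional-control trials (should activate)
neg = rng.normal(-2.0, 1.0, 200)   # no-control periods (must never activate)

def rates(threshold):
    return np.mean(pos > threshold), np.mean(neg > threshold)  # (TPR, FPR)

# candidate "classifiers" are thresholds; keep those with zero false positives,
# then pick the one with the highest true positive rate
candidates = np.linspace(-5.0, 5.0, 201)
zero_fp = [(t,) + rates(t) for t in candidates if rates(t)[1] == 0.0]
best_t, best_tpr, best_fpr = max(zero_fp, key=lambda z: z[1])

assert best_fpr == 0.0
assert best_tpr > 0.5
```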

  5. Encoding of head acceleration in vestibular neurons. I. Spatiotemporal response properties to linear acceleration

    NASA Technical Reports Server (NTRS)

    Bush, G. A.; Perachio, A. A.; Angelaki, D. E.

    1993-01-01

    1. Extracellular recordings were made in and around the medial vestibular nuclei in decerebrated rats. Neurons were functionally identified according to their semicircular canal input on the basis of their responses to angular head rotations around the yaw, pitch, and roll head axes. Those cells responding to angular acceleration were classified as either horizontal semicircular canal-related (HC) or vertical semicircular canal-related (VC) neurons. The HC neurons were further characterized as either type I or type II, depending on the direction of rotation producing excitation. Cells that lacked a response to angular head acceleration, but exhibited sensitivity to a change in head position, were classified as purely otolith organ-related (OTO) neurons. All vestibular neurons were then tested for their response to sinusoidal linear translation in the horizontal head plane. 2. Convergence of macular and canal inputs onto central vestibular nuclei neurons occurred in 73% of the type I HC, 79% of the type II HC, and 86% of the VC neurons. Out of the 223 neurons identified as receiving macular input, 94 neurons were further studied, and their spatiotemporal response properties to sinusoidal stimulation with pure linear acceleration were quantified. Data were obtained from 33 type I HC, 22 type II HC, 22 VC, and 17 OTO neurons. 3. For each neuron the angle of the translational stimulus vector was varied in 15, 30, or 45 degree increments in the horizontal head plane. In all tested neurons, a direction of maximum sensitivity was identified. An interesting difference among neurons was their response to translation along the direction perpendicular to the one that produced the maximum response (the "null" direction). For the majority of neurons tested, it was possible to evoke a nonzero response during stimulation along the null direction; these responses had phases that varied as a function of stimulus direction. 4. 
These spatiotemporal response properties were quantified in two independent ways. First, the data were evaluated on the basis of the traditional one-dimensional principle governed by the "cosine gain rule" and constant response phase at different stimulus orientations. Second, the response gain and phase values that were empirically determined for each orientation of the applied linear stimulus vector were fitted on the basis of a newly developed formalism that treats neuronal responses as exhibiting two-dimensional spatial sensitivity. Thus two response vectors were determined for each neuron on the basis of its response gain and phase at different stimulus directions in the horizontal head plane.(ABSTRACT TRUNCATED AT 400 WORDS).
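    The contrast between the one-dimensional "cosine gain rule" and the two-vector (two-dimensional) description can be sketched numerically; the gains and phases below are made-up values, not recorded data. A cell whose two response vectors share a phase has a genuine null direction, while a phase difference removes the null and makes the response phase drift with stimulus direction:

```python
import numpy as np

def gain_phase(theta, g1, g2, phi1, phi2):
    # response to translation along direction theta, modelled as the sum of two
    # orthogonal response vectors with independent gains and phases:
    #   r(theta, t) = g1*cos(theta)*sin(wt + phi1) + g2*sin(theta)*sin(wt + phi2)
    amp = g1 * np.cos(theta) * np.exp(1j * phi1) + g2 * np.sin(theta) * np.exp(1j * phi2)
    return np.abs(amp), np.angle(amp)

theta = np.linspace(0, np.pi, 181)

# one-dimensional ("cosine rule") neuron: both vectors in phase -> a true null
g_1d, ph_1d = gain_phase(theta, 1.0, 0.3, 0.0, 0.0)
# two-dimensional neuron: a phase difference -> no direction of zero response,
# and the response phase varies with stimulus direction
g_2d, ph_2d = gain_phase(theta, 1.0, 0.3, 0.0, np.pi / 2)

assert g_1d.min() < 0.02          # cosine-tuned cell has a null direction
assert g_2d.min() > 0.1           # two-vector cell responds in every direction
assert np.ptp(ph_2d) > 1.0        # and its phase drifts strongly with direction
```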

  6. Basis material decomposition method for material discrimination with a new spectrometric X-ray imaging detector

    NASA Astrophysics Data System (ADS)

    Brambilla, A.; Gorecki, A.; Potop, A.; Paulus, C.; Verger, L.

    2017-08-01

    Energy sensitive photon counting X-ray detectors provide energy dependent information which can be exploited for material identification. The attenuation of an X-ray beam as a function of energy depends on the effective atomic number Zeff and the density. However, the measured attenuation is degraded by imperfections of the detector response such as charge sharing or pile-up. These imperfections lead to non-linearities that limit the benefits of energy resolved imaging. This work aims to implement a basis material decomposition method which overcomes these problems. Basis material decomposition is based on the fact that the attenuation of any material or complex object can be accurately reproduced by a combination of equivalent thicknesses of basis materials. Our method is based on a calibration phase to learn the response of the detector for different combinations of thicknesses of the basis materials. The decomposition algorithm finds the thicknesses of basis materials whose spectrum is closest to the measurement, using a maximum likelihood criterion that assumes a Poisson distribution of photon counts in each energy bin. The method was used with an ME100 linear array spectrometric X-ray imager to decompose different plastic materials on a polyethylene and polyvinyl chloride basis. The resulting equivalent thicknesses were used to estimate the effective atomic number Zeff. The results are in good agreement with the theoretical Zeff, regardless of the plastic sample thickness. The linear behaviour of the equivalent lengths makes it possible to process overlapped materials. Moreover, the method was tested with a three-material basis by adding gadolinium, whose K-edge is not taken into account by the other two materials. The proposed method has the advantage that it can be used with any number of energy channels, taking full advantage of the high energy resolution of the ME100 detector. 
Although in principle two channels are sufficient, experimental measurements show that the use of a high number of channels significantly improves the accuracy of decomposition by reducing noise and systematic bias.
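    A toy version of the decomposition step may help fix ideas: with made-up attenuation coefficients for two basis materials and a noise-free measured spectrum, a grid search over equivalent thicknesses minimizing the Poisson negative log-likelihood recovers the true pair. This is only a sketch of the criterion, not the calibrated ME100 processing:

```python
import numpy as np

# made-up energy-binned attenuation coefficients (1/cm) for two basis materials
mu_pe = np.array([0.50, 0.40, 0.32, 0.26, 0.22])   # "polyethylene-like"
mu_pvc = np.array([1.60, 1.10, 0.80, 0.60, 0.45])  # "PVC-like"
n0 = 1e5 * np.ones(5)                              # incident counts per bin

def expected_counts(t_pe, t_pvc):
    return n0 * np.exp(-mu_pe * t_pe - mu_pvc * t_pvc)

def poisson_nll(counts, lam):
    # Poisson negative log-likelihood (up to a constant), one term per bin
    return np.sum(lam - counts * np.log(lam))

# "measurement": expected spectrum of an unknown object (noise-free for clarity)
t_true = (1.3, 0.7)
measured = expected_counts(*t_true)

# decompose: grid search over equivalent thicknesses of the basis materials
grid = np.linspace(0.0, 2.0, 201)
best = min((poisson_nll(measured, expected_counts(a, b)), a, b)
           for a in grid for b in grid)
_, t_pe_hat, t_pvc_hat = best

assert abs(t_pe_hat - 1.3) < 0.02 and abs(t_pvc_hat - 0.7) < 0.02
```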

  7. A rational model of function learning.

    PubMed

    Lucas, Christopher G; Griffiths, Thomas L; Williams, Joseph J; Kalish, Michael L

    2015-10-01

    Theories of how people learn relationships between continuous variables have tended to focus on two possibilities: one, that people are estimating explicit functions, or, two, that they are performing associative learning supported by similarity. We provide a rational analysis of function learning, drawing on work on regression in machine learning and statistics. Using the equivalence of Bayesian linear regression and Gaussian processes, which provide a probabilistic basis for similarity-based function learning, we show that learning explicit rules and using similarity can be seen as two views of one solution to this problem. We use this insight to define a rational model of human function learning that combines the strengths of both approaches and accounts for a wide variety of experimental results.
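    The equivalence invoked here is easy to verify numerically: Bayesian linear regression with a weight prior N(0, alpha*I) and a Gaussian process with the linear kernel k(x, x') = alpha * x.x' give identical predictive means. A small check with synthetic data and arbitrary hyperparameters:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))          # training inputs
w = np.array([1.0, -2.0, 0.5])
y = X @ w + 0.1 * rng.normal(size=20)
Xs = rng.normal(size=(5, 3))          # test inputs

alpha, sig2 = 2.0, 0.01               # prior variance on weights, noise variance

# Bayesian linear regression: posterior predictive mean
A = X.T @ X / sig2 + np.eye(3) / alpha
blr_mean = Xs @ np.linalg.solve(A, X.T @ y / sig2)

# Gaussian-process regression with the linear kernel k(x, x') = alpha * x.x'
K = alpha * X @ X.T
ks = alpha * Xs @ X.T
gp_mean = ks @ np.linalg.solve(K + sig2 * np.eye(20), y)

assert np.allclose(blr_mean, gp_mean)  # the two views coincide exactly
```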

  8. Electronic and spectroscopic characterizations of SNP isomers

    NASA Astrophysics Data System (ADS)

    Trabelsi, Tarek; Al Mogren, Muneerah Mogren; Hochlaf, Majdi; Francisco, Joseph S.

    2018-02-01

    High-level ab initio electronic structure calculations were performed to characterize SNP isomers. In addition to the known linear SNP, cyc-PSN, and linear SPN isomers, we identified a fourth isomer, linear PSN, which is located ~2.4 eV above the linear SNP isomer. The low-lying singlet and triplet electronic states of the linear SNP and SPN isomers were investigated using a multi-reference configuration interaction method and a large basis set. Several bound electronic states were identified. However, their upper rovibrational levels were predicted to pre-dissociate, leading to S + PN and P + NS products, and multi-step pathways were discovered. For the ground states, a set of spectroscopic parameters was derived using standard and explicitly correlated coupled-cluster methods in conjunction with augmented correlation-consistent basis sets extrapolated to the complete basis set limit. We also considered scalar relativistic and core-valence effects. For the linear isomers, the rovibrational spectra were deduced after generation of their 3D potential energy surfaces along the stretching and bending coordinates and variational treatments of the nuclear motions.

  9. Recovery of sparse translation-invariant signals with continuous basis pursuit

    PubMed Central

    Ekanadham, Chaitanya; Tranchina, Daniel; Simoncelli, Eero

    2013-01-01

    We consider the problem of decomposing a signal into a linear combination of features, each a continuously translated version of one of a small set of elementary features. Although these constituents are drawn from a continuous family, most current signal decomposition methods rely on a finite dictionary of discrete examples selected from this family (e.g., shifted copies of a set of basic waveforms), and apply sparse optimization methods to select and solve for the relevant coefficients. Here, we generate a dictionary that includes auxiliary interpolation functions that approximate translates of features via adjustment of their coefficients. We formulate a constrained convex optimization problem, in which the full set of dictionary coefficients represents a linear approximation of the signal, the auxiliary coefficients are constrained so as to only represent translated features, and sparsity is imposed on the primary coefficients using an L1 penalty. The basis pursuit denoising (BP) method may be seen as a special case, in which the auxiliary interpolation functions are omitted, and we thus refer to our methodology as continuous basis pursuit (CBP). We develop two implementations of CBP for a one-dimensional translation-invariant source, one using a first-order Taylor approximation, and another using a form of trigonometric spline. We examine the tradeoff between sparsity and signal reconstruction accuracy in these methods, demonstrating empirically that trigonometric CBP substantially outperforms Taylor CBP, which in turn offers substantial gains over ordinary BP. In addition, the CBP bases can generally achieve equally good or better approximations with much coarser sampling than BP, leading to a reduction in dictionary dimensionality. PMID:24352562
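    A minimal sketch of first-order Taylor CBP, using a Gaussian bump as the sole elementary feature and ISTA as the solver (with the L1 penalty applied to the primary coefficients only); both are illustrative choices rather than the paper's implementation. The auxiliary (derivative) coefficient at the selected atom recovers the fractional shift:

```python
import numpy as np

t = np.arange(64, dtype=float)
sigma = 2.0

def bump(c):                       # elementary feature centred at c
    return np.exp(-0.5 * ((t - c) / sigma) ** 2)

def dbump(c):                      # derivative with respect to the centre
    return ((t - c) / sigma**2) * bump(c)

centers = np.arange(64)
F = np.stack([bump(c) for c in centers], axis=1)    # primary atoms
D = np.stack([dbump(c) for c in centers], axis=1)   # Taylor interpolators
A = np.hstack([F, D])

y = bump(30.4)                     # one feature at a non-integer position

# ISTA with the L1 penalty applied to the primary coefficients only
lam = 0.05
L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the data-fit gradient
x = np.zeros(A.shape[1])
for _ in range(3000):
    x = x - (A.T @ (A @ x - y)) / L
    x[:64] = np.sign(x[:64]) * np.maximum(np.abs(x[:64]) - lam / L, 0.0)

j = int(np.argmax(np.abs(x[:64])))
tau = x[64 + j] / x[j]             # bump(c + tau) ~ bump(c) + tau * dbump(c)
pos = centers[j] + tau

assert np.linalg.norm(A @ x - y) / np.linalg.norm(y) < 0.1
assert abs(pos - 30.4) < 0.5       # continuous position recovered off-grid
```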

  10. Electric transition dipole moment in pre-Born-Oppenheimer molecular structure theory.

    PubMed

    Simmen, Benjamin; Mátyus, Edit; Reiher, Markus

    2014-10-21

    This paper presents the calculation of the electric transition dipole moment in a pre-Born-Oppenheimer framework. Electrons and nuclei are treated equally in terms of the parametrization of the non-relativistic total wave function, which is written as a linear combination of basis functions constructed from explicitly correlated Gaussian functions and the global vector representation. The integrals of the electric transition dipole moment are derived corresponding to these basis functions in both the length and the velocity representation. The calculations are performed in laboratory-fixed Cartesian coordinates without relying on coordinates which separate the center of mass from the translationally invariant degrees of freedom. The effect of the overall motion is eliminated through translationally invariant integral expressions. The electric transition dipole moment is calculated between two rovibronic levels of the H2 molecule assignable to the lowest rovibrational states of the X (1)Σ(g)(+) and B (1)Σ(u)(+) electronic states in the clamped-nuclei framework. This is the first evaluation of this quantity in a full quantum mechanical treatment without relying on the Born-Oppenheimer approximation.

  11. Application of multivariable search techniques to structural design optimization

    NASA Technical Reports Server (NTRS)

    Jones, R. T.; Hague, D. S.

    1972-01-01

    Multivariable optimization techniques are applied to a particular class of minimum weight structural design problems: the design of an axially loaded, pressurized, stiffened cylinder. Minimum weight designs are obtained by a variety of search algorithms: first- and second-order, elemental perturbation, and randomized techniques. An exterior penalty function approach to constrained minimization is employed. Some comparisons are made with solutions obtained by an interior penalty function procedure. In general, it would appear that an interior penalty function approach may not be as well suited to the class of design problems considered as the exterior penalty function approach. It is also shown that a combination of search algorithms will tend to arrive at an extremal design in a more reliable manner than a single algorithm. The effect of incorporating realistic geometrical constraints on stiffener cross-sections is investigated. A limited comparison is made between minimum weight cylinders designed on the basis of a linear stability analysis and cylinders designed on the basis of empirical buckling data. Finally, a technique for locating more than one extremal is demonstrated.
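    The exterior penalty idea, minimizing the objective plus an increasingly weighted quadratic penalty on constraint violation so that iterates approach the feasible optimum from outside, can be sketched on a toy problem; the objective, constraint, and penalty schedule below are illustrative, not the structural design problem:

```python
import numpy as np

def f(x):                      # objective: squared distance from the origin
    return x[0] ** 2 + x[1] ** 2

def g(x):                      # constraint g(x) <= 0, i.e. x0 + x1 >= 1
    return 1.0 - x[0] - x[1]

def penalized_grad(x, r):
    # gradient of f(x) + r * max(0, g(x))**2, the exterior quadratic penalty
    gx = max(0.0, g(x))
    return np.array([2 * x[0] - 2 * r * gx, 2 * x[1] - 2 * r * gx])

x = np.array([0.0, 0.0])       # infeasible start: the exterior method allows it
for r in [1.0, 10.0, 100.0, 1000.0]:
    lr = 0.5 / (2.0 + 4.0 * r)  # step kept stable as the penalty stiffens
    for _ in range(2000):
        x = x - lr * penalized_grad(x, r)

# exact constrained minimizer is (0.5, 0.5); iterates approach it from outside
assert np.linalg.norm(x - np.array([0.5, 0.5])) < 0.01
assert g(x) > 0.0              # still (slightly) infeasible, as expected
```

    An interior penalty method would instead require a feasible start and a barrier that blows up at the constraint boundary, which is one practical reason the comparison in the abstract can favor the exterior formulation.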

  12. Performance of an Optimally Tuned Range-Separated Hybrid Functional for 0-0 Electronic Excitation Energies.

    PubMed

    Jacquemin, Denis; Moore, Barry; Planchat, Aurélien; Adamo, Carlo; Autschbach, Jochen

    2014-04-08

    Using a set of 40 conjugated molecules, we assess the performance of an "optimally tuned" range-separated hybrid functional in reproducing the experimental 0-0 energies. The selected protocol accounts for the impact of solvation using a corrected linear-response continuum approach and for vibrational corrections through calculations of the zero-point energies of both ground and excited states, and provides basis-set-converged data thanks to the systematic use of diffuse-containing atomic basis sets at all computational steps. It turns out that an optimally tuned long-range corrected hybrid form of the Perdew-Burke-Ernzerhof functional, LC-PBE*, delivers both the smallest mean absolute error (0.20 eV) and standard deviation (0.15 eV) of all tested approaches, while the obtained correlation (0.93) is large but remains slightly smaller than its M06-2X counterpart (0.95). In addition, the efficiency of two other recently developed exchange-correlation functionals, namely SOGGA11-X and ωB97X-D, has been determined in order to allow more complete comparisons with previously published data.

  13. Maximum likelihood orientation estimation of 1-D patterns in Laguerre-Gauss subspaces.

    PubMed

    Di Claudio, Elio D; Jacovitti, Giovanni; Laurenti, Alberto

    2010-05-01

    A method for measuring the orientation of linear (1-D) patterns, based on a local expansion with Laguerre-Gauss circular harmonic (LG-CH) functions, is presented. It relies on the property that the polar separable LG-CH functions span the same space as the 2-D Cartesian separable Hermite-Gauss (2-D HG) functions. Exploiting the simple steerability of the LG-CH functions and the peculiar block-linear relationship between the two sets of expansion coefficients, maximum likelihood (ML) estimates of the orientation and cross section parameters of 1-D patterns are obtained by projecting them into a proper subspace of the 2-D HG family. It is shown in this paper that the conditional ML solution, derived by elimination of the cross section parameters, surprisingly yields the same asymptotic accuracy as the ML solution for known cross section parameters. The accuracy of the conditional ML estimator is compared with that of state-of-the-art solutions on a theoretical basis and via simulation trials. A thorough proof of the key relationship between the LG-CH and the 2-D HG expansions is also provided.
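    The first-order (n = 1) case of steerability is easy to demonstrate: projections onto a pair of filters proportional to the Gaussian x- and y-derivatives steer as cosine and sine of the orientation, so their ratio yields the pattern angle. This sketch uses that simplest case with a synthetic pattern, not the full LG-CH subspace estimator:

```python
import numpy as np

# a 1-D pattern (odd about the window centre) at a known orientation
n = np.arange(-32, 33)
X, Y = np.meshgrid(n, n)
theta_true = 0.6
image = np.sin(0.4 * (X * np.cos(theta_true) + Y * np.sin(theta_true)))

# n = 1 steerable pair: filters proportional to Gaussian x- and y-derivatives
G = np.exp(-(X**2 + Y**2) / (2.0 * 4.0**2))
Gx, Gy = X * G, Y * G

# the two projections steer as (cos, sin) of the pattern orientation
cx = np.sum(image * Gx)
cy = np.sum(image * Gy)
theta_est = np.arctan2(cy, cx) % np.pi    # orientation is defined modulo pi

assert abs(theta_est - theta_true) < 0.01
```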

  14. Sparseness- and continuity-constrained seismic imaging

    NASA Astrophysics Data System (ADS)

    Herrmann, Felix J.

    2005-04-01

    Non-linear solution strategies to the least-squares seismic inverse-scattering problem with sparseness and continuity constraints are proposed. Our approach is designed to (i) deal with substantial amounts of additive noise (SNR < 0 dB); (ii) use the sparseness and locality (both in position and angle) of directional basis functions (such as curvelets and contourlets) on the model: the reflectivity; and (iii) exploit the near invariance of these basis functions under the normal operator, i.e., the scattering-followed-by-imaging operator. Signal-to-noise ratio and the continuity along the imaged reflectors are significantly enhanced by formulating the solution of the seismic inverse problem in terms of an optimization problem. During the optimization, sparseness on the basis and continuity along the reflectors are imposed by jointly minimizing the l1- and anisotropic diffusion/total-variation norms on the coefficients and reflectivity, respectively. [Joint work with Peyman P. Moghaddam was carried out as part of the SINBAD project, with financial support secured through ITF (the Industry Technology Facilitator) from the following organizations: BG Group, BP, ExxonMobil, and SHELL. Additional funding came from the NSERC Discovery Grants 22R81254.]
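    The joint penalty structure, an l1 term for sparseness plus a total-variation term for continuity, can be sketched in one dimension with smoothed norms and plain gradient descent; the signal, weights, and solver are illustrative stand-ins for the curvelet-domain formulation:

```python
import numpy as np

rng = np.random.default_rng(0)
# piecewise-constant "reflectivity" observed in strong additive noise
truth = np.zeros(200)
truth[60:100] = 1.0
truth[130:150] = -0.8
y = truth + 0.5 * rng.normal(size=200)

eps = 1e-2                         # smoothing of the non-differentiable norms
def smooth_abs(z):
    return np.sqrt(z * z + eps)

def grad(x, lam, mu):
    # gradient of 0.5*||x - y||^2 + lam*sum|x_i| + mu*sum|x_{i+1} - x_i|
    g = x - y + lam * x / smooth_abs(x)
    d = np.diff(x)
    tv = d / smooth_abs(d)
    g[:-1] -= mu * tv
    g[1:] += mu * tv
    return g

x = y.copy()
for _ in range(3000):
    x = x - 0.03 * grad(x, lam=0.02, mu=1.0)

# jointly imposing sparseness and continuity should beat the raw data
assert np.linalg.norm(x - truth) < 0.6 * np.linalg.norm(y - truth)
```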

  15. Adsorption of Emerging Munitions Contaminants on Cellulose Surface: A Combined Theoretical and Experimental Investigation.

    PubMed

    Shukla, Manoj K; Poda, Aimee

    2016-06-01

    This manuscript reports the results of an integrated theoretical and experimental investigation of the adsorption of two emerging contaminants (DNAN and FOX-7) and the legacy compound TNT on a cellulose surface. Cellulose was modeled as the trimeric form of a linear chain of 1→4-linked β-D-glucopyranose in the (4)C1 chair conformation. Geometries of the modeled cellulose, the munitions compounds, and their complexes were optimized with the M06-2X functional of density functional theory using the 6-31G(d,p) basis set in the gas phase and in water solution. The effect of water solution was modeled using the CPCM approach. The nature of the potential energy surfaces was ascertained through harmonic vibrational frequency analysis. Interaction energies were corrected for basis set superposition error, and the 6-311G(d,p) basis set was used. Molecular electrostatic potential mapping was performed to understand the reactivity of the investigated systems. It was predicted that the adsorbates will be more weakly adsorbed on the cellulose surface in water solution than in the gas phase.

  16. Gröbner Bases and Generation of Difference Schemes for Partial Differential Equations

    NASA Astrophysics Data System (ADS)

    Gerdt, Vladimir P.; Blinkov, Yuri A.; Mozzhilkin, Vladimir V.

    2006-05-01

    In this paper we present an algorithmic approach to the generation of fully conservative difference schemes for linear partial differential equations. The approach is based on enlargement of the equations in their integral conservation law form by extra integral relations between unknown functions and their derivatives, and on discretization of the obtained system. The structure of the discrete system depends on numerical approximation methods for the integrals occurring in the enlarged system. As a result of the discretization, a system of linear polynomial difference equations is derived for the unknown functions and their partial derivatives. A difference scheme is constructed by elimination of all the partial derivatives. The elimination can be achieved by selecting a proper elimination ranking and by computing a Gröbner basis of the linear difference ideal generated by the polynomials in the discrete system. For these purposes we use the difference form of Janet-like Gröbner bases and their implementation in Maple. As illustration of the described methods and algorithms, we construct a number of difference schemes for Burgers and Falkowich-Karman equations and discuss their numerical properties.

  17. Reduced order surrogate modelling (ROSM) of high dimensional deterministic simulations

    NASA Astrophysics Data System (ADS)

    Mitry, Mina

    Computationally expensive engineering simulations can often hinder the engineering design process. As a result, designers may turn to a less computationally demanding approximate, or surrogate, model to facilitate their design process. However, owing to the curse of dimensionality, classical surrogate models become too computationally expensive for high dimensional data. To address this limitation of classical methods, we develop linear and non-linear Reduced Order Surrogate Modelling (ROSM) techniques. Two algorithms are presented, which are based on a combination of linear/kernel principal component analysis and radial basis functions. These algorithms are applied to subsonic and transonic aerodynamic data, as well as a model for a chemical spill in a channel. The results of this thesis show that ROSM can provide a significant computational benefit over classical surrogate modelling, sometimes at the expense of a minor loss in accuracy.
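    The two-stage construction, a linear reduction of the snapshot data followed by radial-basis-function interpolation of the reduced coefficients over the parameter, can be sketched for the linear (PCA) variant; the snapshot family, mode count, and RBF width below are arbitrary illustrative choices:

```python
import numpy as np

# snapshot data: a field u(x; p) sampled at several parameter values p
x = np.linspace(0.0, 1.0, 200)
p_train = np.linspace(1.0, 3.0, 9)
snapshots = np.array([np.sin(p * np.pi * x) for p in p_train])  # (9, 200)

# linear dimensionality reduction: PCA via SVD of the centred snapshot matrix
mean = snapshots.mean(axis=0)
U, s, Vt = np.linalg.svd(snapshots - mean, full_matrices=False)
modes = Vt[:5]                                 # reduced basis (5 modes)
coeffs = (snapshots - mean) @ modes.T          # (9, 5) training coefficients

# Gaussian RBF interpolation of each reduced coefficient over the parameter
def rbf(a, b, width=0.5):
    return np.exp(-((a[:, None] - b[None, :]) / width) ** 2)

W = np.linalg.solve(rbf(p_train, p_train), coeffs)   # interpolation weights

def surrogate(p):
    c = rbf(np.array([p]), p_train) @ W              # predicted coefficients
    return mean + (c @ modes)[0]                     # lift back to full space

p_test = 2.1
exact = np.sin(p_test * np.pi * x)
pred = surrogate(p_test)
assert np.linalg.norm(pred - exact) < 0.1 * np.linalg.norm(exact)
```

    Replacing the SVD step with a kernelized Gram-matrix eigendecomposition gives the non-linear (kernel PCA) variant mentioned in the abstract.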

  18. Discriminative analysis of early Alzheimer's disease based on two intrinsically anti-correlated networks with resting-state fMRI.

    PubMed

    Wang, Kun; Jiang, Tianzi; Liang, Meng; Wang, Liang; Tian, Lixia; Zhang, Xinqing; Li, Kuncheng; Liu, Zhening

    2006-01-01

    In this work, we proposed a discriminative model of Alzheimer's disease (AD) on the basis of multivariate pattern classification and functional magnetic resonance imaging (fMRI). This model used the correlation/anti-correlation coefficients of two intrinsically anti-correlated networks in resting brains, which have been suggested by two recent studies, as the feature of classification. Pseudo-Fisher Linear Discriminative Analysis (pFLDA) was then performed on the feature space and a linear classifier was generated. Using leave-one-out (LOO) cross validation, our results showed a correct classification rate of 83%. We also compared the proposed model with another one based on the whole brain functional connectivity. Our proposed model outperformed the other one significantly, and this implied that the two intrinsically anti-correlated networks may be a more susceptible part of the whole brain network in the early stage of AD.

  19. Some problems in applications of the linear variational method

    NASA Astrophysics Data System (ADS)

    Pupyshev, Vladimir I.; Montgomery, H. E.

    2015-09-01

    The linear variational method is a standard computational method in quantum mechanics and quantum chemistry. As taught in most classes, the general guidance is to include as many basis functions as practical in the variational wave function. However, if it is desired to study the patterns of energy change accompanying the change of system parameters such as the shape and strength of the potential energy, the problem becomes more complicated. We use one-dimensional systems with a particle in a rectangular or in a harmonic potential confined in an infinite rectangular box to illustrate situations where a variational calculation can give incorrect results. These situations result when the energy of the lowest eigenvalue is strongly dependent on the parameters that describe the shape and strength of the potential. The numerical examples described in this work are provided as cautionary notes for practitioners of numerical variational calculations.
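    A concrete instance of the setup discussed here, a harmonic potential confined in an infinite box and expanded in the box eigenfunctions, shows the two textbook facts the linear variational method guarantees when the parameters are benign: the estimate improves monotonically with basis size and converges to the expected limit (units and parameters below are illustrative):

```python
import numpy as np

# particle in an infinite box [0, L] containing a harmonic well (hbar = m = 1)
L = 10.0
xs = np.linspace(0.0, L, 4001)
dx = xs[1] - xs[0]
V = 0.5 * (xs - L / 2) ** 2            # harmonic potential, omega = 1

def ground_state_energy(nbasis):
    # basis: box eigenfunctions sqrt(2/L) * sin(n*pi*x/L)
    n = np.arange(1, nbasis + 1)
    phi = np.sqrt(2.0 / L) * np.sin(np.outer(n, xs) * np.pi / L)
    T = np.diag(0.5 * (n * np.pi / L) ** 2)      # kinetic energy (diagonal)
    Vmat = (phi * V) @ phi.T * dx                # potential matrix elements
    return np.linalg.eigvalsh(T + Vmat)[0]

e5, e20 = ground_state_energy(5), ground_state_energy(20)

# adding basis functions can only lower the variational estimate, and for a
# wide box the estimate approaches the harmonic-oscillator value 0.5
assert e20 <= e5 + 1e-12
assert abs(e20 - 0.5) < 1e-3
```

    The failures discussed in the abstract arise when a parameter change (e.g. narrowing the box or strengthening the well) moves the exact ground state outside what a fixed, modest basis can represent, so trends in the variational energy no longer track trends in the true energy.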

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, Liang; Abild-Pedersen, Frank

    On the basis of an extensive set of density functional theory calculations, it is shown that a simple scheme provides a fundamental understanding of variations in the transition state energies and structures of reaction intermediates on transition metal surfaces across the periodic table. The scheme is built on the bond order conservation principle and requires a limited set of input data, still achieving transition state energies as a function of simple descriptors with an error smaller than those of approaches based on linear fits to a set of calculated transition state energies. Here, we have applied this approach together with linear scaling of adsorption energies to obtain the energetics of the NH3 decomposition reaction on a series of stepped fcc(211) transition metal surfaces. Moreover, this information is used to establish a microkinetic model for the formation of N2 and H2, thus providing insight into the components of the reaction that determine the activity.

  1. Coupled-cluster and density functional theory studies of the electronic 0-0 transitions of the DNA bases.

    PubMed

    Ovchinnikov, Vasily A; Sundholm, Dage

    2014-04-21

    The 0-0 transitions of the electronic excitation spectra of the lowest tautomers of the four nucleotide (DNA) bases have been studied using linear-response approximate coupled-cluster singles and doubles (CC2) calculations. Excitation energies have also been calculated at the linear-response time-dependent density functional theory (TDDFT) level using the B3LYP functional. Large basis sets have been employed for ensuring that the obtained excitation energies are close to the basis-set limit. Zero-point vibrational energy corrections have been calculated at the B3LYP and CC2 levels for the ground and excited states rendering direct comparisons with high-precision spectroscopy measurements feasible. The obtained excitation energies for the 0-0 transitions of the first excited states of guanine tautomers are in good agreement with experimental values confirming the experimental assignment of the energetic order of the tautomers of the DNA bases. For the experimentally detected guanine tautomers, the first excited state corresponds to a π→π* transition, whereas for the tautomers of adenine, thymine, and the lowest tautomer of cytosine the transition to the first excited state has n →π* character. The calculations suggest that the 0-0 transitions of adenine, thymine, and cytosine are not observed in the absorption spectrum due to the weak oscillator strength of the formally symmetry-forbidden transitions, while 0-0 transitions of thymine have been detected in fluorescence excitation spectra.

  2. Stable orthogonal local discriminant embedding for linear dimensionality reduction.

    PubMed

    Gao, Quanxue; Ma, Jingjie; Zhang, Hailin; Gao, Xinbo; Liu, Yamin

    2013-07-01

    Manifold learning is widely used in machine learning and pattern recognition. However, manifold learning only considers the similarity of samples belonging to the same class and ignores the within-class variation of the data, which impairs the generalization and stability of the algorithms. To address this, we construct an adjacency graph to model the intraclass variation that characterizes the most important properties, such as the diversity of patterns, and then incorporate the diversity into the discriminant objective function for linear dimensionality reduction. Finally, we introduce an orthogonality constraint on the basis vectors and propose an orthogonal algorithm called stable orthogonal local discriminant embedding. Experimental results on several standard image databases demonstrate the effectiveness of the proposed dimensionality reduction approach.

  3. Demonstration of Detection and Ranging Using Solvable Chaos

    NASA Technical Reports Server (NTRS)

    Corron, Ned J.; Stahl, Mark T.; Blakely, Jonathan N.

    2013-01-01

    Acoustic experiments demonstrate a novel approach to ranging and detection that exploits the properties of a solvable chaotic oscillator. This nonlinear oscillator includes an ordinary differential equation and a discrete switching condition. The chaotic waveform generated by this hybrid system is used as the transmitted waveform. The oscillator admits an exact analytic solution that can be written as the linear convolution of binary symbols and a single basis function. This linear representation enables coherent reception using a simple analog matched filter and without need for digital sampling or signal processing. An audio frequency implementation of the transmitter and receiver is described. Successful acoustic ranging measurements are presented to demonstrate the viability of the approach.
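    The receiver principle, matched filtering of a waveform formed as the linear convolution of binary symbols with a fixed basis function, can be sketched with a generic pulse standing in for the chaotic oscillator's basis function; all waveform parameters below are made up:

```python
import numpy as np

rng = np.random.default_rng(0)
pulse = np.hanning(16)                     # stand-in basis function
symbols = rng.choice([-1.0, 1.0], size=64) # binary symbol sequence

# transmitted waveform: linear convolution of the symbols with the basis pulse
T = 16
up = np.zeros(len(symbols) * T)
up[::T] = symbols
tx = np.convolve(up, pulse)

# received signal: attenuated, delayed echo in additive noise
delay = 237
rx = np.zeros(len(tx) + 400)
rx[delay:delay + len(tx)] += 0.5 * tx
rx += 0.05 * rng.normal(size=len(rx))

# matched reception: correlate against the transmitted waveform; the
# correlation peak gives the round-trip delay, hence the range
corr = np.correlate(rx, tx, mode="valid")
delay_est = int(np.argmax(corr))

assert delay_est == delay
```

    In the actual system the exact analytic solution of the oscillator is what makes this correlation realizable as a simple analog matched filter, with no digital sampling required.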

  4. Estimation of ΔR/R values by benchmark study of the Mössbauer isomer shifts for Ru, Os complexes using relativistic DFT calculations

    NASA Astrophysics Data System (ADS)

    Kaneko, Masashi; Yasuhara, Hiroki; Miyashita, Sunao; Nakashima, Satoru

    2017-11-01

    The present study applies all-electron relativistic DFT calculations with the Douglas-Kroll-Hess (DKH) Hamiltonian to ten sets each of Ru and Os compounds. We perform a benchmark investigation of three density functionals (BP86, B3LYP and B2PLYP) using the segmented all-electron relativistically contracted (SARC) basis set against the experimental Mössbauer isomer shifts for the 99Ru and 189Os nuclides. Geometry optimizations at the BP86 level of theory locate the structures at local minima. We calculate the contact density from the wavefunction obtained in a single-point calculation. All functionals show a good linear correlation with the experimental isomer shifts for both 99Ru and 189Os; in particular, the B3LYP functional gives a stronger correlation than the BP86 and B2PLYP functionals. A comparison of contact densities between the SARC and well-tempered basis sets (WTBS) indicated that numerical convergence of the contact density cannot be obtained, but that the reproducibility is less sensitive to the choice of basis set. We also estimate the values of ΔR/R, an important nuclear constant, for the 99Ru and 189Os nuclides using the benchmark results. The sign of the calculated ΔR/R values is consistent with the predicted data for 99Ru and 189Os. At the B3LYP level with the SARC basis set, we obtain ΔR/R values for 99Ru and 189Os (36.2 keV) of 2.35×10-4 and -0.20×10-4, respectively.

  5. Time-dependent density functional theory description of total photoabsorption cross sections

    NASA Astrophysics Data System (ADS)

    Tenorio, Bruno Nunes Cabral; Nascimento, Marco Antonio Chaer; Rocha, Alexandre Braga

    2018-02-01

    The time-dependent version of density functional theory (TDDFT) has been used to calculate the total photoabsorption cross section of a number of molecules, namely, benzene, pyridine, furan, pyrrole, thiophene, phenol, naphthalene, and anthracene. The discrete electronic pseudo-spectra, obtained in an L2 basis-set calculation, were used in an analytic continuation procedure to obtain the photoabsorption cross sections. The ammonia molecule was chosen as a model system to compare the results obtained with TDDFT to those obtained with the linear response coupled cluster approach in order to make a link with our previous work and establish benchmarks.

  6. A minimization method on the basis of embedding the feasible set and the epigraph

    NASA Astrophysics Data System (ADS)

    Zabotin, I. Ya; Shulgina, O. N.; Yarullin, R. S.

    2016-11-01

    We propose a conditional minimization method for convex nonsmooth functions which belongs to the class of cutting-plane methods. While constructing iteration points, the feasible set and the epigraph of the objective function are approximated by polyhedral sets, so the auxiliary problems of constructing iteration points are linear programming problems. During optimization it is possible to update the sets approximating the epigraph; these updates are performed by periodically dropping the cutting planes that form the embedding sets. Convergence of the proposed method is proved, and some realizations of the method are discussed.
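    As an illustration of the general cutting-plane idea (a Kelley-type sketch, not the authors' specific method), the epigraph of a convex function can be approximated by accumulated linear cuts, with each iterate obtained from a small LP:

    ```python
    import numpy as np
    from scipy.optimize import linprog

    def cutting_plane_min(f, subgrad, lo, hi, iters=30):
        """Kelley's cutting-plane method for a 1-D convex f on [lo, hi].

        Each cut t >= f(xk) + g_k (x - xk) under-approximates the epigraph;
        the next iterate minimizes t over the polyhedral model (an LP).
        """
        cuts = []                      # pairs (g_k, b_k) encoding t >= g_k * x + b_k
        x = 0.5 * (lo + hi)
        for _ in range(iters):
            g = subgrad(x)
            cuts.append((g, f(x) - g * x))
            # LP variables (x, t): minimize t  s.t.  g_k * x - t <= -b_k
            A_ub = [[gk, -1.0] for gk, _ in cuts]
            b_ub = [-bk for _, bk in cuts]
            res = linprog([0.0, 1.0], A_ub=A_ub, b_ub=b_ub,
                          bounds=[(lo, hi), (None, None)])
            x = res.x[0]
        return x

    # hypothetical example: minimize (x - 1)^2 on [-5, 5]
    x_star = cutting_plane_min(lambda x: (x - 1.0) ** 2,
                               lambda x: 2.0 * (x - 1.0), -5.0, 5.0)
    ```

    Dropping old cuts, as the abstract describes, would bound the LP size; this sketch keeps all cuts for simplicity.
    
    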

  7. Energy-switching potential energy surface for the water molecule revisited: A highly accurate single-sheeted form.

    PubMed

    Galvão, B R L; Rodrigues, S P J; Varandas, A J C

    2008-07-28

    A global ab initio potential energy surface is proposed for the water molecule, obtained by energy-switching/merging a highly accurate isotope-dependent local potential function reported by Polyansky et al. [Science 299, 539 (2003)] with a global form of the many-body expansion type, suitably adapted to account explicitly for the dynamical correlation and parametrized from extensive accurate multireference configuration interaction energies extrapolated to the complete basis set limit. The new function also mimics the complicated Sigma/Pi crossing that arises at linear geometries of the water molecule.

  8. Electronic Structure Methods Based on Density Functional Theory

    DTIC Science & Technology

    2010-01-01

    Chapter in the ASM Handbook, Volume 22A: Fundamentals of Modeling for Metals Processing, 2010. PAO Case Number: 88ABW-2009-3258; Clearance Date: 16 Jul. ...are represented using a linear combination, or basis, of plane waves. Over time several methods were developed to avoid the large number of planewaves

  9. From master slave interferometry to complex master slave interferometry: theoretical work

    NASA Astrophysics Data System (ADS)

    Rivet, Sylvain; Bradu, Adrian; Maria, Michael; Feuchter, Thomas; Leick, Lasse; Podoleanu, Adrian

    2018-03-01

    A general theoretical framework is described to establish the advantages and drawbacks of two novel Fourier domain optical coherence tomography (OCT) methods, Master/Slave Interferometry (MSI) and its extension, Complex Master/Slave Interferometry (CMSI). Instead of linearizing the digital data representing the channeled spectrum before a Fourier transform is applied to it (as in standard OCT methods), the channeled spectrum is decomposed on a basis of local oscillations. This removes the need for linearization, which is generally time consuming, before any calculation of the depth profile in the range of interest. In this model two functions, g and h, are introduced. The function g describes the modulation chirp of the channeled spectrum signal due to nonlinearities in the decoding process from wavenumber to time, while h describes the dispersion in the interferometer. The use of these two functions brings two major improvements over previous implementations of the MSI method. The paper details the steps required to obtain the functions g and h, and expresses the CMSI in a matrix formulation that makes the method easy to implement in LabVIEW using parallel multi-core programming.

  10. Curvature and frontier orbital energies in density functional theory

    NASA Astrophysics Data System (ADS)

    Kronik, Leeor; Stein, Tamar; Autschbach, Jochen; Govind, Niranjan; Baer, Roi

    2013-03-01

    Perdew et al. [Phys. Rev. Lett. 49, 1691 (1982)] discovered and proved two different properties of exact Kohn-Sham density functional theory (DFT): (i) the exact total energy versus particle number is a series of linear segments between integer electron points; (ii) across an integer number of electrons, the exchange-correlation potential may ``jump'' by a constant, known as the derivative discontinuity (DD). Here, we show analytically that in both the original and the generalized Kohn-Sham formulations of DFT, the two are in fact two sides of the same coin. The absence of a derivative discontinuity necessitates deviation from piecewise linearity, and the latter can be used to correct for the former, thereby restoring the physical meaning of the orbital energies. Using selected small molecules, we show that this results in a simple correction scheme for any underlying functional, including semi-local and hybrid functionals as well as Hartree-Fock theory, suggesting a practical correction for the infamous gap problem of DFT. Moreover, we show that optimally tuned range-separated hybrid functionals can inherently minimize both DD and curvature, thus requiring no correction, and that this can be used as a sound theoretical basis for novel tuning strategies.
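    The piecewise-linearity condition referenced above can be written explicitly (standard notation; the curvature is the deviation of E(N) from this straight-line behavior between integers):

    ```latex
    E(N_{0} + \delta) \;=\; (1-\delta)\,E(N_{0}) \;+\; \delta\,E(N_{0}+1), \qquad 0 \le \delta \le 1
    ```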

  11. Orthogonal sparse linear discriminant analysis

    NASA Astrophysics Data System (ADS)

    Liu, Zhonghua; Liu, Gang; Pu, Jiexin; Wang, Xiaohong; Wang, Haijun

    2018-03-01

    Linear discriminant analysis (LDA) is a linear feature extraction approach that has received much attention. On the basis of LDA, researchers have proposed many variant versions; however, these variants do not fully resolve LDA's inherent problems. The major disadvantages of classical LDA are as follows. First, it is sensitive to outliers and noise. Second, only the global discriminant structure is preserved, while the local discriminant information is ignored. In this paper, we present a new orthogonal sparse linear discriminant analysis (OSLDA) algorithm. A k-nearest-neighbour graph is first constructed to preserve the local discriminant information of the sample points. Then, an L2,1-norm constraint on the projection matrix is used as the loss function, which makes the proposed method robust to outliers in the data. Extensive experiments have been performed on several standard public image databases, and the results demonstrate the effectiveness of the proposed OSLDA algorithm.
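    OSLDA itself is not available in standard libraries; as a reference point, the classical two-class Fisher criterion it builds on can be sketched in a few lines of NumPy (synthetic data, illustrative only):

    ```python
    import numpy as np

    def fisher_lda(X0, X1):
        """Classical two-class LDA direction: w = Sw^{-1} (mu1 - mu0)."""
        mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
        # pooled within-class scatter matrix
        Sw = np.cov(X0, rowvar=False) * (len(X0) - 1) \
           + np.cov(X1, rowvar=False) * (len(X1) - 1)
        return np.linalg.solve(Sw, mu1 - mu0)

    rng = np.random.default_rng(0)
    X0 = rng.normal(0.0, 1.0, (200, 5))   # class 0
    X1 = rng.normal(2.0, 1.0, (200, 5))   # class 1, shifted mean
    w = fisher_lda(X0, X1)
    thr = 0.5 * (X0.mean(0) + X1.mean(0)) @ w   # midpoint decision threshold
    acc = 0.5 * ((X0 @ w < thr).mean() + (X1 @ w > thr).mean())
    ```

    OSLDA replaces the squared loss implicit here with an L2,1-norm loss and adds orthogonality and locality (k-NN graph) constraints.
    
    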

  12. Discrete quasi-linear viscoelastic damping analysis of connective tissues, and the biomechanics of stretching.

    PubMed

    Babaei, Behzad; Velasquez-Mao, Aaron J; Thomopoulos, Stavros; Elson, Elliot L; Abramowitch, Steven D; Genin, Guy M

    2017-05-01

    The time- and frequency-dependent properties of connective tissues define their physiological function, but are notoriously difficult to characterize. Well-established tools such as linear viscoelasticity and the Fung quasi-linear viscoelastic (QLV) model impose forms on responses that can mask true tissue behavior. Here, we applied a more general discrete quasi-linear viscoelastic (DQLV) model to identify the static and dynamic time- and frequency-dependent behavior of rabbit medial collateral ligaments. Unlike the Fung QLV approach, the DQLV approach revealed that energy dissipation is elevated at a loading period of ∼10 s. The fitting algorithm was applied to the entire loading history of each specimen, enabling accurate estimation of the material's viscoelastic relaxation spectrum from data gathered from transient rather than only steady states. The application of the DQLV method to cyclic loading regimens has broad applicability for the characterization of biological tissues, and the results suggest a mechanistic basis for the stretching regimens most favored by athletic trainers. Copyright © 2017 Elsevier Ltd. All rights reserved.
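    The discrete-spectrum idea (not the authors' exact DQLV formulation) can be illustrated by fitting a non-negative Prony series to relaxation data: with the time constants fixed on a grid, the spectrum weights follow from non-negative least squares.

    ```python
    import numpy as np
    from scipy.optimize import nnls

    t = np.linspace(0.0, 50.0, 200)
    # synthetic stress-relaxation data: two Maxwell elements plus an equilibrium term
    G = 1.0 + 0.5 * np.exp(-t / 2.0) + 0.3 * np.exp(-t / 10.0)

    taus = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0])   # candidate time constants
    # design matrix: one exponential column per tau, plus a constant column
    A = np.hstack([np.exp(-t[:, None] / taus), np.ones((len(t), 1))])
    weights, resid = nnls(A, G)          # non-negative discrete relaxation spectrum
    G_fit = A @ weights
    ```

    The non-negativity constraint is what makes the recovered spectrum physically interpretable as a set of Maxwell-element stiffnesses.
    
    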

  13. Discrete quasi-linear viscoelastic damping analysis of connective tissues, and the biomechanics of stretching

    PubMed Central

    Babaei, Behzad; Velasquez-Mao, Aaron J.; Thomopoulos, Stavros; Elson, Elliot L.; Abramowitch, Steven D.; Genin, Guy M.

    2017-01-01

    The time- and frequency-dependent properties of connective tissues define their physiological function, but are notoriously difficult to characterize. Well-established tools such as linear viscoelasticity and the Fung quasi-linear viscoelastic (QLV) model impose forms on responses that can mask true tissue behavior. Here, we applied a more general discrete quasi-linear viscoelastic (DQLV) model to identify the static and dynamic time- and frequency-dependent behavior of rabbit medial collateral ligaments. Unlike the Fung QLV approach, the DQLV approach revealed that energy dissipation is elevated at a loading period of ~10 seconds. The fitting algorithm was applied to the entire loading history of each specimen, enabling accurate estimation of the material's viscoelastic relaxation spectrum from data gathered from transient rather than only steady states. The application of the DQLV method to cyclic loading regimens has broad applicability for the characterization of biological tissues, and the results suggest a mechanistic basis for the stretching regimens most favored by athletic trainers. PMID:28088071

  14. Very high order discontinuous Galerkin method in elliptic problems

    NASA Astrophysics Data System (ADS)

    Jaśkowiec, Jan

    2017-09-01

    The paper deals with a high-order discontinuous Galerkin (DG) method in which the approximation order exceeds 20 and reaches 100, and even 1000 in the one-dimensional case. To achieve such a high-order solution, the DG method has to be combined with the finite difference method. The basis functions of this method are high-order orthogonal Legendre or Chebyshev polynomials. These polynomials are defined in one-dimensional space (1D), but they can easily be adapted to two-dimensional space (2D) by cross products. There are no nodes in the elements, and the degrees of freedom are the coefficients of a linear combination of basis functions. In this sort of analysis reference elements are needed, and hence so are the transformations of the reference element into the real one, as well as the transformations connected with the mesh skeleton. Due to the orthogonality of the basis functions, the resulting matrices are sparse even for finite elements with more than a thousand degrees of freedom. In consequence, the truncation errors are limited and very high-order analysis can be performed. The paper is illustrated with a set of 1D and 2D benchmark examples for elliptic problems. The examples demonstrate the great effectiveness of the method, which can shorten calculation times by a factor of several hundred.
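    The sparsity claim rests on orthogonality of the basis: for Legendre polynomials the element mass matrix is exactly diagonal, which a short NumPy check confirms (illustrative only, not the paper's code; order `n = 8` is arbitrary).

    ```python
    import numpy as np
    from numpy.polynomial import legendre

    n = 8                                    # number of Legendre modes in this sketch
    x, w = legendre.leggauss(2 * n)          # quadrature exact for degree < 4n
    # rows of P are P_0 ... P_{n-1} evaluated at the quadrature nodes
    P = np.stack([legendre.legval(x, [0.0] * i + [1.0]) for i in range(n)])
    M = (P * w) @ P.T                        # M_ij = integral of P_i P_j over [-1, 1]
    # Orthogonality gives a diagonal matrix with entries 2 / (2i + 1)
    ```

    In a DG stiffness assembly this diagonality means the (block) mass matrix is trivially invertible, no matter how high the order.
    
    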

  15. Very high order discontinuous Galerkin method in elliptic problems

    NASA Astrophysics Data System (ADS)

    Jaśkowiec, Jan

    2018-07-01

    The paper deals with a high-order discontinuous Galerkin (DG) method in which the approximation order exceeds 20 and reaches 100, and even 1000 in the one-dimensional case. To achieve such a high-order solution, the DG method has to be combined with the finite difference method. The basis functions of this method are high-order orthogonal Legendre or Chebyshev polynomials. These polynomials are defined in one-dimensional space (1D), but they can easily be adapted to two-dimensional space (2D) by cross products. There are no nodes in the elements, and the degrees of freedom are the coefficients of a linear combination of basis functions. In this sort of analysis reference elements are needed, and hence so are the transformations of the reference element into the real one, as well as the transformations connected with the mesh skeleton. Due to the orthogonality of the basis functions, the resulting matrices are sparse even for finite elements with more than a thousand degrees of freedom. In consequence, the truncation errors are limited and very high-order analysis can be performed. The paper is illustrated with a set of 1D and 2D benchmark examples for elliptic problems. The examples demonstrate the great effectiveness of the method, which can shorten calculation times by a factor of several hundred.

  16. ωB97X-V: A 10-parameter, range-separated hybrid, generalized gradient approximation density functional with nonlocal correlation, designed by a survival-of-the-fittest strategy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mardirossian, Narbe; Head-Gordon, Martin

    2013-12-18

    A 10-parameter, range-separated hybrid (RSH), generalized gradient approximation (GGA) density functional with nonlocal correlation (VV10) is presented in this paper. Instead of truncating the B97-type power series inhomogeneity correction factors (ICF) for the exchange, same-spin correlation, and opposite-spin correlation functionals uniformly, all 16 383 combinations of the linear parameters up to fourth order (m = 4) are considered. These functionals are individually fit to a training set and the resulting parameters are validated on a primary test set in order to identify the 3 optimal ICF expansions. Through this procedure, it is discovered that the functional that performs best on the training and primary test sets has 7 linear parameters, with 3 additional nonlinear parameters from range-separation and nonlocal correlation. The resulting density functional, ωB97X-V, is further assessed on a secondary test set, the parallel-displaced coronene dimer, as well as several geometry datasets. Finally, the basis set dependence and integration grid sensitivity of ωB97X-V are analyzed and documented in order to facilitate the use of the functional.

  17. Lump solutions to nonlinear partial differential equations via Hirota bilinear forms

    NASA Astrophysics Data System (ADS)

    Ma, Wen-Xiu; Zhou, Yuan

    2018-02-01

    Lump solutions are analytical rational function solutions localized in all directions in space. We analyze a class of lump solutions, generated from quadratic functions, to nonlinear partial differential equations. The basis of success is the Hirota bilinear formulation and the primary object is the class of positive multivariate quadratic functions. A complete determination of quadratic functions positive in space and time is given, and positive quadratic functions are characterized as sums of squares of linear functions. Necessary and sufficient conditions for positive quadratic functions to solve Hirota bilinear equations are presented, and such polynomial solutions yield lump solutions to nonlinear partial differential equations under the dependent variable transformations u = 2(ln ⁡ f) x and u = 2(ln ⁡ f) xx, where x is one spatial variable. Applications are made for a few generalized KP and BKP equations.
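    In the abstract's notation, the structural result can be restated schematically (our condensed paraphrase, not the paper's full theorem): positive quadratic functions are sums of squares of linear functions plus a positive constant,

    ```latex
    f \;=\; \sum_{i=1}^{m}\bigl(a_{i}x + b_{i}y + c_{i}t + d_{i}\bigr)^{2} + e, \quad e > 0,
    \qquad u \;=\; 2\,(\ln f)_{xx},
    ```

    and whenever such an f solves the associated Hirota bilinear equation, u is a nonsingular rational solution decaying in all spatial directions, i.e., a lump.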

  18. Relaxation of Actinide Surfaces: An All Electron Study

    NASA Astrophysics Data System (ADS)

    Atta-Fynn, Raymond; Dholabhai, Pratik; Ray, Asok

    2006-10-01

    Fully relativistic full potential density functional calculations with a linearized augmented plane wave plus local orbitals basis (LAPW + lo) have been performed to investigate the relaxations of heavy actinide surfaces, namely the (111) surface of fcc δ-Pu and the (0001) surface of dhcp Am using WIEN2k. This code uses the LAPW + lo method with the unit cell divided into non-overlapping atom-centered spheres and an interstitial region. The APW+lo basis is used to describe all s, p, d, and f states and LAPW basis to describe all higher angular momentum states. Each surface was modeled by a three-layer periodic slab separated by 60 Bohr vacuum with four atoms per surface unit cell. In general, we have found a contraction of the interlayer separations for both Pu and Am. We will report, in detail, the electronic and geometric structures of the relaxed surfaces and comparisons with the respective non-relaxed surfaces.

  19. Comparative Study of SVM Methods Combined with Voxel Selection for Object Category Classification on fMRI Data

    PubMed Central

    Song, Sutao; Zhan, Zhichao; Long, Zhiying; Zhang, Jiacai; Yao, Li

    2011-01-01

    Background: The support vector machine (SVM) has been widely used as an accurate and reliable method to decipher brain patterns from functional MRI (fMRI) data. Previous studies have not found a clear benefit for non-linear (polynomial kernel) SVM over linear SVM. Here, a more effective non-linear SVM using a radial basis function (RBF) kernel is compared with linear SVM. Unlike traditional studies, which focused either merely on the evaluation of different types of SVM or on voxel selection methods, we aimed to investigate the overall performance of linear and RBF SVM for fMRI classification, together with voxel selection schemes, in terms of classification accuracy and computation time. Methodology/Principal Findings: Six different voxel selection methods were employed to decide which voxels of the fMRI data would be included in SVM classifiers with linear and RBF kernels in classifying 4-category objects. The overall performances of the voxel selection and classification methods were then compared. Results showed that: (1) voxel selection had an important impact on the classification accuracy of the classifiers: in a relatively low-dimensional feature space, RBF SVM outperformed linear SVM significantly; in a relatively high-dimensional space, linear SVM performed better than its counterpart; (2) considering classification accuracy and computation time holistically, linear SVM with relatively more voxels as features and RBF SVM with a small set of voxels (after PCA) achieved better accuracy in less time. Conclusions/Significance: The present work provides the first empirical comparison of linear and RBF SVM in the classification of fMRI data combined with voxel selection methods. Based on the findings, if only classification accuracy is of concern, RBF SVM with an appropriately small set of voxels and linear SVM with relatively more voxels are the two suggested solutions; if computation time matters more, RBF SVM with a relatively small set of voxels, keeping part of the principal components as features, is the better choice. PMID:21359184

  20. Comparative study of SVM methods combined with voxel selection for object category classification on fMRI data.

    PubMed

    Song, Sutao; Zhan, Zhichao; Long, Zhiying; Zhang, Jiacai; Yao, Li

    2011-02-16

    The support vector machine (SVM) has been widely used as an accurate and reliable method to decipher brain patterns from functional MRI (fMRI) data. Previous studies have not found a clear benefit for non-linear (polynomial kernel) SVM over linear SVM. Here, a more effective non-linear SVM using a radial basis function (RBF) kernel is compared with linear SVM. Unlike traditional studies, which focused either merely on the evaluation of different types of SVM or on voxel selection methods, we aimed to investigate the overall performance of linear and RBF SVM for fMRI classification, together with voxel selection schemes, in terms of classification accuracy and computation time. Six different voxel selection methods were employed to decide which voxels of the fMRI data would be included in SVM classifiers with linear and RBF kernels in classifying 4-category objects. The overall performances of the voxel selection and classification methods were then compared. Results showed that: (1) voxel selection had an important impact on the classification accuracy of the classifiers: in a relatively low-dimensional feature space, RBF SVM outperformed linear SVM significantly; in a relatively high-dimensional space, linear SVM performed better than its counterpart; (2) considering classification accuracy and computation time holistically, linear SVM with relatively more voxels as features and RBF SVM with a small set of voxels (after PCA) achieved better accuracy in less time. The present work provides the first empirical comparison of linear and RBF SVM in the classification of fMRI data combined with voxel selection methods. Based on the findings, if only classification accuracy is of concern, RBF SVM with an appropriately small set of voxels and linear SVM with relatively more voxels are the two suggested solutions; if computation time matters more, RBF SVM with a relatively small set of voxels, keeping part of the principal components as features, is the better choice.
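    A minimal scikit-learn comparison in the spirit of the study (synthetic 2-D data rather than fMRI voxels, with `gamma` chosen by hand) shows the regime where the RBF kernel wins over the linear one:

    ```python
    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(1)
    # two concentric rings: linearly inseparable, easy for an RBF kernel
    r = np.r_[rng.uniform(0.0, 1.0, 200), rng.uniform(2.0, 3.0, 200)]
    th = rng.uniform(0.0, 2.0 * np.pi, 400)
    X = np.c_[r * np.cos(th), r * np.sin(th)]
    y = np.r_[np.zeros(200), np.ones(200)]

    acc_linear = SVC(kernel="linear").fit(X, y).score(X, y)
    acc_rbf = SVC(kernel="rbf", gamma=1.0).fit(X, y).score(X, y)
    ```

    On high-dimensional fMRI features the picture can reverse, as the abstract reports: with many voxels, a linear kernel already spans a rich hypothesis space.
    
    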

  1. A novel upwind stabilized discontinuous finite element angular framework for deterministic dose calculations in magnetic fields.

    PubMed

    Yang, R; Zelyak, O; Fallone, B G; St-Aubin, J

    2018-01-30

    Angular discretization impacts nearly every aspect of a deterministic solution to the linear Boltzmann transport equation, especially in the presence of magnetic fields, as modeled by a streaming operator in angle. In this work a novel stabilization treatment of the magnetic field term is developed for an angular finite element discretization on the unit sphere, specifically involving piecewise partitioning of path integrals along curved element edges into uninterrupted segments of incoming and outgoing flux, with outgoing components updated iteratively. Correct order-of-accuracy for this angular framework is verified using the method of manufactured solutions for linear, quadratic, and cubic basis functions in angle. Higher order basis functions were found to reduce the error, especially in strong magnetic fields and low density media. We combine an angular finite element mesh respecting octant boundaries on the unit sphere with spatial Cartesian voxel elements to guarantee an unambiguous transport sweep ordering in space. Accuracy for a dosimetrically challenging scenario involving bone and air in the presence of a 1.5 T parallel magnetic field is validated against the Monte Carlo package GEANT4. Accuracy and relative computational efficiency were investigated for various angular discretization parameters. A mesh of 32 angular elements with quadratic basis functions yielded a reasonable compromise, with gamma passing rates of 99.96% (96.22%) for a 2%/2 mm (1%/1 mm) criterion. A rotational transformation of the spatial calculation geometry is performed to orient an arbitrary magnetic field vector along the z-axis, a requirement for a constant azimuthal angular sweep ordering. Working on the unit sphere, we apply the same rotational transformation to the angular domain to align its octants with the rotated Cartesian mesh. Simulating an oblique 1.5 T magnetic field against GEANT4 yielded gamma passing rates of 99.42% (95.45%) for a 2%/2 mm (1%/1 mm) criterion.

  2. A novel upwind stabilized discontinuous finite element angular framework for deterministic dose calculations in magnetic fields

    NASA Astrophysics Data System (ADS)

    Yang, R.; Zelyak, O.; Fallone, B. G.; St-Aubin, J.

    2018-02-01

    Angular discretization impacts nearly every aspect of a deterministic solution to the linear Boltzmann transport equation, especially in the presence of magnetic fields, as modeled by a streaming operator in angle. In this work a novel stabilization treatment of the magnetic field term is developed for an angular finite element discretization on the unit sphere, specifically involving piecewise partitioning of path integrals along curved element edges into uninterrupted segments of incoming and outgoing flux, with outgoing components updated iteratively. Correct order-of-accuracy for this angular framework is verified using the method of manufactured solutions for linear, quadratic, and cubic basis functions in angle. Higher order basis functions were found to reduce the error, especially in strong magnetic fields and low density media. We combine an angular finite element mesh respecting octant boundaries on the unit sphere with spatial Cartesian voxel elements to guarantee an unambiguous transport sweep ordering in space. Accuracy for a dosimetrically challenging scenario involving bone and air in the presence of a 1.5 T parallel magnetic field is validated against the Monte Carlo package GEANT4. Accuracy and relative computational efficiency were investigated for various angular discretization parameters. A mesh of 32 angular elements with quadratic basis functions yielded a reasonable compromise, with gamma passing rates of 99.96% (96.22%) for a 2%/2 mm (1%/1 mm) criterion. A rotational transformation of the spatial calculation geometry is performed to orient an arbitrary magnetic field vector along the z-axis, a requirement for a constant azimuthal angular sweep ordering. Working on the unit sphere, we apply the same rotational transformation to the angular domain to align its octants with the rotated Cartesian mesh. Simulating an oblique 1.5 T magnetic field against GEANT4 yielded gamma passing rates of 99.42% (95.45%) for a 2%/2 mm (1%/1 mm) criterion.
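    The rotation described above (aligning an arbitrary field vector with the z-axis) is a standard Rodrigues construction; a hedged NumPy sketch, not the authors' implementation:

    ```python
    import numpy as np

    def rotation_to_z(b):
        """Rodrigues rotation matrix taking the direction of b onto e_z."""
        b = np.asarray(b, dtype=float)
        b = b / np.linalg.norm(b)
        z = np.array([0.0, 0.0, 1.0])
        v = np.cross(b, z)                 # rotation axis (unnormalized), |v| = sin(theta)
        c, s = b @ z, np.linalg.norm(v)
        if s < 1e-12:                      # b already (anti)parallel to z
            return np.eye(3) if c > 0 else np.diag([1.0, -1.0, -1.0])
        K = np.array([[0.0, -v[2], v[1]],
                      [v[2], 0.0, -v[0]],
                      [-v[1], v[0], 0.0]])
        # Rodrigues formula with the unnormalized axis: R = I + K + K^2 (1-c)/s^2
        return np.eye(3) + K + K @ K * ((1.0 - c) / s**2)

    B = np.array([1.0, 1.0, 1.0])          # hypothetical oblique field direction
    R = rotation_to_z(B)
    aligned = R @ (B / np.linalg.norm(B))  # should be (0, 0, 1)
    ```

    Applying the same R to the angular quadrature directions keeps the sweep ordering consistent, as the abstract notes.
    
    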

  3. Exchange field effect in the crystal-field ground state of Ce M Al 4 Si 2

    DOE PAGES

    Chen, K.; Strigari, F.; Sundermann, M.; ...

    2016-09-06

    The crystal-field ground-state wave functions of the tetragonal, magnetically ordering Kondo lattice materials CeMAl 4Si 2 (M = Rh, Ir, and Pt) are determined in this paper with low-temperature linearly polarized soft-x-ray absorption spectroscopy, and estimates for the crystal-field splittings are given from the temperature evolution of the linear dichroism. Values for the dominant exchange field in the magnetically ordered phases can be obtained by fitting the influence of magnetic order on the linear dichroism. The direction of the required exchange field is || c for the antiferromagnetic Rh and Ir compounds, with a corresponding strength of the order of λ ex ≈ 6 meV (65 K). Finally, the presence of Kondo screening in the Rh and Ir compounds is demonstrated on the basis of the absorption due to f 0 in the initial state.

  4. Performance improvements of wavelength-shifting-fiber neutron detectors using high-resolution positioning algorithms

    DOE PAGES

    Wang, C. L.

    2016-05-17

    On the basis of the FluoroBancroft linear-algebraic method [S.B. Andersson, Opt. Exp. 16, 18714 (2008)], three highly resolved positioning methods are proposed for wavelength-shifting fiber (WLSF) neutron detectors. Using a Gaussian or exponential-decay light-response function (LRF), the non-linear relation of photon-number profiles vs. x-pixels was linearized and neutron positions were determined. The proposed algorithms give an average position error of 0.03-0.08 pixels, much smaller than that (0.29 pixels) of a traditional maximum photon algorithm (MPA). The new algorithms result in better detector uniformity, less position misassignment (ghosting), better spatial resolution, and an equivalent or better instrument resolution in powder diffraction than the MPA. Moreover, these characteristics will facilitate broader applications of WLSF detectors at time-of-flight neutron powder diffraction beamlines, including single-crystal diffraction and texture analysis.
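    The linearization idea can be illustrated for a Gaussian LRF: taking the logarithm of the photon profile turns center estimation into a linear least-squares (parabola) fit. The exact FluoroBancroft algebra differs; this is a schematic with made-up numbers:

    ```python
    import numpy as np

    def gaussian_center(x, counts):
        """ln of a Gaussian profile is a parabola; its vertex is the center."""
        a, b, _ = np.polyfit(x, np.log(counts), 2)
        return -b / (2.0 * a)

    pixels = np.arange(16.0)
    true_center = 7.3                       # hypothetical neutron hit position (pixels)
    sigma = 2.0                             # hypothetical LRF width
    counts = 100.0 * np.exp(-(pixels - true_center) ** 2 / (2.0 * sigma ** 2))
    est = gaussian_center(pixels, counts)
    ```

    With noiseless data the recovered center is exact; with Poisson noise, weighting the fit by counts is the usual refinement.
    
    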

  5. Cone photoreceptor sensitivities and unique hue chromatic responses: correlation and causation imply the physiological basis of unique hues.

    PubMed

    Pridmore, Ralph W

    2013-01-01

    This paper relates major functions at the start and end of the color vision process. The process starts with three cone photoreceptors transducing light into electrical responses. Cone sensitivities were once expected to be Red Green Blue color matching functions (to mix colors) but microspectrometry proved otherwise: they instead peak in yellowish, greenish, and blueish hues. These physiological functions are an enigma, unmatched with any set of psychophysical (behavioral) functions. The end-result of the visual process is color sensation, whose essential percepts are unique (or pure) hues red, yellow, green, blue. Unique hues cannot be described by other hues, but can describe all other hues, e.g., that hue is reddish-blue. They are carried by four opponent chromatic response curves but the literature does not specify whether each curve represents a range of hues or only one hue (a unique) over its wavelength range. Here the latter is demonstrated, confirming that opponent chromatic responses define, and may be termed, unique hue chromatic responses. These psychophysical functions also are an enigma, unmatched with any physiological functions or basis. Here both enigmas are solved by demonstrating the three cone sensitivity curves and the three spectral chromatic response curves are almost identical sets (Pearson correlation coefficients r from 0.95-1.0) in peak wavelengths, curve shapes, math functions, and curve crossover wavelengths, though previously unrecognized due to presentation of curves in different formats, e.g., log, linear. (Red chromatic response curve is largely nonspectral and thus derives from two cones.) Close correlation combined with deterministic causation implies cones are the physiological basis of unique hues. This match of three physiological and three psychophysical functions is unique in color vision.

  6. Cone Photoreceptor Sensitivities and Unique Hue Chromatic Responses: Correlation and Causation Imply the Physiological Basis of Unique Hues

    PubMed Central

    Pridmore, Ralph W.

    2013-01-01

    This paper relates major functions at the start and end of the color vision process. The process starts with three cone photoreceptors transducing light into electrical responses. Cone sensitivities were once expected to be Red Green Blue color matching functions (to mix colors) but microspectrometry proved otherwise: they instead peak in yellowish, greenish, and blueish hues. These physiological functions are an enigma, unmatched with any set of psychophysical (behavioral) functions. The end-result of the visual process is color sensation, whose essential percepts are unique (or pure) hues red, yellow, green, blue. Unique hues cannot be described by other hues, but can describe all other hues, e.g., that hue is reddish-blue. They are carried by four opponent chromatic response curves but the literature does not specify whether each curve represents a range of hues or only one hue (a unique) over its wavelength range. Here the latter is demonstrated, confirming that opponent chromatic responses define, and may be termed, unique hue chromatic responses. These psychophysical functions also are an enigma, unmatched with any physiological functions or basis. Here both enigmas are solved by demonstrating the three cone sensitivity curves and the three spectral chromatic response curves are almost identical sets (Pearson correlation coefficients r from 0.95–1.0) in peak wavelengths, curve shapes, math functions, and curve crossover wavelengths, though previously unrecognized due to presentation of curves in different formats, e.g., log, linear. (Red chromatic response curve is largely nonspectral and thus derives from two cones.) Close correlation combined with deterministic causation implies cones are the physiological basis of unique hues. This match of three physiological and three psychophysical functions is unique in color vision. PMID:24204755

  7. Well-conditioned fractional collocation methods using fractional Birkhoff interpolation basis

    NASA Astrophysics Data System (ADS)

    Jiao, Yujian; Wang, Li-Lian; Huang, Can

    2016-01-01

    The purpose of this paper is twofold. Firstly, we provide explicit and compact formulas for computing both Caputo and (modified) Riemann-Liouville (RL) fractional pseudospectral differentiation matrices (F-PSDMs) of any order at general Jacobi-Gauss-Lobatto (JGL) points. We show that in the Caputo case, it suffices to compute the F-PSDM of order μ ∈ (0 , 1) to compute that of any order k + μ with integer k ≥ 0, while in the modified RL case, it is only necessary to evaluate a fractional integral matrix of order μ ∈ (0 , 1). Secondly, we introduce suitable fractional JGL Birkhoff interpolation problems leading to new interpolation polynomial basis functions with remarkable properties: (i) the matrix generated from the new basis yields the exact inverse of the F-PSDM at "interior" JGL points; (ii) the matrix of the highest fractional derivative in a collocation scheme under the new basis is diagonal; and (iii) the resulting linear system is well-conditioned in the Caputo case, while in the modified RL case, the eigenvalues of the coefficient matrix are highly concentrated. In both cases, the linear systems of the collocation schemes using the new basis can be solved by an iterative solver within a few iterations. Notably, the inverse can be computed in a very stable manner, so this offers optimal preconditioners for usual fractional collocation methods for fractional differential equations (FDEs). It is also noteworthy that the choice of certain special JGL points, with parameters related to the order of the equations, can ease the implementation. We highlight that the use of Bateman's fractional integral formulas and fast transforms between Jacobi polynomials with different parameters is essential for our algorithm development.

  8. Association between the Type of Workplace and Lung Function in Copper Miners

    PubMed Central

    Gruszczyński, Leszek; Wojakowska, Anna; Ścieszka, Marek; Turczyn, Barbara; Schmidt, Edward

    2016-01-01

The aim of the analysis was to retrospectively assess changes in lung function in copper miners depending on the type of workplace. In groups of 225 operators, 188 welders, and 475 representatives of other jobs, spirometry was performed at the start of employment and subsequently after 10, 20, and 25 years of work. Spirometry Longitudinal Data Analysis software was used to estimate changes in group means for FEV1 and FVC. Multiple linear regression analysis was used to assess the association between workplace and lung function. Lung function assessed on the basis of the calculated longitudinal FEV1 (FVC) decline was similar in all studied groups. However, the multiple linear regression model used in the cross-sectional analysis revealed an association between workplace and lung function. In the group of welders, FEF75 was lower in comparison to operators and other miners as early as after 10 years of work. Simultaneously, in smoking welders, the FEV1/FVC ratio was lower than in nonsmokers (p < 0.05). Interactions between type of workplace and smoking (p < 0.05) in their effect on FVC, FEV1, PEF, and FEF50 were shown. Among underground copper miners, the group of smoking welders is especially threatened by impairment of lung ventilatory function. PMID:27274987
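The cross-sectional analysis above rests on a multiple linear regression with a workplace × smoking interaction term. A minimal sketch on synthetic data (variable names, coefficients, and noise levels are illustrative, not the study's):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
# Hypothetical binary predictors: welder (vs. other job) and smoker
welder = rng.integers(0, 2, n)
smoker = rng.integers(0, 2, n)
# Synthetic FEV1 response with an interaction effect (illustrative values)
fev1 = (4.0 - 0.2 * welder - 0.3 * smoker
        - 0.25 * welder * smoker + rng.normal(0.0, 0.1, n))

# Design matrix: intercept, main effects, and the interaction term
X = np.column_stack([np.ones(n), welder, smoker, welder * smoker])
beta, *_ = np.linalg.lstsq(X, fev1, rcond=None)
print(beta)  # estimates of intercept, welder, smoker, interaction effects
```

A significantly negative interaction coefficient is the kind of evidence the study reports for smoking welders.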

  9. Basis set and electron correlation effects on the polarizability and second hyperpolarizability of model open-shell π-conjugated systems

    NASA Astrophysics Data System (ADS)

    Champagne, Benoı̂t; Botek, Edith; Nakano, Masayoshi; Nitta, Tomoshige; Yamaguchi, Kizashi

    2005-03-01

The basis set and electron correlation effects on the static polarizability (α) and second hyperpolarizability (γ) are investigated ab initio for two model open-shell π-conjugated systems, the C5H7 radical and the C6H8 radical cation in their doublet state. Basis set investigations show that the linear and nonlinear responses of the radical cation require a less extended basis set than those of its neutral analog. Indeed, double-zeta-type basis sets supplemented by a set of d polarization functions but no diffuse functions already provide accurate (hyper)polarizabilities for C6H8, whereas diffuse functions, in particular p diffuse functions, are compulsory for C5H7. In addition to the 6-31G*+pd basis set, basis sets obtained by removing unnecessary diffuse functions from the augmented correlation consistent polarized valence double zeta basis set have been shown to provide (hyper)polarizability values of similar quality to more extended basis sets such as augmented correlation consistent polarized valence triple zeta and doubly augmented correlation consistent polarized valence double zeta. Using the selected atomic basis sets, the (hyper)polarizabilities of these two model compounds are calculated at different levels of approximation in order to assess the impact of including electron correlation. As a function of the method of calculation, antiparallel and parallel variations have been demonstrated for α and γ of the two model compounds, respectively.
For the polarizability, the unrestricted Hartree-Fock and unrestricted second-order Møller-Plesset methods bracket the reference value obtained at the unrestricted coupled cluster singles and doubles level with a perturbative inclusion of the triples, whereas the projected unrestricted second-order Møller-Plesset results are in much closer agreement with the unrestricted coupled cluster values than the projected unrestricted Hartree-Fock results. Moreover, the differences between the restricted open-shell Hartree-Fock and restricted open-shell second-order Møller-Plesset methods are small. Concerning the second hyperpolarizability, the unrestricted Hartree-Fock and unrestricted second-order Møller-Plesset values remain of similar quality, while using spin-projected schemes fails for the charged system but performs nicely for the neutral one. The restricted open-shell schemes, and especially the restricted open-shell second-order Møller-Plesset method, provide for both compounds γ values close to the results obtained at the unrestricted coupled cluster level including singles and doubles with a perturbative inclusion of the triples. Thus, to obtain well-converged α and γ values at low-order electron correlation levels, the removal of spin contamination is a necessary but not a sufficient condition. Density-functional theory calculations of α and γ have also been carried out using several exchange-correlation functionals. Those employing hybrid exchange-correlation functionals have been shown to reproduce fairly well the reference coupled cluster polarizability and second hyperpolarizability values. In addition, inclusion of Hartree-Fock exchange is of major importance for determining accurate polarizabilities, whereas for the second hyperpolarizability the gradient corrections are large.

  10. Color constancy: enhancing von Kries adaption via sensor transformations

    NASA Astrophysics Data System (ADS)

    Finlayson, Graham D.; Drew, Mark S.; Funt, Brian V.

    1993-09-01

    Von Kries adaptation has long been considered a reasonable vehicle for color constancy. Since the color constancy performance attainable via the von Kries rule strongly depends on the spectral response characteristics of the human cones, we consider the possibility of enhancing von Kries performance by constructing new `sensors' as linear combinations of the fixed cone sensitivity functions. We show that if surface reflectances are well-modeled by 3 basis functions and illuminants by 2 basis functions then there exists a set of new sensors for which von Kries adaptation can yield perfect color constancy. These new sensors can (like the cones) be described as long-, medium-, and short-wave sensitive; however, both the new long- and medium-wave sensors have sharpened sensitivities -- their support is more concentrated. The new short-wave sensor remains relatively unchanged. A similar sharpening of cone sensitivities has previously been observed in test and field spectral sensitivities measured for the human eye. We present simulation results demonstrating improved von Kries performance using the new sensors even when the restrictions on the illumination and reflectance are relaxed.
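Von Kries adaptation is a diagonal gain applied per sensor channel; the idea above is to apply it in a linearly transformed ("sharpened") sensor space. A sketch with an arbitrary illustrative 3 × 3 transform T (all numbers hypothetical, not fitted sensors):

```python
import numpy as np

# Hypothetical 3x3 "sharpening" transform T (illustrative numbers only);
# von Kries adaptation is a diagonal gain applied in the T-transformed space.
T = np.array([[ 1.2, -0.3,  0.0],
              [-0.2,  1.3, -0.1],
              [ 0.0,  0.0,  1.0]])

def von_kries(resp, white_src, white_dst, T):
    """Map sensor responses recorded under a source illuminant to the
    corresponding responses under a destination illuminant by diagonal
    scaling in the transformed sensor space."""
    gains = (T @ white_dst) / (T @ white_src)      # per-channel gains
    return np.linalg.solve(T, gains * (T @ resp))

resp = np.array([0.5, 0.4, 0.3])       # illustrative cone responses
white_a = np.array([1.0, 0.9, 0.7])    # white point under source illuminant
white_b = np.array([0.9, 1.0, 1.1])    # white point under destination
print(von_kries(resp, white_a, white_b, T))
```

By construction the source white maps exactly to the destination white; how well other surfaces map is exactly what the choice of T (the paper's sharpened sensors) is meant to improve.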

  11. Geometric Mechanics for Continuous Swimmers on Granular Material

    NASA Astrophysics Data System (ADS)

Dai, Jin; Faraji, Hossein; Schiebel, Perrin; Gong, Chaohui; Travers, Matthew; Hatton, Ross; Goldman, Daniel; Choset, Howie; Biorobotics Lab Collaboration; Laboratory for Robotics and Applied Mechanics (LRAM) Collaboration; Complex Rheology and Biomechanics Lab Collaboration

Animal experiments have shown that Chionactis occipitalis (N = 10) undulating effectively on granular substrates exhibits a particular set of waveforms which can be approximated by a sinusoidal variation in curvature, i.e., a serpenoid wave. Furthermore, all snakes tested used a narrow subset of all available waveform parameters, measured as a relative curvature equal to 5.0 ± 0.3 and a number of waves on the body equal to 1.8 ± 0.1. We hypothesize that the serpenoid wave with a particular choice of parameters offers a distinct benefit for locomotion on granular material. To test this hypothesis, we used a physical model (snake robot) to empirically explore the space of serpenoid motions, which is linearly spanned by two independent continuous serpenoid basis functions. The empirically derived height function map, a geometric mechanics tool for analyzing movements of cyclic gaits, showed that displacement per gait cycle increases with amplitude at small amplitudes, but reaches a peak value of 0.55 body-lengths at a relative curvature equal to 6.0. This work signifies that, with shape basis functions, geometric mechanics tools can be extended to continuous swimmers.
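A serpenoid wave prescribes sinusoidal curvature along the body; integrating the curvature once gives the tangent angle and twice gives the backbone shape. A sketch using parameters near the reported ranges (the discretization and function names are illustrative):

```python
import numpy as np

def serpenoid_shape(relative_curvature=5.0, num_waves=1.8, n_pts=500):
    """Recover the backbone shape of a unit-length body whose curvature
    varies sinusoidally along the body (a serpenoid wave). Integration is
    a simple cumulative sum; parameter names are illustrative."""
    s = np.linspace(0.0, 1.0, n_pts)
    ds = s[1] - s[0]
    kappa = relative_curvature * np.sin(2.0 * np.pi * num_waves * s)
    theta = np.cumsum(kappa) * ds              # tangent angle = integral of kappa
    x = np.cumsum(np.cos(theta)) * ds          # backbone x-coordinates
    y = np.cumsum(np.sin(theta)) * ds          # backbone y-coordinates
    return x, y

x, y = serpenoid_shape()   # parameters near the observed snake waveforms
```

Varying `relative_curvature` and `num_waves` sweeps the two-dimensional space of serpenoid motions that the robot experiments explore.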

  12. Constraint-Based Abstract Semantics for Temporal Logic: A Direct Approach to Design and Implementation

    NASA Astrophysics Data System (ADS)

    Banda, Gourinath; Gallagher, John P.

Abstract interpretation provides a practical approach to verifying properties of infinite-state systems. We apply the framework of abstract interpretation to derive an abstract semantic function for the modal μ-calculus, which is the basis for abstract model checking. The abstract semantic function is constructed directly from the standard concrete semantics together with a Galois connection between the concrete state-space and an abstract domain. There is no need for mixed or modal transition systems to abstract arbitrary temporal properties, as in previous work in the area of abstract model checking. Using the modal μ-calculus to implement CTL, the abstract semantics gives an over-approximation of the set of states in which an arbitrary CTL formula holds. Then we show that this leads directly to an effective implementation of an abstract model checking algorithm for CTL using abstract domains based on linear constraints. The implementation of the abstract semantic function makes use of an SMT solver. We describe an implemented system for proving properties of linear hybrid automata and give some experimental results.

  13. Should the SCOPA-COG be modified? A Rasch analysis perspective.

    PubMed

    Forjaz, M J; Frades-Payo, B; Rodriguez-Blazquez, C; Ayala, A; Martinez-Martin, P

    2010-02-01

The SCales for Outcomes in PArkinson's disease-Cognition (SCOPA-COG) is a specific measure of cognitive function for Parkinson's disease (PD) patients. Previous studies, under the frame of classic test theory, indicate satisfactory psychometric properties. The Rasch model, an item response theory approach, provides new information about the scale and yields scores on a linear scale. This study aims at analysing the SCOPA-COG according to the Rasch model and, on the basis of the results, suggesting modifications to the SCOPA-COG. Fit to the Rasch model was analysed using a sample of 384 PD patients. A good fit was obtained after rescoring for disordered thresholds. The person separation index, a reliability measure, was 0.83. Differential item functioning was observed by age for three items and by gender for one item. The SCOPA-COG is a unidimensional measure of global cognitive function in PD patients, with good scale targeting and no empirical evidence for use of the subscale scores. Its adequate reliability and internal construct validity were supported. The SCOPA-COG, with the proposed scoring scheme, generates true linear interval scores.

  14. A numerical technique for linear elliptic partial differential equations in polygonal domains.

    PubMed

    Hashemzadeh, P; Fokas, A S; Smitheman, S A

    2015-03-08

Integral representations for the solution of linear elliptic partial differential equations (PDEs) can be obtained using Green's theorem. However, these representations involve both the Dirichlet and the Neumann values on the boundary, and for a well-posed boundary-value problem (BVP) one of these functions is unknown. A new transform method for solving BVPs for linear and integrable nonlinear PDEs, usually referred to as the unified transform (or the Fokas transform), was introduced by the second author in the late 1990s. For linear elliptic PDEs, this method can be considered the analogue of the Green's function approach, but now formulated in the complex Fourier plane instead of the physical plane. It employs two global relations, also formulated in the Fourier plane, which couple the Dirichlet and the Neumann boundary values. These relations can be used to characterize the unknown boundary values in terms of the given boundary data, yielding an elegant approach for determining the Dirichlet-to-Neumann map. The numerical implementation of the unified transform can be considered the counterpart in the Fourier plane of the well-known boundary integral method, which is formulated in the physical plane. For this implementation, one must choose (i) a suitable basis for expanding the unknown functions and (ii) an appropriate set of complex values, which we refer to as collocation points, at which to evaluate the global relations. Here, by employing a variety of examples we present simple guidelines for how the above choices can be made. Furthermore, we provide concrete rules for choosing the collocation points so that the condition number of the matrix of the associated linear system remains low.

  15. Adaptive control of stochastic linear systems with unknown parameters. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Ku, R. T.

    1972-01-01

The problem of optimal control of a linear discrete-time stochastic dynamical system with unknown and, possibly, stochastically varying parameters is considered on the basis of noisy measurements. It is desired to minimize the expected value of a quadratic cost functional. Since the simultaneous estimation of the state and plant parameters is a nonlinear filtering problem, the extended Kalman filter algorithm is used. Several qualitative and asymptotic properties of the open loop feedback optimal control and the enforced separation scheme are discussed. Simulation results via the Monte Carlo method show that, in terms of the performance measure, for stable systems the open loop feedback optimal control system is slightly better than the enforced separation scheme, while for unstable systems the latter scheme is far better.
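The estimation half of the scheme — augment the state with the unknown parameter and run an extended Kalman filter on the nonlinear joint problem — can be sketched for a scalar plant (the parameter value, noise levels, and constant drive are illustrative; no controller is included):

```python
import numpy as np

# Sketch: joint estimation of state x and unknown parameter a for the
# scalar plant x[k+1] = a*x[k] + 0.1 + w[k], measured as y[k] = x[k] + v[k],
# via an extended Kalman filter on the augmented state z = [x, a].
rng = np.random.default_rng(1)
a_true, q, r = 0.8, 0.01, 0.01
x = 1.0
z = np.array([0.0, 0.5])              # initial guesses for x and a
P = np.diag([1.0, 1.0])
Q = np.diag([q, 1e-6])                # slow random walk on the parameter
H = np.array([[1.0, 0.0]])            # only x is measured

for _ in range(400):
    x = a_true * x + 0.1 + rng.normal(0.0, np.sqrt(q))   # simulate plant
    y = x + rng.normal(0.0, np.sqrt(r))                  # noisy measurement
    # Predict: f(z) = [a_est*x_est + 0.1, a_est], with Jacobian F
    F = np.array([[z[1], z[0]], [0.0, 1.0]])
    z = np.array([z[1] * z[0] + 0.1, z[1]])
    P = F @ P @ F.T + Q
    # Update with the measurement y
    S = (H @ P @ H.T).item() + r
    K = (P @ H.T) / S
    z = z + (K * (y - z[0])).ravel()
    P = (np.eye(2) - K @ H) @ P

print(z[1])  # estimate of the unknown parameter a
```

The product term a·x in the augmented dynamics is what makes the joint problem nonlinear and motivates the EKF linearization.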

  16. Variable structure control of nonlinear systems through simplified uncertain models

    NASA Technical Reports Server (NTRS)

    Sira-Ramirez, Hebertt

    1986-01-01

    A variable structure control approach is presented for the robust stabilization of feedback equivalent nonlinear systems whose proposed model lies in the same structural orbit of a linear system in Brunovsky's canonical form. An attempt to linearize exactly the nonlinear plant on the basis of the feedback control law derived for the available model results in a nonlinearly perturbed canonical system for the expanded class of possible equivalent control functions. Conservatism tends to grow as modeling errors become larger. In order to preserve the internal controllability structure of the plant, it is proposed that model simplification be carried out on the open-loop-transformed system. As an example, a controller is developed for a single link manipulator with an elastic joint.

  17. Neural network approach to quantum-chemistry data: accurate prediction of density functional theory energies.

    PubMed

    Balabin, Roman M; Lomakina, Ekaterina I

    2009-08-21

An artificial neural network (ANN) approach has been applied to estimate the density functional theory (DFT) energy with a large basis set using lower-level energy values and molecular descriptors. A total of 208 different molecules were used for the ANN training, cross validation, and testing by applying the BLYP, B3LYP, and BMK density functionals. Hartree-Fock results were reported for comparison. Furthermore, constitutional molecular descriptors (CDs) and quantum-chemical molecular descriptors (QDs) were used for building the calibration model. The neural network structure optimization, leading to four to five hidden neurons, was also carried out. The use of several low-level energy values was found to greatly reduce the prediction error. The expected error, as mean absolute deviation, for the ANN approximation to DFT energies was 0.6 ± 0.2 kcal/mol. In addition, the comparison of the different density functionals with the basis sets and the comparison with multiple linear regression results were also provided. The CDs were found to overcome the limitations of the QDs. Furthermore, an effective ANN model for DFT/6-311G(3df,3pd) and DFT/6-311G(2df,2pd) energy estimation was developed, and benchmark results were provided.
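The multiple-linear-regression baseline against which the ANN is compared can be sketched on synthetic energies (all values below are made up for illustration, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 208
# Synthetic "low-level" energies and a "high-level" target (illustrative)
e_low1 = rng.normal(-100.0, 5.0, n)
e_low2 = e_low1 + rng.normal(-0.5, 0.1, n)
e_high = 0.7 * e_low1 + 0.3 * e_low2 + rng.normal(-0.2, 0.02, n)

# Linear calibration: e_high ~ b0 + b1*e_low1 + b2*e_low2
X = np.column_stack([np.ones(n), e_low1, e_low2])
beta, *_ = np.linalg.lstsq(X, e_high, rcond=None)
mad = np.mean(np.abs(X @ beta - e_high))
print(mad)  # mean absolute deviation of the calibration
```

An ANN replaces the linear map with a small nonlinear one; the paper's point is that the nonlinear calibration, plus descriptors, reduces the residual error further.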

  18. Triphenylamine-based fluorescent NLO phores with ICT characteristics: Solvatochromic and theoretical study

    NASA Astrophysics Data System (ADS)

    Katariya, Santosh B.; Patil, Dinesh; Rhyman, Lydia; Alswaidan, Ibrahim A.; Ramasami, Ponnadurai; Sekar, Nagaiyan

    2017-12-01

The static first and second hyperpolarizabilities and their related properties were calculated for triphenylamine-based "push-pull" dyes using the B3LYP, CAM-B3LYP and BHHLYP functionals in conjunction with the 6-311+G(d,p) basis set. The electronic couplings for the electron transfer reaction of the dyes were calculated with the generalized Mulliken-Hush method. The results obtained were correlated with the polarizability parameter αCT, the first hyperpolarizability parameter βCT, and the solvatochromic descriptor ⟨γ⟩SD obtained by the solvatochromic method. The dyes studied show a high total first-order hyperpolarizability (70-238 times) and second-order hyperpolarizability (412-778 times) compared to urea. Among the three functionals, CAM-B3LYP and BHHLYP show hyperpolarizability values closer to the experimental values. Experimental absorption and emission wavelengths measured for all the synthesized dyes are in good agreement with those predicted using time-dependent density functional theory. The theoretical examination of non-linear optical properties was performed on the key parameters of polarizability and hyperpolarizability. A remarkable increase in non-linear optical response is observed on insertion of a benzothiazole unit compared to a benzimidazole unit.

  19. Tensor numerical methods in quantum chemistry: from Hartree-Fock to excitation energies.

    PubMed

    Khoromskaia, Venera; Khoromskij, Boris N

    2015-12-21

We review the recent successes of the grid-based tensor numerical methods and discuss their prospects in real-space electronic structure calculations. These methods, based on the low-rank representation of multidimensional functions and integral operators, first appeared as an accurate tensor calculus for the 3D Hartree potential using 1D-complexity operations, and have evolved into an entirely grid-based, tensor-structured 3D Hartree-Fock eigenvalue solver. It benefits from tensor calculation of the core Hamiltonian and two-electron integrals (TEI) in O(n log n) complexity using the rank-structured approximation of basis functions, electron densities, and convolution integral operators, all represented on 3D n × n × n Cartesian grids. The algorithm for calculating the TEI tensor in the form of a Cholesky decomposition is based on multiple factorizations using an algebraic 1D "density fitting" scheme, which yields an almost irreducible number of product basis functions involved in the 3D convolution integrals, depending on a threshold ε > 0. The basis functions are not restricted to separable Gaussians, since the analytical integration is substituted by high-precision tensor-structured numerical quadratures. The tensor approaches to post-Hartree-Fock calculations for the MP2 energy correction and for the Bethe-Salpeter excitation energies, based on low-rank factorizations and the reduced basis method, were recently introduced. Another direction is towards a tensor-based Hartree-Fock numerical scheme for finite lattices, where one of the numerical challenges is the summation of electrostatic potentials of a large number of nuclei. The 3D grid-based tensor method for calculating a potential sum on an L × L × L lattice requires computational work linear in L, i.e., O(L), instead of the usual O(L^3 log L) scaling of the Ewald-type approaches.
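The core idea — low-rank representation of multidimensional functions sampled on grids — can be illustrated in 2D: a smooth kernel on an n × n grid is captured by a rank-r factorization with r much smaller than n (the kernel and tolerance below are illustrative):

```python
import numpy as np

# Low-rank grid representation sketch: a smooth two-variable function
# sampled on an n x n grid has rapidly decaying singular values, so a
# rank-r factorization with r << n reproduces it to high accuracy.
n = 200
x = np.linspace(-3.0, 3.0, n)
X, Y = np.meshgrid(x, x)
F = np.exp(-0.5 * (X - Y) ** 2)            # illustrative smooth kernel

U, s, Vt = np.linalg.svd(F)
r = int(np.sum(s > 1e-8 * s[0]))           # numerical rank at tol 1e-8
F_r = (U[:, :r] * s[:r]) @ Vt[:r]          # rank-r reconstruction
print(r, np.max(np.abs(F - F_r)))          # r << 200, tiny error
```

The tensor formats in the record generalize this factorization idea to 3D grids, where the storage drops from O(n^3) to roughly O(rn).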

  20. Generalized Functions for the Fractional Calculus

    NASA Technical Reports Server (NTRS)

    Lorenzo, Carl F.; Hartley, Tom T.

    1999-01-01

Previous papers have used two important functions for the solution of fractional order differential equations: the Mittag-Leffler function E_q[at^q] (1903a, 1903b, 1905) and the F-function F_q[a,t] of Hartley & Lorenzo (1998). These functions provided direct solutions and important understanding for the fundamental linear fractional order differential equation and for the related initial value problem (Hartley and Lorenzo, 1999). This paper examines related functions and their Laplace transforms. Presented for consideration are two generalized functions, the R-function and the G-function, useful in analysis and as a basis for computation in the fractional calculus. The R-function is unique in that it contains all of the derivatives and integrals of the F-function. The R-function also returns itself on qth-order differintegration. An example application of the R-function is provided. A further generalization of the R-function, called the G-function, brings in the effects of repeated and partially repeated fractional poles.
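The one-parameter Mittag-Leffler function mentioned above can be evaluated directly from its defining series E_q(x) = Σ x^k / Γ(qk + 1) for moderate arguments; a minimal sketch (truncated series, not suitable for large |x|):

```python
import math

def mittag_leffler(q, x, terms=50):
    """Truncated-series evaluation of the one-parameter Mittag-Leffler
    function E_q(x) = sum_{k>=0} x**k / Gamma(q*k + 1); a minimal sketch,
    adequate only for moderate |x| and q*terms below Gamma's overflow."""
    return sum(x ** k / math.gamma(q * k + 1) for k in range(terms))

# For q = 1 the series reduces to the ordinary exponential: E_1(x) = exp(x)
print(mittag_leffler(1.0, 1.0))  # ≈ e = 2.71828...
```

For q = 1/2 the series recovers the known closed form E_{1/2}(x) = exp(x²) erfc(−x), which is a convenient correctness check.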

  1. A probabilistic framework to infer brain functional connectivity from anatomical connections.

    PubMed

    Deligianni, Fani; Varoquaux, Gael; Thirion, Bertrand; Robinson, Emma; Sharp, David J; Edwards, A David; Rueckert, Daniel

    2011-01-01

    We present a novel probabilistic framework to learn across several subjects a mapping from brain anatomical connectivity to functional connectivity, i.e. the covariance structure of brain activity. This prediction problem must be formulated as a structured-output learning task, as the predicted parameters are strongly correlated. We introduce a model selection framework based on cross-validation with a parametrization-independent loss function suitable to the manifold of covariance matrices. Our model is based on constraining the conditional independence structure of functional activity by the anatomical connectivity. Subsequently, we learn a linear predictor of a stationary multivariate autoregressive model. This natural parameterization of functional connectivity also enforces the positive-definiteness of the predicted covariance and thus matches the structure of the output space. Our results show that functional connectivity can be explained by anatomical connectivity on a rigorous statistical basis, and that a proper model of functional connectivity is essential to assess this link.

  2. Interaction Models for Functional Regression.

    PubMed

    Usset, Joseph; Staicu, Ana-Maria; Maity, Arnab

    2016-02-01

    A functional regression model with a scalar response and multiple functional predictors is proposed that accommodates two-way interactions in addition to their main effects. The proposed estimation procedure models the main effects using penalized regression splines, and the interaction effect by a tensor product basis. Extensions to generalized linear models and data observed on sparse grids or with measurement error are presented. A hypothesis testing procedure for the functional interaction effect is described. The proposed method can be easily implemented through existing software. Numerical studies show that fitting an additive model in the presence of interaction leads to both poor estimation performance and lost prediction power, while fitting an interaction model where there is in fact no interaction leads to negligible losses. The methodology is illustrated on the AneuRisk65 study data.
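The interaction term built from a tensor product basis can be sketched with simple polynomial bases standing in for the penalized regression splines of the paper (bases, coefficients, and noise are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 100, 4
x1 = rng.uniform(-1.0, 1.0, n)
x2 = rng.uniform(-1.0, 1.0, n)

# Simple polynomial bases standing in for penalized regression splines
B1 = np.vander(x1, k, increasing=True)     # basis in the first predictor
B2 = np.vander(x2, k, increasing=True)     # basis in the second predictor

# Row-wise tensor product of the two bases models the two-way interaction
B12 = np.einsum('ij,il->ijl', B1, B2).reshape(n, k * k)

# Synthetic response with a genuine interaction (coefficients illustrative)
y = 1.0 + x1 + x2 + 2.0 * x1 * x2 + rng.normal(0.0, 0.05, n)
X = np.hstack([B1, B2, B12])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
resid_mse = np.mean((y - X @ coef) ** 2)
print(resid_mse)  # near the noise floor: the interaction is captured
```

Dropping the `B12` block from `X` is the "additive model" case; refitting without it shows the inflated residual the paper warns about when an interaction is truly present.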

  3. Adaptive wavelet collocation methods for initial value boundary problems of nonlinear PDE's

    NASA Technical Reports Server (NTRS)

    Cai, Wei; Wang, Jian-Zhong

    1993-01-01

    We have designed a cubic spline wavelet decomposition for the Sobolev space H(sup 2)(sub 0)(I) where I is a bounded interval. Based on a special 'point-wise orthogonality' of the wavelet basis functions, a fast Discrete Wavelet Transform (DWT) is constructed. This DWT transform will map discrete samples of a function to its wavelet expansion coefficients in O(N log N) operations. Using this transform, we propose a collocation method for the initial value boundary problem of nonlinear PDE's. Then, we test the efficiency of the DWT transform and apply the collocation method to solve linear and nonlinear PDE's.
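A fast wavelet transform of this kind maps N samples to wavelet coefficients by repeated filtering and downsampling. A minimal sketch using the Haar wavelet as an orthonormal stand-in for the cubic spline wavelets of the paper:

```python
import numpy as np

def haar_dwt(signal):
    """Full orthonormal Haar wavelet transform (input length must be a
    power of two); a minimal stand-in for the cubic spline wavelet DWT
    described above. Each level costs O(N), halving the data each time."""
    c = np.asarray(signal, dtype=float)
    details = []
    while len(c) > 1:
        avg = (c[0::2] + c[1::2]) / np.sqrt(2.0)   # coarse approximation
        det = (c[0::2] - c[1::2]) / np.sqrt(2.0)   # detail coefficients
        details.append(det)
        c = avg
    details.append(c)                               # coarsest scale last
    return np.concatenate(details[::-1])

x = np.array([4.0, 2.0, 5.0, 7.0, 1.0, 3.0, 6.0, 8.0])
w = haar_dwt(x)
print(np.linalg.norm(w) - np.linalg.norm(x))  # ~0: orthonormal transform
```

As in the paper's collocation setting, smooth regions of the signal produce small detail coefficients, which is what makes adaptive truncation of the expansion effective.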

  4. PAREMD: A parallel program for the evaluation of momentum space properties of atoms and molecules

    NASA Astrophysics Data System (ADS)

    Meena, Deep Raj; Gadre, Shridhar R.; Balanarayan, P.

    2018-03-01

The present work describes a code for evaluating the electron momentum density (EMD), its moments, and the associated Shannon information entropy for a multi-electron molecular system. The code works specifically with electronic wave functions obtained from traditional electronic structure packages such as GAMESS and GAUSSIAN. For the momentum space orbitals, the general expression for Gaussian basis sets in position space is analytically Fourier transformed to momentum space Gaussian basis functions. The molecular orbital coefficients of the wave function are taken as an input from the output file of the electronic structure calculation. The analytic expressions for the EMD are evaluated over a fine grid, and the accuracy of the code is verified by a normalization check and a numerical kinetic energy evaluation, which is compared with the analytic kinetic energy given by the electronic structure package. Apart from the electron momentum density, the electron density in position space has also been integrated into this package. The program is written in C++ and is executed through a shell script. It is also tuned for multicore machines with shared memory through OpenMP. The program has been tested for a variety of molecules and correlated methods such as CISD, Møller-Plesset second-order (MP2) theory, and density functional methods. For correlated methods, the PAREMD program uses natural spin orbitals as an input. The program has been benchmarked for a variety of Gaussian basis sets for different molecules, showing a linear speedup on a parallel architecture.
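The analytic Fourier transform of a Gaussian basis function that such codes rely on is again a Gaussian; in 1D, ∫ exp(−a x²) exp(−i p x) dx = √(π/a) exp(−p²/(4a)). A quick numerical check of this identity (grid parameters illustrative):

```python
import numpy as np

# Numerical check that the momentum-space form of a 1D Gaussian basis
# function matches the analytic Fourier transform used by such codes.
a, p = 0.7, 1.3
x = np.linspace(-20.0, 20.0, 40001)
dx = x[1] - x[0]
numeric = np.sum(np.exp(-a * x ** 2) * np.exp(-1j * p * x)) * dx
analytic = np.sqrt(np.pi / a) * np.exp(-p ** 2 / (4 * a))
print(abs(numeric - analytic))  # ~0
```

Because the integrand decays rapidly and the grid is fine, the simple Riemann sum reproduces the analytic transform essentially to machine precision.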

  5. Rapid iterative reanalysis for automated design

    NASA Technical Reports Server (NTRS)

    Bhatia, K. G.

    1973-01-01

A method for iterative reanalysis in automated structural design is presented for a finite-element analysis using the direct stiffness approach. A basic feature of the method is that the generalized stiffness and inertia matrices are expressed as functions of structural design parameters, and these generalized matrices are expanded in Taylor series about the initial design. Only the linear terms are retained in the expansions. The method is approximate because it uses static condensation, modal reduction, and the linear Taylor series expansions. The exact linear representation of the expansions of the generalized matrices is also described, and a basis for the present method is established. Results of applications of the present method to the recalculation of the natural frequencies of two simple platelike structural models are presented and compared with results obtained by using a commonly applied analysis procedure as a reference. In general, the results are in good agreement. A comparison of the computer times required for the present method and the reference method indicated that the present method required substantially less time for reanalysis. Although the results presented are for relatively small-order problems, the present method will become more efficient relative to the reference method as the problem size increases. An extension of the present method to static reanalysis is described, and a basis for unifying the static and dynamic reanalysis procedures is presented.
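The reanalysis idea — expand the stiffness matrix in a Taylor series about the initial design and keep only the linear term — can be sketched for a toy 2-DOF spring-mass model (all matrices and values illustrative):

```python
import numpy as np

# First-order reanalysis sketch: the stiffness matrix of a toy 2-DOF model
# depends on a design parameter p, and the updated matrix is taken from a
# Taylor expansion about the initial design p0, keeping only the linear term.
def stiffness(p):
    return np.array([[p + 2.0, -p],
                     [-p, p + 1.0]])

M = np.diag([1.0, 2.0])                       # mass matrix
p0, dp = 1.0, 0.2
K0 = stiffness(p0)
dK = (stiffness(p0 + 1e-6) - K0) / 1e-6       # dK/dp by finite difference
K_lin = K0 + dK * dp                          # linear Taylor reanalysis

# Natural frequencies (squared) from the generalized eigenproblem K v = w^2 M v
w2_exact = np.sort(np.linalg.eigvals(np.linalg.solve(M, stiffness(p0 + dp))).real)
w2_approx = np.sort(np.linalg.eigvals(np.linalg.solve(M, K_lin)).real)
print(w2_exact, w2_approx)   # agree here because K is linear in p
```

In this toy model the stiffness really is linear in p, so the Taylor reanalysis is exact; for a realistic design-dependent model the neglected higher-order terms are the source of the method's approximation error.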

  6. Differential Dynamic Engagement within 24 SH3 Domain:Peptide Complexes Revealed by Co-Linear Chemical Shift Perturbation Analysis

    PubMed Central

    Stollar, Elliott J.; Lin, Hong; Davidson, Alan R.; Forman-Kay, Julie D.

    2012-01-01

There is increasing evidence for the functional importance of multiple dynamically populated states within single proteins. However, peptide binding by protein-protein interaction domains, such as the SH3 domain, has generally been considered to involve full engagement of the peptide with the binding surface with minimal dynamics, and simple methods to determine dynamics at the binding surface for multiple related complexes have not been described. We have used NMR spectroscopy combined with isothermal titration calorimetry to comprehensively examine the extent of engagement with the yeast Abp1p SH3 domain for 24 different peptides. Over one quarter of the domain residues display co-linear chemical shift perturbation (CCSP) behavior, in which the position of a given chemical shift in a complex is co-linear with the same chemical shift in the other complexes, providing evidence that each complex exists as a unique, dynamic, rapidly inter-converting ensemble. The extent to which the specificity-determining sub-surface of AbpSH3 is engaged, as judged by CCSP analysis, correlates with structural and thermodynamic measurements as well as with functional data, revealing the basis for significant structural and functional diversity amongst the related complexes. Thus, CCSP analysis can distinguish peptide complexes that may appear identical in terms of general structure and percent peptide occupancy but have significant local binding differences across the interface, affecting their ability to transmit conformational change across the domain and resulting in functional differences. PMID:23251481
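Co-linear chemical shift perturbation can be pictured as follows: if each complex is a fast-exchanging mixture of the same two end states, a residue's shifts across all complexes fall on a line between the end-state shifts. A synthetic sketch testing co-linearity via the variance captured by the first principal axis (all shift values are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
n_complexes = 24
frac = rng.uniform(0.0, 1.0, n_complexes)          # engaged fraction
end_free = np.array([8.1, 118.0])                  # hypothetical (1H, 15N) shifts, free-like state
end_bound = np.array([8.4, 121.0])                 # hypothetical shifts, fully engaged state
shifts = np.outer(1.0 - frac, end_free) + np.outer(frac, end_bound)
shifts += rng.normal(0.0, 0.01, shifts.shape)      # small experimental scatter

# Co-linearity score: fraction of variance along the first principal axis
c = shifts - shifts.mean(axis=0)
s = np.linalg.svd(c, compute_uv=False)
colinearity = s[0] ** 2 / np.sum(s ** 2)
print(colinearity)  # close to 1 for co-linear CCSP behavior
```

Residues whose shifts deviate strongly from a single line would instead indicate complex-specific local environments rather than a two-state exchange.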

  7. Utilization of group theory in studies of molecular clusters

    NASA Astrophysics Data System (ADS)

    Ocak, Mahir E.

The structure of the molecular symmetry group of molecular clusters was analyzed, and it is shown that the molecular symmetry group of a molecular cluster can be written as direct products and semidirect products of its subgroups. Symmetry adaptation of basis functions in direct product groups and semidirect product groups was considered in general, and the sequential symmetry adaptation procedure already known for direct product groups was extended to the case of semidirect product groups. Using the sequential symmetry adaptation procedure, a new method for calculating the VRT spectra of molecular clusters, named the Monomer Basis Representation (MBR) method, was developed. In the MBR method, the calculation starts with a single monomer with the purpose of obtaining an optimized basis for that monomer as a linear combination of some primitive basis functions. Then, an optimized basis for each identical monomer is generated from the optimized basis of this monomer. By using the optimized bases of the monomers, a basis is generated for the solution of the full problem, and the VRT spectra of the cluster are obtained by using this basis. Since an optimized basis is used for each monomer, with a much smaller size than the primitive basis from which the optimized bases are generated, the MBR method leads to an exponential reduction in the size of the basis that is required for the calculations. Application of the MBR method has been illustrated by calculating the VRT spectra of the water dimer using the SAPT-5st potential surface of Groenenboom et al. The results of the calculations are in good agreement with both the original calculations of Groenenboom et al. and with the experimental results. Comparing the size of the optimized basis with the size of the primitive basis, it can be said that the method works efficiently. Because of its efficiency, the MBR method can be used for studies of clusters bigger than dimers. Thus, the MBR method can be used for studying the many-body terms and for deriving accurate potential surfaces.

  8. Green's function multiple-scattering theory with a truncated basis set: An augmented-KKR formalism

    NASA Astrophysics Data System (ADS)

    Alam, Aftab; Khan, Suffian N.; Smirnov, A. V.; Nicholson, D. M.; Johnson, Duane D.

    2014-11-01

The Korringa-Kohn-Rostoker (KKR) Green's function, multiple-scattering theory is an efficient site-centered, electronic-structure technique for addressing an assembly of N scatterers. Wave functions are expanded in a spherical-wave basis on each scattering center and indexed up to a maximum orbital and azimuthal number Lmax = (l,m)max, while scattering matrices, which determine spectral properties, are truncated at Ltr = (l,m)tr, where phase shifts δl for l > ltr are negligible. Historically, Lmax is set equal to Ltr, which is correct for large enough Lmax but not computationally expedient; a better procedure retains higher-order (free-electron and single-site) contributions for Lmax > Ltr with δl for l > ltr set to zero [X.-G. Zhang and W. H. Butler, Phys. Rev. B 46, 7433 (1992), 10.1103/PhysRevB.46.7433]. We present a numerically efficient and accurate augmented-KKR Green's function formalism that solves the KKR equations by exact matrix inversion [an R^3 process with rank N(ltr + 1)^2] and includes higher-L contributions via linear algebra [an R^2 process with rank N(lmax + 1)^2]. The augmented-KKR approach yields properly normalized wave functions, numerically cheaper basis-set convergence, and a total charge density and electron count that agree with Lloyd's formula. We apply our formalism to fcc Cu, bcc Fe, and L1_0 CoPt and present the numerical results for accuracy and for the convergence of the total energies, Fermi energies, and magnetic moments versus Lmax for a given Ltr.

  9. Green's function multiple-scattering theory with a truncated basis set: An augmented-KKR formalism

    DOE PAGES

    Alam, Aftab; Khan, Suffian N.; Smirnov, A. V.; ...

    2014-11-04

    Korringa-Kohn-Rostoker (KKR) Green's function, multiple-scattering theory is an efficient site-centered, electronic-structure technique for addressing an assembly of N scatterers. Wave functions are expanded in a spherical-wave basis on each scattering center and indexed up to a maximum orbital and azimuthal number Lmax = (l,m)max, while scattering matrices, which determine spectral properties, are truncated at Ltr = (l,m)tr where phase shifts δl>ltr are negligible. Historically, Lmax is set equal to Ltr, which is correct for large enough Lmax but not computationally expedient; a better procedure retains higher-order (free-electron and single-site) contributions for Lmax > Ltr with δl>ltr set to zero [Zhang and Butler, Phys. Rev. B 46, 7433]. We present a numerically efficient and accurate augmented-KKR Green's function formalism that solves the KKR equations by exact matrix inversion [R^3 process with rank N(ltr+1)^2] and includes higher-L contributions via linear algebra [R^2 process with rank N(lmax+1)^2]. The augmented-KKR approach yields properly normalized wave functions, numerically cheaper basis-set convergence, and a total charge density and electron count that agree with Lloyd's formula. We apply our formalism to fcc Cu, bcc Fe and L1_0 CoPt, and present the numerical results for accuracy and for the convergence of the total energies, Fermi energies, and magnetic moments versus Lmax for a given Ltr.

  10. YAP Version 4.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nelson, Eric M.

    2004-05-20

    The YAP software library computes (1) electromagnetic modes, (2) electrostatic fields, (3) magnetostatic fields and (4) particle trajectories in 2d and 3d models. The code employs finite element methods on unstructured grids of tetrahedral, hexahedral, prism and pyramid elements, with linear through cubic element shapes and basis functions to provide high accuracy. The novel particle tracker is robust, accurate and efficient, even on unstructured grids with discontinuous fields. This software library is a component of the MICHELLE 3d finite element gun code.

  11. Bloch equation and atom-field entanglement scenario in three-level systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sen, Surajit; Nath, Mihir Ranjan; Dey, Tushar Kanti

    2011-09-23

    We study the exact solution of the lambda, vee and cascade types of three-level system, with a distinct Hamiltonian for each configuration expressed in the SU(3) basis. The semiclassical models are solved via the respective Bloch equations, and the existence of distinct non-linear constants, which differ between configurations, is discussed. Apart from proposing a qutrit wave function, the atom-field entanglement is studied for the quantized three-level systems using the Phoenix-Knight formalism, and the corresponding population inversions are compared.

  12. The Natural Neighbour Radial Point Interpolation Meshless Method Applied to the Non-Linear Analysis

    NASA Astrophysics Data System (ADS)

    Dinis, L. M. J. S.; Jorge, R. M. Natal; Belinha, J.

    2011-05-01

    In this work the Natural Neighbour Radial Point Interpolation Method (NNRPIM) is extended to the large-deformation analysis of elastic and elasto-plastic structures. The NNRPIM uses the Natural Neighbour concept to enforce nodal connectivity and to create a node-dependent background mesh used in the numerical integration of the NNRPIM interpolation functions. Unlike the FEM, where geometrical restrictions on elements are imposed for the convergence of the method, the NNRPIM has no such restrictions, which permits a random node distribution for the discretized problem. The NNRPIM interpolation functions, used in the Galerkin weak form, are constructed using Radial Point Interpolators, with some differences that modify the method's performance. In the construction of the NNRPIM interpolation functions no polynomial basis is required, and the Radial Basis Function (RBF) used is the multiquadric RBF. The NNRPIM interpolation functions possess the Kronecker delta property, which simplifies the imposition of natural and essential boundary conditions. One aim of this work is to validate the NNRPIM in large-deformation elasto-plastic analysis; the non-linear solution algorithm used is the Newton-Raphson initial stiffness method, and the efficient "forward-Euler" procedure is used to return the stress state to the yield surface. Several non-linear examples, exhibiting elastic and elasto-plastic material properties, are studied to demonstrate the effectiveness of the method. The numerical results indicate that the NNRPIM handles large material distortion effectively and provides an accurate solution under large deformation.

  13. A space-based climatology of diurnal MLT tidal winds, temperatures and densities from UARS wind measurements

    NASA Astrophysics Data System (ADS)

    Svoboda, Aaron A.; Forbes, Jeffrey M.; Miyahara, Saburo

    2005-11-01

    A self-consistent global tidal climatology, useful for comparing and interpreting radar observations from different locations around the globe, is created from space-based Upper Atmosphere Research Satellite (UARS) horizontal wind measurements. The climatology created includes tidal structures for horizontal winds, temperature and relative density, and is constructed by fitting local (in latitude and height) UARS wind data at 95 km to a set of basis functions called Hough mode extensions (HMEs). These basis functions are numerically computed modifications to Hough modes and are globally self-consistent in wind, temperature, and density. We first demonstrate this self-consistency with a proxy data set from the Kyushu University General Circulation Model, and then use a linear weighted superposition of the HMEs obtained from monthly fits to the UARS data to extrapolate the global, multi-variable tidal structure. A brief explanation of the HMEs’ origin is provided as well as information about a public website that has been set up to make the full extrapolated data sets available.
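    The fitting step described above is an ordinary linear least-squares problem; a minimal sketch with synthetic latitude structures standing in for the Hough mode extensions (all data below are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(2)

lat = np.linspace(-90, 90, 37)                 # latitude grid (degrees)
# Two synthetic latitude structures standing in for Hough mode extensions.
hme = np.column_stack([np.cos(np.radians(lat)),
                       np.sin(2 * np.radians(lat))])

true_amps = np.array([30.0, -12.0])            # hypothetical tidal amplitudes (m/s)
wind = hme @ true_amps + rng.normal(scale=1.0, size=lat.size)  # noisy "data"

# Linear least-squares fit of the wind data onto the basis; the fitted
# amplitudes then reconstruct the tidal structure at any latitude.
amps, *_ = np.linalg.lstsq(hme, wind, rcond=None)
print(amps)
```

    Because the HMEs are self-consistent across variables, the same fitted amplitudes also extrapolate temperature and density structures in the actual climatology.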

  14. Bond Order Conservation Strategies in Catalysis Applied to the NH 3 Decomposition Reaction

    DOE PAGES

    Yu, Liang; Abild-Pedersen, Frank

    2016-12-14

    On the basis of an extensive set of density functional theory calculations, it is shown that a simple scheme provides a fundamental understanding of variations in the transition state energies and structures of reaction intermediates on transition metal surfaces across the periodic table. The scheme is built on the bond order conservation principle and requires a limited set of input data, yet achieves transition state energies as a function of simple descriptors with an error smaller than those of approaches based on linear fits to a set of calculated transition state energies. Here, we have applied this approach together with linear scaling of adsorption energies to obtain the energetics of the NH3 decomposition reaction on a series of stepped fcc(211) transition metal surfaces. Moreover, this information is used to establish a microkinetic model for the formation of N2 and H2, thus providing insight into the components of the reaction that determine the activity.

  15. Explicitly-correlated Gaussian geminals in electronic structure calculations

    NASA Astrophysics Data System (ADS)

    Szalewicz, Krzysztof; Jeziorski, Bogumił

    2010-11-01

    Explicitly correlated functions have been used since 1929, but initially only for two-electron systems. In 1960, Boys and Singer showed that if the correlating factor is of Gaussian form, many-electron integrals can be computed for general molecules. The capability of explicitly correlated Gaussian (ECG) functions to accurately describe many-electron atoms and molecules was demonstrated only in the early 1980s, when Monkhorst, Zabolitzky and the present authors cast the many-body perturbation theory (MBPT) and coupled cluster (CC) equations as a system of integro-differential equations and developed techniques for solving these equations with two-electron ECG functions (Gaussian-type geminals, GTG). This work brought a new accuracy standard to MBPT/CC calculations. In 1985, Kutzelnigg suggested that the linear r12 correlating factor can also be employed if n-electron integrals, n > 2, are factorised with the resolution of identity. Later, this factor was replaced by more general functions f(r12), most often by ?, usually represented as linear combinations of Gaussian functions, which makes the resulting approach (called F12) a special case of the original GTG expansion. The current state of the art is that, for few-electron molecules, ECGs provide more accurate results than any other basis available, but for larger systems the F12 approach is the method of choice, giving significant improvements over orbital calculations.

  16. Atomic orbital-based SOS-MP2 with tensor hypercontraction. II. Local tensor hypercontraction

    NASA Astrophysics Data System (ADS)

    Song, Chenchen; Martínez, Todd J.

    2017-01-01

    In the first paper of the series [Paper I, C. Song and T. J. Martinez, J. Chem. Phys. 144, 174111 (2016)], we showed how tensor-hypercontracted (THC) SOS-MP2 could be accelerated by exploiting sparsity in the atomic orbitals and using graphical processing units (GPUs). This reduced the formal scaling of the SOS-MP2 energy calculation to cubic with respect to system size. The computational bottleneck then becomes the THC metric matrix inversion, which scales cubically with a large prefactor. In this work, the local THC approximation is proposed to reduce the computational cost of inverting the THC metric matrix to linear scaling with respect to molecular size. By doing so, we have removed the primary bottleneck to THC-SOS-MP2 calculations on large molecules with O(1000) atoms. The errors introduced by the local THC approximation are less than 0.6 kcal/mol for molecules with up to 200 atoms and 3300 basis functions. Together with the graphical processing unit techniques and locality-exploiting approaches introduced in previous work, the scaled opposite spin MP2 (SOS-MP2) calculations exhibit O(N^2.5) scaling in practice up to 10 000 basis functions. The new algorithms make it feasible to carry out SOS-MP2 calculations on small proteins like ubiquitin (1231 atoms/10 294 atomic basis functions) on a single node in less than a day.

  17. Atomic orbital-based SOS-MP2 with tensor hypercontraction. II. Local tensor hypercontraction.

    PubMed

    Song, Chenchen; Martínez, Todd J

    2017-01-21

    In the first paper of the series [Paper I, C. Song and T. J. Martinez, J. Chem. Phys. 144, 174111 (2016)], we showed how tensor-hypercontracted (THC) SOS-MP2 could be accelerated by exploiting sparsity in the atomic orbitals and using graphical processing units (GPUs). This reduced the formal scaling of the SOS-MP2 energy calculation to cubic with respect to system size. The computational bottleneck then becomes the THC metric matrix inversion, which scales cubically with a large prefactor. In this work, the local THC approximation is proposed to reduce the computational cost of inverting the THC metric matrix to linear scaling with respect to molecular size. By doing so, we have removed the primary bottleneck to THC-SOS-MP2 calculations on large molecules with O(1000) atoms. The errors introduced by the local THC approximation are less than 0.6 kcal/mol for molecules with up to 200 atoms and 3300 basis functions. Together with the graphical processing unit techniques and locality-exploiting approaches introduced in previous work, the scaled opposite spin MP2 (SOS-MP2) calculations exhibit O(N^2.5) scaling in practice up to 10 000 basis functions. The new algorithms make it feasible to carry out SOS-MP2 calculations on small proteins like ubiquitin (1231 atoms/10 294 atomic basis functions) on a single node in less than a day.

  18. Ab initio molecular simulations with numeric atom-centered orbitals

    NASA Astrophysics Data System (ADS)

    Blum, Volker; Gehrke, Ralf; Hanke, Felix; Havu, Paula; Havu, Ville; Ren, Xinguo; Reuter, Karsten; Scheffler, Matthias

    2009-11-01

    We describe a complete set of algorithms for ab initio molecular simulations based on numerically tabulated atom-centered orbitals (NAOs) to capture a wide range of molecular and materials properties from quantum-mechanical first principles. The full algorithmic framework described here is embodied in the Fritz Haber Institute "ab initio molecular simulations" (FHI-aims) computer program package. Its comprehensive description should be relevant to any other first-principles implementation based on NAOs. The focus here is on density-functional theory (DFT) in the local and semilocal (generalized gradient) approximations, but an extension to hybrid functionals, Hartree-Fock theory, and MP2/GW electron self-energies for total energies and excited states is possible within the same underlying algorithms. An all-electron/full-potential treatment that is both computationally efficient and accurate is achieved for periodic and cluster geometries on equal footing, including relaxation and ab initio molecular dynamics. We demonstrate the construction of transferable, hierarchical basis sets, allowing the calculation to range from qualitative tight-binding like accuracy to meV-level total energy convergence with the basis set. Since all basis functions are strictly localized, the otherwise computationally dominant grid-based operations scale as O(N) with system size N. Together with a scalar-relativistic treatment, the basis sets provide access to all elements from light to heavy. Both low-communication parallelization of all real-space grid based algorithms and a ScaLapack-based, customized handling of the linear algebra for all matrix operations are possible, guaranteeing efficient scaling (CPU time and memory) up to massively parallel computer systems with thousands of CPUs.

  19. Benchmark of Ab Initio Bethe-Salpeter Equation Approach with Numeric Atom-Centered Orbitals

    NASA Astrophysics Data System (ADS)

    Liu, Chi; Kloppenburg, Jan; Kanai, Yosuke; Blum, Volker

    The Bethe-Salpeter equation (BSE) approach based on the GW approximation has been shown to be successful for optical spectra prediction of solids and recently also for small molecules. We here present an all-electron implementation of the BSE using numeric atom-centered orbital (NAO) basis sets. In this work, we present a benchmark of the BSE as implemented in FHI-aims for low-lying excitation energies of a set of small organic molecules, the well-known Thiel's set. The difference between our implementation (using an analytic continuation of the GW self-energy on the real axis) and the results generated by a fully frequency-dependent GW treatment on the real axis is on the order of 0.07 eV for the benchmark molecular set. We study the convergence behavior to the complete basis set limit for excitation spectra, using a group of valence correlation consistent NAO basis sets (NAO-VCC-nZ), as well as standard NAO basis sets for ground state DFT with extended augmentation functions (NAO+aug). The BSE results and convergence behavior are compared to linear-response time-dependent DFT, where excellent numerical convergence is shown for NAO+aug basis sets.

  20. Quantum Mechanical Calculations of Monoxides of Silicon Carbide Molecules

    DTIC Science & Technology

    2003-03-01

    [Tabulated data: final energies (hartree), electron affinities EA (eV), and zero-point energies ZPE (hartree) for CO and silicon carbide monoxide species at several charge and multiplicity states, computed with double-zeta valence (DVZ/DZV) basis sets; includes linear O-C-Si input and output geometries.]

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miliordos, Evangelos; Aprà, Edoardo; Xantheas, Sotiris S.

    We establish a new estimate for the binding energy between two benzene molecules in the parallel-displaced (PD) conformation by systematically converging (i) the intra- and intermolecular geometry at the minimum, (ii) the expansion of the orbital basis set, and (iii) the level of electron correlation. The calculations were performed at the second-order Møller–Plesset perturbation (MP2) and the coupled cluster including singles, doubles, and a perturbative estimate of triples replacement [CCSD(T)] levels of electronic structure theory. At both levels of theory, by including results corrected for basis set superposition error (BSSE), we have estimated the complete basis set (CBS) limit by employing the family of Dunning's correlation-consistent polarized valence basis sets. The largest MP2 calculation was performed with the cc-pV6Z basis set (2772 basis functions), whereas the largest CCSD(T) calculation was with the cc-pV5Z basis set (1752 basis functions). The cluster geometries were optimized with basis sets up to quadruple-ζ quality, observing that both their intra- and intermolecular parts have practically converged with the triple-ζ quality sets. The use of converged geometries was found to play an important role for obtaining accurate estimates of the CBS limits. Our results demonstrate that the binding energies with the families of the plain (cc-pVnZ) and augmented (aug-cc-pVnZ) sets converge [within <0.01 kcal/mol for MP2 and <0.15 kcal/mol for CCSD(T)] to the same CBS limit. In addition, the average of the uncorrected and BSSE-corrected binding energies was found to converge to the same CBS limit much faster than either of its two constituents (uncorrected or BSSE-corrected binding energies). Because the family of augmented basis sets (especially the larger sets) causes serious linear dependency problems, the plain basis sets (for which no linear dependencies were found) are deemed a more efficient and straightforward path for obtaining an accurate CBS limit. We considered extrapolations of the uncorrected (ΔE) and BSSE-corrected (ΔEcp) binding energies, their average value (ΔEave), as well as the average of the latter over the plain and augmented sets (ΔẼave) with the cardinal number of the basis set n. Our best estimate of the CCSD(T)/CBS limit for the π–π binding energy in the PD benzene dimer is De = -2.65 ± 0.02 kcal/mol. The best CCSD(T)/cc-pV5Z calculated value is -2.62 kcal/mol, just 0.03 kcal/mol away from the CBS limit. For comparison, the MP2/CBS limit estimate is -5.00 ± 0.01 kcal/mol, demonstrating a 90% overbinding with respect to CCSD(T). Finally, the spin-component-scaled (SCS) MP2 variant was found to closely reproduce the CCSD(T) results for each basis set, while scaled opposite spin (SOS) MP2 yielded results that are too low when compared to CCSD(T).
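    One common form of CBS extrapolation with the cardinal number n is the two-point n^-3 scheme; a minimal sketch with hypothetical energies (this is a standard textbook formula, not necessarily the exact scheme used by the authors):

```python
def cbs_two_point(e_small, e_large, n):
    """Extrapolate energies at cardinal numbers n-1 and n to the CBS limit,
    assuming the remaining basis-set error decays as n**-3."""
    return (n ** 3 * e_large - (n - 1) ** 3 * e_small) / (n ** 3 - (n - 1) ** 3)

# Hypothetical binding energies (kcal/mol) at n = 4 (QZ) and n = 5 (5Z).
e_cbs = cbs_two_point(-2.55, -2.62, n=5)
print(round(e_cbs, 2))  # -2.69
```

    The extrapolated value overshoots the largest finite-basis result slightly, reflecting the assumed monotone n^-3 convergence.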

  2. Optimization of auxiliary basis sets for the LEDO expansion and a projection technique for LEDO-DFT.

    PubMed

    Götz, Andreas W; Kollmar, Christian; Hess, Bernd A

    2005-09-01

    We present a systematic procedure for the optimization of the expansion basis for the limited expansion of diatomic overlap density functional theory (LEDO-DFT) and report on optimized auxiliary orbitals for the Ahlrichs split valence plus polarization basis set (SVP) for the elements H, Li--F, and Na--Cl. A new method to deal with near-linear dependences in the LEDO expansion basis is introduced, which greatly reduces the computational effort of LEDO-DFT calculations. Numerical results for a test set of small molecules demonstrate the accuracy of electronic energies, structural parameters, dipole moments, and harmonic frequencies. For larger molecular systems the numerical errors introduced by the LEDO approximation can lead to an uncontrollable behavior of the self-consistent field (SCF) process. A projection technique suggested by Löwdin is presented in the framework of LEDO-DFT, which guarantees SCF convergence. Numerical results on some critical test molecules suggest the general applicability of the auxiliary orbitals presented in combination with this projection technique. Timing results indicate that LEDO-DFT is competitive with conventional density fitting methods. (c) 2005 Wiley Periodicals, Inc.
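    A standard way to handle near-linear dependence in an expansion basis is canonical orthogonalization, which discards combinations of basis functions whose overlap eigenvalue falls below a threshold (shown generically here, not the specific LEDO procedure):

```python
import numpy as np

def canonical_orthogonalization(S, thresh=1e-6):
    """Drop combinations of basis functions whose overlap eigenvalue falls
    below `thresh`; returns X with X.T @ S @ X = I in the reduced space."""
    vals, vecs = np.linalg.eigh(S)
    keep = vals > thresh
    return vecs[:, keep] / np.sqrt(vals[keep])

# A 3-function basis in which the third function nearly duplicates the first.
S = np.array([[1.0,   0.2, 0.999],
              [0.2,   1.0, 0.2],
              [0.999, 0.2, 1.0]])
X = canonical_orthogonalization(S, thresh=1e-2)
print(X.shape)  # (3, 2): one near-dependent combination removed
```

    Removing the near-null-space combination is what restores numerical stability when the remaining equations are solved.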

  3. The Interpolation Theory of Radial Basis Functions

    NASA Astrophysics Data System (ADS)

    Baxter, Brad

    2010-06-01

    In this dissertation, it is first shown that, when the radial basis function is a p-norm and 1 < p < 2, interpolation is always possible when the points are all different and there are at least two of them. We then show that interpolation is not always possible when p > 2. Specifically, for every p > 2, we construct a set of different points in some Rd for which the interpolation matrix is singular. The greater part of this work investigates the sensitivity of radial basis function interpolants to changes in the function values at the interpolation points. Our early results show that it is possible to recast the work of Ball, Narcowich and Ward in the language of distributional Fourier transforms in an elegant way. We then use this language to study the interpolation matrices generated by subsets of regular grids. In particular, we are able to extend the classical theory of Toeplitz operators to calculate sharp bounds on the spectra of such matrices. Applying our understanding of these spectra, we construct preconditioners for the conjugate gradient solution of the interpolation equations. Our main result is that the number of steps required to achieve solution of the linear system to within a required tolerance can be independent of the number of interpolation points. The Toeplitz structure allows us to use fast Fourier transform techniques, which implies that the total number of operations is a multiple of n log n, where n is the number of interpolation points. Finally, we use some of our methods to study the behaviour of the multiquadric when its shape parameter increases to infinity. We find a surprising link with the sinus cardinalis or sinc function of Whittaker. Consequently, it can be highly useful to use a large shape parameter when approximating band-limited functions.
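    The basic interpolation problem studied above can be set up in a few lines; a minimal multiquadric sketch (the dissertation's contributions concern p-norms, spectra and preconditioned CG, which this toy example does not reproduce):

```python
import numpy as np

def multiquadric(r, c=1.0):
    """Multiquadric radial basis function with shape parameter c."""
    return np.sqrt(r ** 2 + c ** 2)

# Interpolate f(x) = sin(x) from a handful of scattered points.
x = np.array([0.0, 0.7, 1.3, 2.1, 3.0])
f = np.sin(x)

# Interpolation matrix A[i, j] = phi(|x_i - x_j|); solve A w = f.
A = multiquadric(np.abs(x[:, None] - x[None, :]))
w = np.linalg.solve(A, f)

def interpolant(t):
    """Evaluate the RBF interpolant at a point t."""
    return multiquadric(np.abs(t - x)) @ w

# The interpolant reproduces the data exactly at the nodes.
print(np.allclose(interpolant(x[2]), f[2]))
```

    For large point sets this dense solve is exactly where the Toeplitz-structured preconditioned CG machinery of the dissertation pays off.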

  4. Feature extraction with deep neural networks by a generalized discriminant analysis.

    PubMed

    Stuhlsatz, André; Lippel, Jens; Zielke, Thomas

    2012-04-01

    We present an approach to feature extraction that is a generalization of the classical linear discriminant analysis (LDA) on the basis of deep neural networks (DNNs). As for LDA, discriminative features generated from independent Gaussian class conditionals are assumed. This modeling has the advantages that the intrinsic dimensionality of the feature space is bounded by the number of classes and that the optimal discriminant function is linear. Unfortunately, linear transformations are insufficient to extract optimal discriminative features from arbitrarily distributed raw measurements. The generalized discriminant analysis (GerDA) proposed in this paper uses nonlinear transformations that are learnt by DNNs in a semisupervised fashion. We show that the feature extraction based on our approach displays excellent performance on real-world recognition and detection tasks, such as handwritten digit recognition and face detection. In a series of experiments, we evaluate GerDA features with respect to dimensionality reduction, visualization, classification, and detection. Moreover, we show that GerDA DNNs can preprocess truly high-dimensional input data to low-dimensional representations that facilitate accurate predictions even if simple linear predictors or measures of similarity are used.

  5. Rapid Computation of Thermodynamic Properties over Multidimensional Nonbonded Parameter Spaces Using Adaptive Multistate Reweighting.

    PubMed

    Naden, Levi N; Shirts, Michael R

    2016-04-12

    We show how thermodynamic properties of molecular models can be computed over a large, multidimensional parameter space by combining multistate reweighting analysis with a linear basis function approach. This approach reduces the computational cost to estimate thermodynamic properties from molecular simulations for over 130,000 tested parameter combinations from over 1000 CPU years to tens of CPU days. This speed increase is achieved primarily by computing the potential energy as a linear combination of basis functions, computed from either modified simulation code or as the difference of energy between two reference states, which can be done without any simulation code modification. The thermodynamic properties are then estimated with the Multistate Bennett Acceptance Ratio (MBAR) as a function of multiple model parameters without the need to define a priori how the states are connected by a pathway. Instead, we adaptively sample a set of points in parameter space to create mutual configuration space overlap. The existence of regions of poor configuration space overlap is detected by analyzing the eigenvalues of the sampled states' overlap matrix. The configuration space overlap to sampled states is monitored alongside the mean and maximum uncertainty to determine convergence, since neither the uncertainty nor the configuration space overlap alone is a sufficient metric of convergence. This adaptive sampling scheme is demonstrated by estimating with high precision the solvation free energies of charged particles of Lennard-Jones plus Coulomb functional form with charges between -2 and +2 and generally physical values of σij and ϵij in TIP3P water.
We also compute entropy, enthalpy, and radial distribution functions of arbitrary unsampled parameter combinations using only the data from these sampled states and use the estimates of free energies over the entire space to examine the deviation of atomistic simulations from the Born approximation to the solvation free energy.
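    The central trick, writing the potential as a linear combination of basis functions so that energies at unsampled parameters come from stored per-basis terms rather than new simulations, can be sketched as follows (toy Lennard-Jones-like terms and made-up numbers; the actual workflow feeds these energies into MBAR):

```python
import numpy as np

rng = np.random.default_rng(3)

# Per-frame basis-function energies, stored once at sampling time (toy values):
# U(eps, sigma; x) = eps * sigma**12 * u12(x) - eps * sigma**6 * u6(x)
n_frames = 1000
u12 = rng.uniform(0.5, 1.5, size=n_frames)
u6 = rng.uniform(0.5, 1.5, size=n_frames)

def energies(eps, sigma):
    """Potential energies of all stored frames at any (eps, sigma),
    obtained by linear combination -- no new simulation required."""
    return eps * sigma ** 12 * u12 - eps * sigma ** 6 * u6

e = energies(eps=0.3, sigma=1.1)
```

    Evaluating a new parameter combination is then a vector operation over stored data, which is what collapses the cost from CPU years to CPU days.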

  6. Theoretical foundations of apparent-damping phenomena and nearly irreversible energy exchange in linear conservative systems.

    PubMed

    Carcaterra, A; Akay, A

    2007-04-01

    This paper discusses a class of unexpected irreversible phenomena that can develop in linear conservative systems and provides a theoretical foundation that explains the underlying principles. Recent studies have shown that energy can be introduced to a linear system with near irreversibility, or energy within a system can migrate to a subsystem nearly irreversibly, even in the absence of dissipation, provided that the system has a particular natural frequency distribution. The present work introduces a general theory that provides a mathematical foundation and a physical explanation for the near irreversibility phenomena observed and reported in previous publications. Inspired by the properties of probability distribution functions, the general formulation developed here is based on particular properties of harmonic series, which form the common basis of linear dynamic system models. The results demonstrate the existence of a special class of linear nondissipative dynamic systems that exhibit nearly irreversible energy exchange and possess a decaying impulse response. In addition to uncovering a new class of dynamic system properties, the results have far-reaching implications in engineering applications where classical vibration damping or absorption techniques may not be effective. Furthermore, the results also support the notion of nearly irreversible energy transfer in conservative linear systems, which until now has been a concept associated exclusively with nonlinear systems.

  7. A new near-linear scaling, efficient and accurate, open-shell domain-based local pair natural orbital coupled cluster singles and doubles theory.

    PubMed

    Saitow, Masaaki; Becker, Ute; Riplinger, Christoph; Valeev, Edward F; Neese, Frank

    2017-04-28

    The Coupled-Cluster expansion, truncated after single and double excitations (CCSD), provides accurate and reliable molecular electronic wave functions and energies for many molecular systems around their equilibrium geometries. However, the high computational cost, which is well-known to scale as O(N^6) with system size N, has limited its practical application to small systems consisting of not more than approximately 20-30 atoms. To overcome these limitations, low-order scaling approximations to CCSD have been intensively investigated over the past few years. In our previous work, we have shown that by combining the pair natural orbital (PNO) approach and the concept of orbital domains it is possible to achieve fully linear scaling CC implementations (DLPNO-CCSD and DLPNO-CCSD(T)) that recover around 99.9% of the total correlation energy [C. Riplinger et al., J. Chem. Phys. 144, 024109 (2016)]. The production level implementations of the DLPNO-CCSD and DLPNO-CCSD(T) methods were shown to be applicable to realistic systems composed of a few hundred atoms in a routine, black-box fashion on relatively modest hardware. In 2011, a reduced-scaling CCSD approach for high-spin open-shell unrestricted Hartree-Fock reference wave functions was proposed (UHF-LPNO-CCSD) [A. Hansen et al., J. Chem. Phys. 135, 214102 (2011)]. After a few years of experience with this method, a few shortcomings of UHF-LPNO-CCSD were noticed that required a redesign of the method, which is the subject of this paper. To this end, we employ the high-spin open-shell variant of the N-electron valence perturbation theory formalism to define the initial guess wave function, and consequently also the open-shell PNOs. The new PNO ansatz properly converges to the closed-shell limit since all truncations and approximations have been made in strict analogy to the closed-shell case. 
Furthermore, given the fact that the formalism uses a single set of orbitals, only a single PNO integral transformation is necessary, which offers large computational savings. We show that, with the default PNO truncation parameters, approximately 99.9% of the total CCSD correlation energy is recovered for open-shell species, which is comparable to the performance of the method for closed-shells. UHF-DLPNO-CCSD shows a linear scaling behavior for closed-shell systems, while linear to quadratic scaling is obtained for open-shell systems. The largest systems we have considered contain more than 500 atoms and feature more than 10 000 basis functions with a triple-ζ quality basis set.
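    The PNO idea in miniature (generic, not the DLPNO machinery): diagonalize a pair density matrix and keep only natural orbitals whose occupation number exceeds a truncation threshold; the matrix and threshold below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(4)

# A hypothetical pair density matrix in a 30-orbital virtual space.
n_virt = 30
d = rng.normal(size=(n_virt, n_virt))
dens = d @ d.T / n_virt ** 2          # symmetric positive semidefinite

occ, nat_orbs = np.linalg.eigh(dens)
occ, nat_orbs = occ[::-1], nat_orbs[:, ::-1]  # sort by decreasing occupation

t_cut_pno = 1e-2                      # hypothetical truncation threshold
pnos = nat_orbs[:, occ > t_cut_pno]
print(pnos.shape[1], "of", n_virt, "virtuals kept")
```

    Performing the amplitude equations in the retained PNO space, pair by pair, is what turns the formal O(N^6) cost into near-linear scaling while keeping most of the correlation energy.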

  8. A new near-linear scaling, efficient and accurate, open-shell domain-based local pair natural orbital coupled cluster singles and doubles theory

    NASA Astrophysics Data System (ADS)

    Saitow, Masaaki; Becker, Ute; Riplinger, Christoph; Valeev, Edward F.; Neese, Frank

    2017-04-01

    The Coupled-Cluster expansion, truncated after single and double excitations (CCSD), provides accurate and reliable molecular electronic wave functions and energies for many molecular systems around their equilibrium geometries. However, the high computational cost, which is well-known to scale as O(N^6) with system size N, has limited its practical application to small systems consisting of not more than approximately 20-30 atoms. To overcome these limitations, low-order scaling approximations to CCSD have been intensively investigated over the past few years. In our previous work, we have shown that by combining the pair natural orbital (PNO) approach and the concept of orbital domains it is possible to achieve fully linear scaling CC implementations (DLPNO-CCSD and DLPNO-CCSD(T)) that recover around 99.9% of the total correlation energy [C. Riplinger et al., J. Chem. Phys. 144, 024109 (2016)]. The production level implementations of the DLPNO-CCSD and DLPNO-CCSD(T) methods were shown to be applicable to realistic systems composed of a few hundred atoms in a routine, black-box fashion on relatively modest hardware. In 2011, a reduced-scaling CCSD approach for high-spin open-shell unrestricted Hartree-Fock reference wave functions was proposed (UHF-LPNO-CCSD) [A. Hansen et al., J. Chem. Phys. 135, 214102 (2011)]. After a few years of experience with this method, a few shortcomings of UHF-LPNO-CCSD were noticed that required a redesign of the method, which is the subject of this paper. To this end, we employ the high-spin open-shell variant of the N-electron valence perturbation theory formalism to define the initial guess wave function, and consequently also the open-shell PNOs. The new PNO ansatz properly converges to the closed-shell limit since all truncations and approximations have been made in strict analogy to the closed-shell case. 
Furthermore, given the fact that the formalism uses a single set of orbitals, only a single PNO integral transformation is necessary, which offers large computational savings. We show that, with the default PNO truncation parameters, approximately 99.9% of the total CCSD correlation energy is recovered for open-shell species, which is comparable to the performance of the method for closed-shell systems. UHF-DLPNO-CCSD shows linear scaling behavior for closed-shell systems, while linear to quadratic scaling is obtained for open-shell systems. The largest systems we have considered contain more than 500 atoms and feature more than 10 000 basis functions with a triple-ζ quality basis set.

  9. Cotton-Mouton effect and shielding polarizabilities of ethylene: An MCSCF study

    NASA Astrophysics Data System (ADS)

    Coriani, Sonia; Rizzo, Antonio; Ruud, Kenneth; Helgaker, Trygve

    1997-03-01

The static hypermagnetizabilities and nuclear shielding polarizabilities of the carbon and hydrogen atoms of ethylene have been computed using multiconfigurational linear-response theory and a finite-field method, in a mixed analytical-numerical approach. Extended sets of magnetic-field-dependent basis functions have been employed in large MCSCF calculations, involving active spaces giving rise to a few million configurations in the finite-field perturbed symmetry. The convergence of the observables with respect to the extension of the basis set, as well as the effect of electron correlation, has been investigated. Whereas for the shielding polarizabilities we can compare with other published SCF results, the ab initio estimates for the static hypermagnetizabilities and the observable to which they are related - the Cotton-Mouton constant - are presented for the first time.

  10. Reduced conservatism in stability robustness bounds by state transformation

    NASA Technical Reports Server (NTRS)

    Yedavalli, R. K.; Liang, Z.

    1986-01-01

This note addresses the issue of 'conservatism' in the time-domain stability robustness bounds obtained by the Liapunov approach. A state transformation is employed to improve the upper bounds on the linear time-varying perturbation of an asymptotically stable linear time-invariant system for robust stability. This improvement arises because the conservatism of the Liapunov approach varies with the basis of the vector space in which the Liapunov function is constructed. Improved bounds are obtained, using a transformation, on elemental and vector norms of perturbations (i.e., structured perturbations) as well as on a matrix norm of perturbations (i.e., unstructured perturbations). For the case of a diagonal transformation, an algorithm is proposed to find the 'optimal' transformation. Several examples are presented to illustrate the proposed analysis.

  11. A modular finite-element model (MODFE) for areal and axisymmetric ground-water-flow problems, Part 1: Model Description and User's Manual

    USGS Publications Warehouse

    Torak, L.J.

    1993-01-01

A MODular, Finite-Element digital-computer program (MODFE) was developed to simulate steady or unsteady-state, two-dimensional or axisymmetric ground-water flow. Geometric- and hydrologic-aquifer characteristics in two spatial dimensions are represented by triangular finite elements and linear basis functions; one-dimensional finite elements and linear basis functions represent time. Finite-element matrix equations are solved by the direct symmetric-Doolittle method or the iterative modified, incomplete-Cholesky, conjugate-gradient method. Physical processes that can be represented by the model include (1) confined flow, unconfined flow (using the Dupuit approximation), or a combination of both; (2) leakage through either rigid or elastic confining beds; (3) specified recharge or discharge at points, along lines, and over areas; (4) flow across specified-flow, specified-head, or head-dependent boundaries; (5) decrease of aquifer thickness to zero under extreme water-table decline and increase of aquifer thickness from zero as the water table rises; and (6) head-dependent fluxes from springs, drainage wells, leakage across riverbeds or confining beds combined with aquifer dewatering, and evapotranspiration. The report describes procedures for applying MODFE to ground-water-flow problems, simulation capabilities, and data preparation. Guidelines for designing the finite-element mesh and for node numbering and determining band widths are given. Tables are given that reference simulation capabilities to specific versions of MODFE. Examples of data input and model output for different versions of MODFE are provided.
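
    The one-dimensional linear basis functions used here for the time dimension are the familiar "hat" functions: each is 1 at its own node, falls linearly to 0 at the neighbouring nodes, and the set forms a partition of unity. A minimal illustrative sketch (not MODFE code):

```python
# Piecewise-linear "hat" basis functions on a 1D mesh -- the kind of linear
# finite-element basis MODFE uses for the time dimension (illustrative
# sketch, not MODFE code).

def hat(j, nodes, x):
    """Value at x of the linear basis function attached to node j."""
    xj = nodes[j]
    if j > 0 and nodes[j - 1] <= x <= xj:
        return (x - nodes[j - 1]) / (xj - nodes[j - 1])
    if j < len(nodes) - 1 and xj <= x <= nodes[j + 1]:
        return (nodes[j + 1] - x) / (nodes[j + 1] - xj)
    return 0.0

def interpolate(nodes, values, x):
    """Piecewise-linear interpolant u_h(x) = sum_j u_j * hat_j(x)."""
    return sum(v * hat(j, nodes, x) for j, v in enumerate(values))

nodes = [0.0, 1.0, 2.0, 3.0]
values = [0.0, 2.0, 1.0, 4.0]
mid = interpolate(nodes, values, 1.5)   # halfway between nodes 1 and 2
```

    Because each basis function is 1 at its node and 0 at all others, the interpolant reproduces the nodal values exactly and varies linearly between them.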

  12. Synthesis, spectroscopic characterization and quantum chemical computational studies of (S)-N-benzyl-1-phenyl-5-(pyridin-2-yl)-pent-4-yn-2-amine

    NASA Astrophysics Data System (ADS)

    Kose, Etem; Atac, Ahmet; Karabacak, Mehmet; Karaca, Caglar; Eskici, Mustafa; Karanfil, Abdullah

    2012-11-01

The synthesis and characterization of a novel compound, (S)-N-benzyl-1-phenyl-5-(pyridin-2-yl)-pent-4-yn-2-amine (abbreviated as BPPPYA), are presented in this study. The spectroscopic properties of the compound were investigated experimentally and theoretically by FT-IR, NMR and UV spectroscopy. The molecular geometry and vibrational frequencies of BPPPYA in the ground state were calculated using the density functional theory (DFT) B3LYP method with the 6-311++G(d,p) basis set. The geometry of BPPPYA was fully optimized, vibrational spectra were calculated, and fundamental vibrations were assigned on the basis of the total energy distribution (TED) of the vibrational modes, calculated with the scaled quantum mechanics (SQM) method and the PQS program. The energies and oscillator strengths calculated by time-dependent density functional theory (TD-DFT) and the CIS approach are consistent with the experimental findings. Total and partial density of states (TDOS and PDOS) diagrams, as well as overlap population density of states (COOP or OPDOS) diagrams, are presented. The theoretical NMR chemical shifts (1H and 13C) agree with the experimentally measured ones. The dipole moment, linear polarizability and first hyperpolarizability were also computed; their values indicate that the compound is a good candidate for nonlinear optical materials. The calculated vibrational wavenumbers, absorption wavelengths and chemical shifts show good agreement with the experimental results.

  13. A modular finite-element model (MODFE) for areal and axisymmetric ground-water-flow problems; Part 1, Model description and user's manual

    USGS Publications Warehouse

    Torak, Lynn J.

    1992-01-01

A MODular, Finite-Element digital-computer program (MODFE) was developed to simulate steady or unsteady-state, two-dimensional or axisymmetric ground-water flow. Geometric- and hydrologic-aquifer characteristics in two spatial dimensions are represented by triangular finite elements and linear basis functions; one-dimensional finite elements and linear basis functions represent time. Finite-element matrix equations are solved by the direct symmetric-Doolittle method or the iterative modified, incomplete-Cholesky, conjugate-gradient method. Physical processes that can be represented by the model include (1) confined flow, unconfined flow (using the Dupuit approximation), or a combination of both; (2) leakage through either rigid or elastic confining beds; (3) specified recharge or discharge at points, along lines, and over areas; (4) flow across specified-flow, specified-head, or head-dependent boundaries; (5) decrease of aquifer thickness to zero under extreme water-table decline and increase of aquifer thickness from zero as the water table rises; and (6) head-dependent fluxes from springs, drainage wells, leakage across riverbeds or confining beds combined with aquifer dewatering, and evapotranspiration. The report describes procedures for applying MODFE to ground-water-flow problems, simulation capabilities, and data preparation. Guidelines for designing the finite-element mesh and for node numbering and determining band widths are given. Tables are given that reference simulation capabilities to specific versions of MODFE. Examples of data input and model output for different versions of MODFE are provided.

  14. Modelling and prediction for chaotic fir laser attractor using rational function neural network.

    PubMed

    Cho, S

    2001-02-01

Many real-world systems, such as irregular ECG signals, the volatility of currency exchange rates and heated fluid reactions, exhibit a highly complex nonlinear characteristic known as chaos. These chaotic systems cannot be treated satisfactorily using linear system theory because of their high dimensionality and irregularity. This research focuses on the prediction and modelling of a chaotic FIR (Far InfraRed) laser system for which the underlying equations are not given. This paper proposes a method for predicting and modelling a chaotic FIR laser time series using a rational function neural network. Three network architectures are presented: the TDNN (Time Delayed Neural Network), the RBF (radial basis function) network and the RF (rational function) network. Comparisons of these networks' performance show the improvements introduced by the RF network in terms of reduced network complexity and better predictive ability.
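
    The RBF architecture compared above maps the input through fixed Gaussian basis functions and fits only the linear output layer. A minimal sketch of that idea, using the logistic map as a stand-in chaotic series (the data, centres and width are illustrative, not the paper's FIR laser setup):

```python
import numpy as np

# Minimal radial basis function (RBF) network for one-step-ahead prediction
# of a chaotic series.  The logistic map stands in for the FIR laser data;
# centres, width and sizes are illustrative, not the paper's settings.

def rbf_design(x, centres, width):
    """Gaussian RBF feature matrix for scalar inputs x."""
    return np.exp(-((x[:, None] - centres[None, :]) ** 2) / (2.0 * width ** 2))

# Chaotic series from the logistic map x_{n+1} = 4 x_n (1 - x_n).
series = [0.3]
for _ in range(300):
    series.append(4.0 * series[-1] * (1.0 - series[-1]))
series = np.asarray(series)

x, y = series[:-1], series[1:]               # (input, next value) pairs
centres = np.linspace(0.0, 1.0, 12)
Phi = rbf_design(x, centres, width=0.1)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)  # linear output-layer weights

pred = Phi @ w
rmse = np.sqrt(np.mean((pred - y) ** 2))     # in-sample one-step error
```

    Because only the output weights are trained, fitting reduces to a linear least-squares problem, which is what makes RBF networks attractive for this kind of comparison.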

  15. Genomic similarity and kernel methods I: advancements by building on mathematical and statistical foundations.

    PubMed

    Schaid, Daniel J

    2010-01-01

    Measures of genomic similarity are the basis of many statistical analytic methods. We review the mathematical and statistical basis of similarity methods, particularly based on kernel methods. A kernel function converts information for a pair of subjects to a quantitative value representing either similarity (larger values meaning more similar) or distance (smaller values meaning more similar), with the requirement that it must create a positive semidefinite matrix when applied to all pairs of subjects. This review emphasizes the wide range of statistical methods and software that can be used when similarity is based on kernel methods, such as nonparametric regression, linear mixed models and generalized linear mixed models, hierarchical models, score statistics, and support vector machines. The mathematical rigor for these methods is summarized, as is the mathematical framework for making kernels. This review provides a framework to move from intuitive and heuristic approaches to define genomic similarities to more rigorous methods that can take advantage of powerful statistical modeling and existing software. A companion paper reviews novel approaches to creating kernels that might be useful for genomic analyses, providing insights with examples [1]. Copyright © 2010 S. Karger AG, Basel.
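
    The positive-semidefiniteness requirement described above is easy to verify numerically for a candidate kernel. A small sketch with a Gaussian kernel on illustrative genotype-like data (the data and bandwidth are hypothetical):

```python
import numpy as np

# A valid kernel must yield a positive semidefinite (PSD) matrix over all
# pairs of subjects.  Here a Gaussian kernel on illustrative genotype-like
# data, with PSD checked via the eigenvalues of the kernel matrix.

rng = np.random.default_rng(1)
G = rng.integers(0, 3, size=(8, 20)).astype(float)  # 8 subjects, 20 markers

def gaussian_kernel(X, gamma=0.05):
    """K[i, j] = exp(-gamma * ||x_i - x_j||^2); larger values = more similar."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq)

K = gaussian_kernel(G)
eigvals = np.linalg.eigvalsh(K)  # all >= 0 (up to round-off) for a valid kernel
```

    A symmetric matrix with no negative eigenvalues is exactly the PSD condition the review states, and it is what allows the kernel to be used inside mixed models or support vector machines.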

  16. Equivalence between a generalized dendritic network and a set of one-dimensional networks as a ground of linear dynamics.

    PubMed

    Koda, Shin-ichi

    2015-05-28

Existing studies have shown that certain linear dynamical systems defined on a dendritic network are, in special cases, equivalent to systems defined on a set of one-dimensional networks, and that this transformation to the simpler picture, which we call linear chain (LC) decomposition, has a significant advantage in understanding properties of dendrimers. In this paper, we expand the class of LC decomposable systems with some generalizations. In addition, we propose two general sufficient conditions for LC decomposability, with a procedure to systematically realize the LC decomposition. Some examples of LC decomposable linear dynamical systems are also presented with their graphs. The generalization of the LC decomposition is implemented in the following three aspects: (i) the type of linear operators; (ii) the shape of dendritic networks on which linear operators are defined; and (iii) the type of symmetry operations representing the symmetry of the systems. In generalization (iii), symmetry groups that represent the symmetry of dendritic systems are defined. The LC decomposition is realized by changing the basis of a linear operator defined on a dendritic network into bases of irreducible representations of the symmetry group. The achievement of this paper makes it easier to utilize the LC decomposition in various cases. This may lead to a further understanding of the relation between structure and functions of dendrimers in future studies.

  17. A modular finite-element model (MODFE) for areal and axisymmetric ground-water-flow problems, Part 2: Derivation of finite-element equations and comparisons with analytical solutions

    USGS Publications Warehouse

    Cooley, Richard L.

    1992-01-01

MODFE, a modular finite-element model for simulating steady- or unsteady-state, areal or axisymmetric flow of ground water in a heterogeneous anisotropic aquifer is documented in a three-part series of reports. In this report, part 2, the finite-element equations are derived by minimizing a functional of the difference between the true and approximate hydraulic head, which produces equations that are equivalent to those obtained by either classical variational or Galerkin techniques. Spatial finite elements are triangular with linear basis functions, and temporal finite elements are one dimensional with linear basis functions. Physical processes that can be represented by the model include (1) confined flow, unconfined flow (using the Dupuit approximation), or a combination of both; (2) leakage through either rigid or elastic confining units; (3) specified recharge or discharge at points, along lines, or areally; (4) flow across specified-flow, specified-head, or head-dependent boundaries; (5) decrease of aquifer thickness to zero under extreme water-table decline and increase of aquifer thickness from zero as the water table rises; and (6) head-dependent fluxes from springs, drainage wells, leakage across riverbeds or confining units combined with aquifer dewatering, and evapotranspiration. The matrix equations produced by the finite-element method are solved by the direct symmetric-Doolittle method or the iterative modified incomplete-Cholesky conjugate-gradient method. The direct method can be efficient for small- to medium-sized problems (less than about 500 nodes), and the iterative method is generally more efficient for larger-sized problems. Comparison of finite-element solutions with analytical solutions for five example problems demonstrates that the finite-element model can yield accurate solutions to ground-water flow problems.
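
    On a triangular element, the linear basis functions mentioned above are the barycentric coordinates of the point: each is 1 at its own vertex, 0 at the other two, and the three sum to 1. A minimal illustrative sketch (not MODFE code):

```python
# Linear basis functions on a triangular element are the barycentric
# coordinates: N_i is 1 at vertex i, 0 at the other two vertices, and the
# three functions sum to 1 everywhere (illustrative sketch, not MODFE code).

def tri_basis(verts, x, y):
    """Values (N1, N2, N3) of the linear basis functions at (x, y)."""
    (x1, y1), (x2, y2), (x3, y3) = verts
    det = (x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)  # twice the area
    n2 = ((x - x1) * (y3 - y1) - (x3 - x1) * (y - y1)) / det
    n3 = ((x2 - x1) * (y - y1) - (x - x1) * (y2 - y1)) / det
    return (1.0 - n2 - n3, n2, n3)

verts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]  # reference triangle
```

    The approximate head over the element is then the nodal heads weighted by these three functions, which is what makes the Galerkin matrix entries simple polynomial integrals.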

  18. What procedure to choose while designing a fuzzy control? Towards mathematical foundations of fuzzy control

    NASA Technical Reports Server (NTRS)

    Kreinovich, Vladik YA.; Quintana, Chris; Lea, Robert

    1991-01-01

Fuzzy control has been successfully applied in industrial systems. However, there is some caution in using it. The reason is that it is based on quite reasonable ideas, but each of these ideas can be implemented in several different ways, and different implementations give different results: some lead to high-quality control and some do not. Since there are no theoretical methods for choosing the implementation, the basic way to choose one at present is experimental. But if one chooses a method that works well on several examples, there is no guarantee that it will work well on others. Hence the caution. A theoretical basis for choosing the fuzzy control procedures is provided. In order to choose a procedure that transforms fuzzy knowledge into a control, one needs, first, to choose a membership function for each of the fuzzy terms that the experts use; second, to choose operations on uncertainty values that correspond to 'and' and 'or'; and third, once a membership function for the control is obtained, to defuzzify it, that is, to generate the value of the control u that will actually be used. A general approach that helps to make all these choices is described: it is proved that, under reasonable assumptions, membership functions should be linear or fractionally linear, defuzzification must be described by a centroid rule, and all possible 'and' and 'or' operations are characterized. Thus, a theoretical explanation of the existing semi-heuristic choices is given, and a basis for further research on optimal fuzzy control is formulated.
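
    The three choices discussed above (membership functions, 'and'/'or' operations, defuzzification) can be made concrete in a toy controller that uses piecewise-linear memberships, min/max for 'and'/'or', and centroid defuzzification. The rule base and ranges are illustrative, not from the paper:

```python
# Toy Mamdani-style fuzzy-control step: piecewise-linear (triangular)
# membership functions, min/max as 'and'/'or', centroid defuzzification.
# The two-rule base and all ranges are illustrative.

def tri(a, b, c):
    """Triangular (piecewise-linear) membership function peaking at b."""
    def mu(x):
        if a < x <= b:
            return (x - a) / (b - a)
        if b < x < c:
            return (c - x) / (c - b)
        return 0.0
    return mu

error_neg, error_pos = tri(-2, -1, 0), tri(0, 1, 2)   # input terms
u_low, u_high = tri(0, 1, 2), tri(2, 3, 4)            # output terms

def control(e, n=201):
    """Rules: 'if error is negative then u is low', 'if positive then high'.
    Clip each output set by its rule strength ('and' = min), combine with
    'or' = max, then defuzzify with the centroid rule."""
    us = [4.0 * i / (n - 1) for i in range(n)]
    w_low, w_high = error_neg(e), error_pos(e)        # rule firing strengths
    agg = [max(min(w_low, u_low(u)), min(w_high, u_high(u))) for u in us]
    num = sum(m * u for m, u in zip(agg, us))
    den = sum(agg)
    return num / den if den else 0.0
```

    With triangular memberships and the centroid rule, an error that fully fires one rule returns the peak of that rule's output set, matching the linear-membership/centroid combination the paper singles out.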

  19. A 4D Hyperspherical Interpretation of q-Space

    PubMed Central

    Hosseinbor, A. Pasha; Chung, Moo K.; Wu, Yu-Chien; Bendlin, Barbara B.; Alexander, Andrew L.

    2015-01-01

3D q-space can be viewed as the surface of a 4D hypersphere. In this paper, we seek to develop a 4D hyperspherical interpretation of q-space by projecting it onto a hypersphere and subsequently modeling the q-space signal via 4D hyperspherical harmonics (HSH). Using this orthonormal basis, we derive several well-established q-space indices and numerically estimate the diffusion orientation distribution function (dODF). We also derive the integral transform describing the relationship between the diffusion signal and propagator on a hypersphere. Most importantly, we will demonstrate that for hybrid diffusion imaging (HYDI) acquisitions a low-order linear expansion of the HSH basis is sufficient to characterize diffusion in neural tissue. In fact, the HSH basis achieves comparable signal and better dODF reconstructions than other well-established methods, such as Bessel Fourier orientation reconstruction (BFOR), using fewer fitting parameters. All in all, this work provides a new way of looking at q-space. PMID:25624043

  20. A 4D hyperspherical interpretation of q-space.

    PubMed

    Pasha Hosseinbor, A; Chung, Moo K; Wu, Yu-Chien; Bendlin, Barbara B; Alexander, Andrew L

    2015-04-01

3D q-space can be viewed as the surface of a 4D hypersphere. In this paper, we seek to develop a 4D hyperspherical interpretation of q-space by projecting it onto a hypersphere and subsequently modeling the q-space signal via 4D hyperspherical harmonics (HSH). Using this orthonormal basis, we derive several well-established q-space indices and numerically estimate the diffusion orientation distribution function (dODF). We also derive the integral transform describing the relationship between the diffusion signal and propagator on a hypersphere. Most importantly, we will demonstrate that for hybrid diffusion imaging (HYDI) acquisitions a low-order linear expansion of the HSH basis is sufficient to characterize diffusion in neural tissue. In fact, the HSH basis achieves comparable signal and better dODF reconstructions than other well-established methods, such as Bessel Fourier orientation reconstruction (BFOR), using fewer fitting parameters. All in all, this work provides a new way of looking at q-space. Copyright © 2014 Elsevier B.V. All rights reserved.

  1. Numerical Manifold Method for the Forced Vibration of Thin Plates during Bending

    PubMed Central

    Jun, Ding; Song, Chen; Wei-Bin, Wen; Shao-Ming, Luo; Xia, Huang

    2014-01-01

    A novel numerical manifold method was derived from the cubic B-spline basis function. The new interpolation function is characterized by high-order coordination at the boundary of a manifold element. The linear elastic-dynamic equation used to solve the bending vibration of thin plates was derived according to the principle of minimum instantaneous potential energy. The method for the initialization of the dynamic equation and its solution process were provided. Moreover, the analysis showed that the calculated stiffness matrix exhibited favorable performance. Numerical results showed that the generalized degrees of freedom were significantly fewer and that the calculation accuracy was higher for the manifold method than for the conventional finite element method. PMID:24883403
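
    The cubic B-spline basis underlying the interpolation above can be evaluated with the standard Cox-de Boor recursion; degree 3 gives the C2-continuous cubic basis. A minimal sketch (not the authors' implementation, with an illustrative uniform knot vector):

```python
# Cox-de Boor recursion for B-spline basis functions; degree p = 3 gives the
# cubic B-spline basis used for high-order interpolation (illustrative
# sketch, not the authors' implementation).

def bspline_basis(i, p, t, knots):
    """Value at t of the i-th B-spline basis function of degree p."""
    if p == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + p] > knots[i]:
        left = ((t - knots[i]) / (knots[i + p] - knots[i])
                * bspline_basis(i, p - 1, t, knots))
    if knots[i + p + 1] > knots[i + 1]:
        right = ((knots[i + p + 1] - t) / (knots[i + p + 1] - knots[i + 1])
                 * bspline_basis(i + 1, p - 1, t, knots))
    return left + right

knots = list(range(12))      # uniform knot vector 0, 1, ..., 11
# On interior spans the cubic basis functions form a partition of unity:
total = sum(bspline_basis(i, 3, 5.5, knots) for i in range(8))
```

    Each cubic basis function spans four knot intervals, which is the source of the high-order continuity across element boundaries that the manifold method exploits.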

  2. LETTER TO THE EDITOR: Two-centre exchange integrals for complex exponent Slater orbitals

    NASA Astrophysics Data System (ADS)

    Kuang, Jiyun; Lin, C. D.

    1996-12-01

    The one-dimensional integral representation for the Fourier transform of a two-centre product of B functions (finite linear combinations of Slater orbitals) with real parameters is generalized to include B functions with complex parameters. This one-dimensional integral representation allows for an efficient method of calculating two-centre exchange integrals with plane-wave electronic translational factors (ETF) over Slater orbitals of real/complex exponents. This method is a significant improvement on the previous two-dimensional quadrature method of the integrals. A new basis set of the form 0953-4075/29/24/005/img1 is proposed to improve the description of pseudo-continuum states in the close-coupling treatment of ion - atom collisions.

  3. Heavy and Heavy-Light Mesons in the Covariant Spectator Theory

    NASA Astrophysics Data System (ADS)

    Stadler, Alfred; Leitão, Sofia; Peña, M. T.; Biernat, Elmar P.

    2018-05-01

The masses and vertex functions of heavy and heavy-light mesons, described as quark-antiquark bound states, are calculated with the Covariant Spectator Theory (CST). We use a kernel with an adjustable mixture of Lorentz scalar, pseudoscalar, and vector linear confining interaction, together with a one-gluon-exchange kernel. A series of fits to the heavy and heavy-light meson spectrum was calculated, and we discuss what conclusions can be drawn from them, especially about the Lorentz structure of the kernel. We also apply the Brodsky-Huang-Lepage prescription to express the CST wave functions for heavy quarkonia in terms of light-front variables. They agree remarkably well with light-front wave functions obtained in the Hamiltonian basis light-front quantization approach, even in excited states.

  4. Guidelines for VCCT-Based Interlaminar Fatigue and Progressive Failure Finite Element Analysis

    NASA Technical Reports Server (NTRS)

Deobald, Lyle R.; Mabson, Gerald E.; Engelstad, Steve; Prabhakar, M.; Gurvich, Mark; Seneviratne, Waruna; Perera, Shenal; O'Brien, T. Kevin; Murri, Gretchen; Ratcliffe, James

    2017-01-01

This document is intended to detail the theoretical basis, equations, references and data that are necessary to enhance the functionality of commercially available Finite Element codes, with the objective of having functionality better suited for the aerospace industry in the area of composite structural analysis. The specific area of focus will be improvements to composite interlaminar fatigue and progressive interlaminar failure. Suggestions are biased towards codes that perform interlaminar Linear Elastic Fracture Mechanics (LEFM) using Virtual Crack Closure Technique (VCCT)-based algorithms [1,2]. Not all aspects of the science associated with composite interlaminar crack growth are fully developed, and the codes developed to predict this mode of failure must be programmed with sufficient flexibility to accommodate new functional relationships as the science matures.

  5. Multivariate functional response regression, with application to fluorescence spectroscopy in a cervical pre-cancer study.

    PubMed

    Zhu, Hongxiao; Morris, Jeffrey S; Wei, Fengrong; Cox, Dennis D

    2017-07-01

Many scientific studies measure different types of high-dimensional signals or images from the same subject, producing multivariate functional data. These functional measurements carry different types of information about the scientific process, and a joint analysis that integrates information across them may provide new insights into the underlying mechanism for the phenomenon under study. Motivated by fluorescence spectroscopy data in a cervical pre-cancer study, a multivariate functional response regression model is proposed, which treats multivariate functional observations as responses and a common set of covariates as predictors. This novel modeling framework simultaneously accounts for correlations between functional variables and potential multi-level structures in data that are induced by experimental design. The model is fitted by performing a two-stage linear transformation: a basis expansion applied to each functional variable, followed by principal component analysis of the concatenated basis coefficients. This transformation effectively reduces the intra- and inter-function correlations and facilitates fast and convenient calculation. A fully Bayesian approach is adopted to sample the model parameters in the transformed space, and posterior inference is performed after inverse-transforming the regression coefficients back to the original data domain. The proposed approach produces functional tests that flag local regions on the functional effects, while controlling the overall experiment-wise error rate or false discovery rate. It also enables functional discriminant analysis through posterior predictive calculation. Analysis of the fluorescence spectroscopy data reveals local regions with differential expressions across the pre-cancer and normal samples. These regions may serve as biomarkers for prognosis and disease assessment.

  6. A Semi-Discrete Landweber-Kaczmarz Method for Cone Beam Tomography and Laminography Exploiting Geometric Prior Information

    NASA Astrophysics Data System (ADS)

    Vogelgesang, Jonas; Schorr, Christian

    2016-12-01

We present a semi-discrete Landweber-Kaczmarz method for solving linear ill-posed problems and its application to Cone Beam tomography and laminography. Using a basis function-type discretization in the image domain, we derive a semi-discrete model of the underlying scanning system. Based on this model, the proposed method provides an approximate solution of the reconstruction problem, i.e. reconstructing the density function of a given object from its projections, in suitable subspaces equipped with basis function-dependent weights. This approach intuitively allows the incorporation of additional information about the inspected object, leading to a more accurate model of the X-rays through the object. Also, physical conditions of the scanning geometry, such as flat detectors in computerized tomography as used in non-destructive testing applications, as well as non-regular scanning curves, e.g. those appearing in computed laminography (CL) applications, are directly taken into account during the modeling process. Finally, numerical experiments of a typical CL application in three dimensions are provided to verify the proposed method. The introduction of geometric prior information leads to significantly increased image quality and superior reconstructions compared to standard iterative methods.
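
    The row-wise update at the heart of Kaczmarz-type methods can be sketched in its fully discrete form: each measurement row defines a hyperplane, and the iterate is projected onto these hyperplanes in cyclic sweeps. A toy consistent system stands in for the scanning model (illustrative, not the semi-discrete method of the paper):

```python
import numpy as np

# Cyclic Kaczmarz sweeps on a small consistent linear system A x = b -- the
# fully discrete analogue of the row-wise updates in Landweber-Kaczmarz
# methods.  The system here is illustrative, not the scanning model.

rng = np.random.default_rng(2)
A = rng.normal(size=(30, 10))
x_true = rng.normal(size=10)
b = A @ x_true                        # consistent right-hand side

x = np.zeros(10)
for sweep in range(200):
    for i in range(A.shape[0]):       # one cycle over the measurement rows
        a = A[i]
        x += (b[i] - a @ x) / (a @ a) * a   # project onto row i's hyperplane

err = np.linalg.norm(x - x_true)
```

    Each projection enforces one measurement exactly; for a consistent system the sweeps converge to a solution, and for noisy tomographic data the iteration is stopped early as a regularization.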

  7. Using EIGER for Antenna Design and Analysis

    NASA Technical Reports Server (NTRS)

    Champagne, Nathan J.; Khayat, Michael; Kennedy, Timothy F.; Fink, Patrick W.

    2007-01-01

EIGER (Electromagnetic Interactions GenERalized) is a frequency-domain electromagnetics software package that is built upon a flexible framework, designed using object-oriented techniques. The analysis methods used include moment method solutions of integral equations, finite element solutions of partial differential equations, and combinations thereof. The framework design permits new analysis techniques (boundary conditions, Green's functions, etc.) to be added to the software suite with a sensible effort. The code has been designed to execute (in serial or parallel) on a wide variety of platforms from Intel-based PCs and Unix-based workstations. Recently, new potential integration schemes that avoid singularity extraction techniques have been added for integral equation analysis. These new integration schemes are required for facilitating the use of higher-order elements and basis functions. Higher-order elements are better able to model geometrical curvature using fewer elements than when using linear elements. Higher-order basis functions are beneficial for simulating structures with rapidly varying fields or currents. Results presented here will demonstrate current and future capabilities of EIGER with respect to analysis of installed antenna system performance in support of NASA's mission of exploration. Examples include antenna coupling within an enclosed environment and antenna analysis on electrically large manned space vehicles.

  8. Amplitude modulation detection by human listeners in sound fields.

    PubMed

    Zahorik, Pavel; Kim, Duck O; Kuwada, Shigeyuki; Anderson, Paul W; Brandewie, Eugene; Srinivasan, Nirmal

    2011-10-01

The temporal modulation transfer function (TMTF) approach allows techniques from linear systems analysis to be used to predict how the auditory system will respond to arbitrary patterns of amplitude modulation (AM). Although this approach forms the basis for a standard method of predicting speech intelligibility based on estimates of the acoustical modulation transfer function (MTF) between source and receiver, human sensitivity to AM as characterized by the TMTF has not been extensively studied under realistic listening conditions, such as in reverberant sound fields. Here, TMTFs (octave bands from 2 to 512 Hz) were obtained in three listening conditions simulated using virtual auditory space techniques: diotic, anechoic sound field, and reverberant room sound field. TMTFs were then related to acoustical MTFs estimated using two different methods in each of the listening conditions. Both diotic and anechoic data were found to be in good agreement with classic results, but AM thresholds in the reverberant room were lower than predictions based on acoustical MTFs. This result suggests that simple linear systems techniques may not be appropriate for predicting TMTFs from acoustical MTFs in reverberant sound fields, and may be suggestive of mechanisms that functionally enhance modulation during reverberant listening.

  9. Dispersion interactions with linear scaling DFT: a study of planar molecules on charged polar surfaces

    NASA Astrophysics Data System (ADS)

    Andrinopoulos, Lampros; Hine, Nicholas; Haynes, Peter; Mostofi, Arash

    2010-03-01

The placement of organic molecules such as CuPc (copper phthalocyanine) on wurtzite ZnO (zinc oxide) charged surfaces has been proposed as a way of creating photovoltaic solar cells [G.D. Sharma et al., Solar Energy Materials & Solar Cells 90, 933 (2006)]; optimising their performance may be aided by computational simulation. Electronic structure calculations provide high accuracy at modest computational cost, but two challenges are encountered for such layered systems. First, the system size is at or beyond the limit of traditional cubic-scaling Density Functional Theory (DFT). Second, traditional exchange-correlation functionals do not account for van der Waals (vdW) interactions, crucial for determining the structure of weakly bonded systems. We present an implementation of recently developed approaches [P.L. Silvestrelli, P.R.L. 100, 102 (2008)] to include vdW in DFT within ONETEP [C.-K. Skylaris, P.D. Haynes, A.A. Mostofi and M.C. Payne, J.C.P. 122, 084119 (2005)], a linear-scaling package for performing DFT calculations using a basis of localised functions. We have applied this methodology to simple planar organic molecules, such as benzene and pentacene, on ZnO surfaces.

  10. Accurate D-bar Reconstructions of Conductivity Images Based on a Method of Moment with Sinc Basis.

    PubMed

    Abbasi, Mahdi

    2014-01-01

The planar D-bar integral equation is one of the inverse scattering solution methods for complex problems, including the inverse conductivity problem considered in applications such as electrical impedance tomography (EIT). Recently, two different methodologies have been considered for the numerical solution of the D-bar integral equation, namely product integrals and multigrid. The first involves a high computational burden and the other suffers from a low convergence rate (CR). In this paper, a novel high-speed moment method based on the sinc basis is introduced to solve the two-dimensional D-bar integral equation. In this method, all functions within the D-bar integral equation are first expanded using the sinc basis functions. Then, the orthogonal properties of their products dissolve the integral operator of the D-bar equation and result in a discrete convolution equation. That is, the new moment method leads to the equation's solution without direct computation of the D-bar integral. The resulting discrete convolution equation may be adapted to a suitable structure to be solved using the fast Fourier transform. This allows us to reduce the order of computational complexity to as low as O(N² log N). Simulation results on solving D-bar equations arising in the EIT problem show that the proposed method is accurate with an ultra-linear CR.
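
    The key computational step above, evaluating a discrete convolution through the FFT, can be sketched in isolation: circular convolution in the index domain becomes a pointwise product of spectra. The signals here are illustrative, not D-bar quantities:

```python
import numpy as np

# Evaluating a discrete (circular) convolution via the FFT: convolution in
# the index domain becomes a pointwise product of spectra.  Signals are
# illustrative, not D-bar quantities.

rng = np.random.default_rng(3)
f = rng.normal(size=64)
g = rng.normal(size=64)

# Direct circular convolution, O(N^2):
direct = np.array([sum(f[j] * g[(n - j) % 64] for j in range(64))
                   for n in range(64)])

# FFT-based evaluation, O(N log N):
fast = np.real(np.fft.ifft(np.fft.fft(f) * np.fft.fft(g)))
```

    Replacing the direct sum by the FFT product is what lets a convolution-structured moment system be applied in O(N log N) per dimension rather than O(N²).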

  11. Accurate potential energy surface for the 1(2)A' state of NH(2): scaling of external correlation versus extrapolation to the complete basis set limit.

    PubMed

    Li, Y Q; Varandas, A J C

    2010-09-16

    An accurate single-sheeted double many-body expansion potential energy surface is reported for the title system, suitable for dynamics and kinetics studies of the reactions N(2D) + H2(X1Sigmag+) → NH(a1Delta) + H(2S) and their isotopomeric variants. It is obtained by fitting ab initio energies calculated at the multireference configuration interaction level with the aug-cc-pVQZ basis set, after slightly correcting the dynamical correlation semiempirically using the double many-body expansion-scaled external correlation method. The function so obtained is compared in detail with a potential energy surface of the same family obtained by extrapolating the calculated raw energies to the complete basis set limit. The topographical features of the novel global potential energy surface are examined in detail and found to be in generally good agreement with those calculated directly from the raw ab initio energies, as well as with previous calculations available in the literature. The novel function has been built so as to become degenerate at linear geometries with the ground-state potential energy surface of A'' symmetry reported by our group, with which it forms a Renner-Teller pair.

  12. How to characterize a nonlinear elastic material? A review on nonlinear constitutive parameters in isotropic finite elasticity

    PubMed Central

    2017-01-01

    The mechanical response of a homogeneous isotropic linearly elastic material can be fully characterized by two physical constants, the Young’s modulus and the Poisson’s ratio, which can be derived by simple tensile experiments. Any other linear elastic parameter can be obtained from these two constants. By contrast, the physical responses of nonlinear elastic materials are generally described by parameters which are scalar functions of the deformation, and their particular choice is not always clear. Here, we review in a unified theoretical framework several nonlinear constitutive parameters, including the stretch modulus, the shear modulus and the Poisson function, that are defined for homogeneous isotropic hyperelastic materials and are measurable under axial or shear experimental tests. These parameters represent changes in the material properties as the deformation progresses, and can be identified with their linear equivalent when the deformations are small. Universal relations between certain of these parameters are further established, and then used to quantify nonlinear elastic responses in several hyperelastic models for rubber, soft tissue and foams. The general parameters identified here can also be viewed as a flexible basis for coupling elastic responses in multi-scale processes, where an open challenge is the transfer of meaningful information between scales. PMID:29225507
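    As a small worked example of a deformation-dependent parameter of the kind reviewed above, the sketch below evaluates a stretch (tangent) modulus for the incompressible neo-Hookean model in uniaxial tension, where the Cauchy stress is T = mu*(lambda^2 - 1/lambda). In the small-deformation limit it recovers 3*mu, the linear Young's modulus of an incompressible solid (E = 2*mu*(1 + nu) with nu = 1/2), consistent with the review's identification of nonlinear parameters with their linear equivalents. The numerical values are illustrative only.

    ```python
    def neo_hookean_uniaxial_stress(stretch, mu):
        """Cauchy stress for an incompressible neo-Hookean solid in uniaxial tension."""
        return mu * (stretch ** 2 - 1.0 / stretch)

    def stretch_modulus(stretch, mu, h=1e-6):
        """Deformation-dependent tangent modulus dT/d(stretch), by central difference."""
        return (neo_hookean_uniaxial_stress(stretch + h, mu)
                - neo_hookean_uniaxial_stress(stretch - h, mu)) / (2.0 * h)

    E_small = stretch_modulus(1.0, mu=1.0)   # small-strain limit: recovers 3*mu
    E_large = stretch_modulus(2.0, mu=1.0)   # stiffening at large stretch
    ```

    The same numerical-derivative recipe applies to any of the constitutive parameters in the review once the corresponding stress-stretch relation is specified.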

  13. Probability, not linear summation, mediates the detection of concentric orientation-defined textures.

    PubMed

    Schmidtmann, Gunnar; Jennings, Ben J; Bell, Jason; Kingdom, Frederick A A

    2015-01-01

    Previous studies investigating signal integration in circular Glass patterns have concluded that the information in these patterns is linearly summed across the entire display for detection. Here we test whether an alternative form of summation, probability summation (PS), modeled under the assumptions of Signal Detection Theory (SDT), can be rejected as a model of Glass pattern detection. PS under SDT alone predicts that the exponent β of the Quick- (or Weibull-) fitted psychometric function should decrease with increasing signal area. We measured spatial integration in circular, radial, spiral, and parallel Glass patterns, as well as comparable patterns composed of Gabors instead of dot pairs. We measured the signal-to-noise ratio required for detection as a function of the size of the area containing signal, with the remaining area containing dot-pair or Gabor-orientation noise. Contrary to some previous studies, we found that the strength of summation never reached values close to linear summation for any stimuli. More importantly, the exponent β systematically decreased with signal area, as predicted by PS under SDT. We applied a model for PS under SDT and found that it gave a good account of the data. We conclude that probability summation is the most likely basis for the detection of circular, radial, spiral, and parallel orientation-defined textures.
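    The effect of signal area on detection under probability summation is easy to make concrete. The sketch below uses the simpler high-threshold formulation of PS (the paper models PS under SDT, which is what additionally predicts the decrease in the exponent β); all parameter values are hypothetical.

    ```python
    import math

    def quick(x, alpha, beta):
        """Quick (Weibull) psychometric function: detection probability at signal level x."""
        return 1.0 - 2.0 ** (-(x / alpha) ** beta)

    def ps_threshold(alpha, beta, n, criterion=0.5):
        """Signal level at which probability summation over n independent signal
        regions, P = 1 - (1 - p)^n, reaches the criterion detection probability."""
        p_single = 1.0 - (1.0 - criterion) ** (1.0 / n)          # required per-region p
        return alpha * (-math.log2(1.0 - p_single)) ** (1.0 / beta)

    # threshold falls as the signal area (number of monitored regions) grows
    thresholds = [ps_threshold(alpha=1.0, beta=3.0, n=n) for n in (1, 4, 16)]
    ```

    Under this high-threshold formulation β is unchanged by summation; the systematic decrease of β with signal area observed in the study is the signature of PS under SDT.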

  14. Climate Intervention as an Optimization Problem

    NASA Astrophysics Data System (ADS)

    Caldeira, Ken; Ban-Weiss, George A.

    2010-05-01

    Typically, climate model simulations of intentional intervention in the climate system have taken the approach of imposing a change (e.g., in solar flux, aerosol concentrations, aerosol emissions) and then predicting how that imposed change might affect Earth's climate or chemistry. Computations proceed from cause to effect. However, humans often proceed from "What do I want?" to "How do I get it?" One approach to thinking about intentional intervention in the climate system ("geoengineering") is to ask "What kind of climate do we want?" and then ask "What pattern of radiative forcing would come closest to achieving that desired climate state?" This involves defining climate goals and a cost function that measures how closely those goals are attained. (An important next step is to ask "How would we go about producing these desired patterns of radiative forcing?" However, this question is beyond the scope of our present study.) We performed a variety of climate simulations in NCAR's CAM3.1 atmospheric general circulation model with a slab ocean model and a thermodynamic sea ice model. We then evaluated, for a specific set of climate forcing basis functions (i.e., aerosol concentration distributions), the extent to which the climate response to a linear combination of those basis functions was similar to a linear combination of the climate responses to each basis function taken individually. We then developed several cost functions (e.g., relative to the 1xCO2 climate, minimize the rms difference in zonal and annual mean land temperature, minimize the rms difference in zonal and annual mean runoff, or minimize the rms difference in a combination of these temperature and runoff indices) and predicted optimal combinations of our basis functions that would minimize these cost functions. Lastly, we produced forward simulations of the predicted optimal radiative forcing patterns and compared these with our expected results. 
Obviously, our climate model is much simpler than reality and predictions from individual models do not provide a sound basis for action; nevertheless, our model results indicate that the general approach outlined here can lead to patterns of radiative forcing that make the zonal annual mean climate of a high-CO2 world markedly more similar to that of a low-CO2 world simultaneously for both temperature and hydrological indices, where the degree of similarity is measured using our explicit cost functions. We restricted ourselves to zonally uniform aerosol concentration distributions that can be defined in terms of a positive-definite quadratic function of the sine of latitude. Under this constraint, applying an aerosol distribution in a 2xCO2 climate that minimized a combination of the rms differences in zonal and annual mean land temperature and runoff relative to the 1xCO2 climate, the rms difference in zonal and annual mean temperatures was reduced by ~90% and the rms difference in zonal and annual mean runoff was reduced by ~80%. This indicates that there may be potential for stratospheric aerosols to diminish simultaneously both the temperature and hydrological cycle changes caused by excess CO2 in the atmosphere. Clearly, our model does not include many factors (e.g., socio-political consequences, chemical consequences, ocean circulation changes, aerosol transport and microphysics), so we do not argue strongly for our specific climate model results; however, we do argue strongly in favor of our methodological approach. The proposed approach is general, in the sense that cost functions can be developed that represent different valuations. While the choice of appropriate cost functions is inherently a value judgment, evaluating those functions for a specific climate simulation is a quantitative exercise. 
Thus, the use of explicit cost functions in evaluating model results for climate intervention scenarios is a clear way of separating value judgments from purely scientific and technical issues.
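    Under the linearity assumption tested in this study, finding the optimal combination of forcing basis functions reduces to ordinary least squares. The sketch below illustrates the idea with hypothetical three-point "zonal mean" response patterns: given the climate response to each aerosol basis distribution, the weights minimizing the rms misfit to a target pattern follow from the normal equations.

    ```python
    def solve(A, b):
        """Gauss-Jordan elimination with partial pivoting, for small dense systems."""
        n = len(A)
        M = [row[:] + [b[i]] for i, row in enumerate(A)]
        for col in range(n):
            piv = max(range(col, n), key=lambda r: abs(M[r][col]))
            M[col], M[piv] = M[piv], M[col]
            for r in range(n):
                if r != col:
                    f = M[r][col] / M[col][col]
                    M[r] = [a - f * p for a, p in zip(M[r], M[col])]
        return [M[i][n] / M[i][i] for i in range(n)]

    # hypothetical zonal-mean climate responses to two aerosol "basis" forcings
    response = [[1.0, 0.2, 0.1],
                [0.3, 0.8, 0.4]]          # response[i][lat]
    target = [0.9, 0.6, 0.3]              # desired change (e.g. offsetting 2xCO2 warming)

    # normal equations: weights minimising the rms misfit of the linear combination
    G = [[sum(ri[k] * rj[k] for k in range(3)) for rj in response] for ri in response]
    rhs = [sum(ri[k] * target[k] for k in range(3)) for ri in response]
    w = solve(G, rhs)
    model = [sum(w[i] * response[i][k] for i in range(2)) for k in range(3)]
    ```

    In the study itself the "responses" are full GCM fields and the cost function may weight temperature and runoff indices together, but the optimisation step has this same least-squares structure.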

  15. Linear indices of the "molecular pseudograph's atom adjacency matrix": definition, significance-interpretation, and application to QSAR analysis of flavone derivatives as HIV-1 integrase inhibitors.

    PubMed

    Marrero-Ponce, Yovani

    2004-01-01

    This report describes a new set of molecular descriptors of relevance to QSAR/QSPR studies and drug design, the atom linear indices fk(xi). These atomic-level chemical descriptors are based on the calculation of linear maps on Rn [fk(xi): Rn --> Rn] in the canonical basis. In this context, the kth power of the molecular pseudograph's atom adjacency matrix [Mk(G)] denotes the matrix of fk(xi) with respect to the canonical basis. In addition, a local-fragment (atom-type) formalism was developed. The kth atom-type linear indices are calculated by summing the kth atom linear indices of all atoms of the same atom type in the molecule. Moreover, total (whole-molecule) linear indices are also proposed. This descriptor is a linear functional (linear form) on Rn. That is, the kth total linear index is a linear map from Rn to the scalar R [fk(x): Rn --> R]. Thus, the kth total linear indices are calculated by summing the atom linear indices of all atoms in the molecule. The features of the kth total and local linear indices are illustrated by examples of various types of molecular structures, including chain-lengthening, branching, heteroatom content, and multiple bonds. Additionally, the linear independence of the local linear indices from other 0D, 1D, 2D, and 3D molecular descriptors is demonstrated by using principal component analysis on 42 very heterogeneous molecules. Considerable redundancy and overlap were found among the total linear indices and most of the other structural indices presently in use in QSPR/QSAR practice. On the contrary, the information carried by the atom-type linear indices was strikingly different from that codified in most of the 229 0D-3D molecular descriptors used in this study. It is concluded that the local linear indices are independent indices containing important structural information, to be used in QSPR/QSAR and drug design studies. 
In this sense, atom, atom-type, and total linear indices were used for the prediction of pIC50 values for the cleavage process of a set of flavone derivative inhibitors of HIV-1 integrase. The quantitative models found are significant from a statistical point of view (R of 0.965, 0.902, and 0.927, respectively) and permit a clear interpretation of the studied properties in terms of the structural features of the molecules. A LOO cross-validation procedure revealed that the regression models had fairly good predictability (q2 of 0.679, 0.543, and 0.721, respectively). Comparison with other approaches reveals the good behavior of the proposed method. The approach described in this paper appears to be an excellent alternative or guide for the discovery and optimization of new lead compounds.
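    The linear-algebraic core of these descriptors is simply powers of the adjacency matrix. The sketch below is a minimal illustration, assuming (as one common convention) that the kth atom linear index of atom i is the ith row sum of M^k; the 4-atom chain graph (e.g., an n-butane carbon skeleton, hydrogens suppressed) is a hypothetical example, and real pseudographs also carry loops and multiple edges for heteroatoms and multiple bonds.

    ```python
    def matmul(A, B):
        n = len(A)
        return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
                for i in range(n)]

    def matrix_power(M, k):
        """kth power of the adjacency matrix by repeated multiplication."""
        n = len(M)
        R = [[1 if i == j else 0 for j in range(n)] for i in range(n)]
        for _ in range(k):
            R = matmul(R, M)
        return R

    def atom_linear_indices(M, k):
        """Assumed convention: atom index of atom i = ith row sum of M^k."""
        return [sum(row) for row in matrix_power(M, k)]

    def total_linear_index(M, k):
        """Total (whole-molecule) index: sum of the atom indices."""
        return sum(atom_linear_indices(M, k))

    # adjacency matrix of a hypothetical 4-atom chain: 0-1-2-3
    M = [[0, 1, 0, 0],
         [1, 0, 1, 0],
         [0, 1, 0, 1],
         [0, 0, 1, 0]]
    ```

    For k = 1 the atom indices are just the vertex degrees; higher k mixes in longer-range connectivity, which is what lets the indices discriminate branching and chain length.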

  16. Linear-scaling time-dependent density-functional theory beyond the Tamm-Dancoff approximation: Obtaining efficiency and accuracy with in situ optimised local orbitals.

    PubMed

    Zuehlsdorff, T J; Hine, N D M; Payne, M C; Haynes, P D

    2015-11-28

    We present a solution of the full time-dependent density-functional theory (TDDFT) eigenvalue equation in the linear response formalism exhibiting a linear-scaling computational complexity with system size, without relying on the simplifying Tamm-Dancoff approximation (TDA). The implementation relies on representing the occupied and unoccupied subspaces with two different sets of in situ optimised localised functions, yielding a very compact and efficient representation of the transition density matrix of the excitation with the accuracy associated with a systematic basis set. The TDDFT eigenvalue equation is solved using a preconditioned conjugate gradient algorithm that is very memory-efficient. The algorithm is validated on a small test molecule and good agreement with results obtained from standard quantum chemistry packages is found, with the preconditioner yielding a significant improvement in convergence rates. The method developed in this work is then used to reproduce experimental results of the absorption spectrum of bacteriochlorophyll in an organic solvent, where it is demonstrated that the TDA fails to reproduce the main features of the low-energy spectrum, while the full TDDFT equation yields results in good qualitative agreement with experimental data. Furthermore, the need to include parts of the solvent explicitly in the TDDFT calculations is highlighted, making it necessary to treat large system sizes, which are well within reach of the capabilities of the algorithm introduced here. Finally, the linear-scaling properties of the algorithm are demonstrated by computing the lowest excitation energy of bacteriochlorophyll in solution. The largest systems considered in this work are of the same order of magnitude as a variety of widely studied pigment-protein complexes, opening up the possibility of studying their properties without having to resort to any semiclassical approximations to parts of the protein environment.

  17. Neural image analysis for estimating aerobic and anaerobic decomposition of organic matter based on the example of straw decomposition

    NASA Astrophysics Data System (ADS)

    Boniecki, P.; Nowakowski, K.; Slosarz, P.; Dach, J.; Pilarski, K.

    2012-04-01

    The purpose of the project was to identify the degree of organic matter decomposition by means of a neural model based on graphical information derived from image analysis. Empirical data (photographs of compost content at various stages of maturation) were used to generate an optimal neural classifier (Boniecki et al. 2009, Nowakowski et al. 2009). The best classification properties were found in an RBF (Radial Basis Function) artificial neural network, which demonstrates that the process is non-linear.
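    For context, the sketch below shows the basic structure of an RBF network of the kind selected in the study (the data and parameters are hypothetical): Gaussian basis functions centred on the training points, with the output layer fitted as a linear combination, so "training" the output weights reduces to solving a linear system.

    ```python
    import math

    def rbf_design(xs, centers, gamma):
        """Design matrix of Gaussian radial basis functions exp(-gamma*(x - c)^2)."""
        return [[math.exp(-gamma * (x - c) ** 2) for c in centers] for x in xs]

    def solve(A, b):
        """Gauss-Jordan elimination with partial pivoting, for small dense systems."""
        n = len(A)
        M = [row[:] + [b[i]] for i, row in enumerate(A)]
        for col in range(n):
            piv = max(range(col, n), key=lambda r: abs(M[r][col]))
            M[col], M[piv] = M[piv], M[col]
            for r in range(n):
                if r != col:
                    f = M[r][col] / M[col][col]
                    M[r] = [a - f * p for a, p in zip(M[r], M[col])]
        return [M[i][n] / M[i][i] for i in range(n)]

    xs = [0.0, 0.5, 1.0]    # hypothetical training inputs (e.g. image features)
    ys = [0.0, 1.0, 0.0]    # hypothetical targets (e.g. decomposition stage)
    w = solve(rbf_design(xs, xs, gamma=1.0), ys)   # one centre per training point

    def rbf_predict(x, centers=xs, weights=w, gamma=1.0):
        """Network output: linear combination of the radial basis functions."""
        return sum(wi * math.exp(-gamma * (x - c) ** 2) for wi, c in zip(weights, centers))
    ```

    The non-linearity the authors report lives entirely in the Gaussian basis layer; the trainable output layer stays linear, which is what makes RBF networks cheap to fit.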

  18. Finite element analysis of periodic transonic flow problems

    NASA Technical Reports Server (NTRS)

    Fix, G. J.

    1978-01-01

    Flow about an oscillating thin airfoil in a transonic stream was considered. It was assumed that the flow field can be decomposed into a mean flow plus a periodic perturbation. On the surface of the airfoil the usual Neumann conditions are imposed. Two computer programs were written, both using linear basis functions over triangles for the finite element space. The first program uses a banded Gaussian elimination solver to solve the matrix problem, while the second uses an iterative technique, namely SOR. The only results obtained are for an oscillating flat plate.
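    For context, a minimal sketch of the core finite-element ingredient mentioned above: linear (P1) basis functions over triangles. The code below assembles the local stiffness matrix of the Laplacian on a single triangle — a textbook construction, not the paper's transonic small-disturbance operator — using the constant gradients of the three barycentric basis functions.

    ```python
    def p1_local_stiffness(v):
        """Local stiffness matrix for the Laplacian with linear (P1) basis functions
        on one triangle; v is a list of three (x, y) vertex coordinates."""
        (x1, y1), (x2, y2), (x3, y3) = v
        area2 = (x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)   # twice the signed area
        area = abs(area2) / 2.0
        # gradients of the barycentric basis functions are constant on the triangle:
        # grad(phi_i) = (b_i, c_i) / (2*area)
        b = [y2 - y3, y3 - y1, y1 - y2]
        c = [x3 - x2, x1 - x3, x2 - x1]
        return [[(b[i] * b[j] + c[i] * c[j]) / (4.0 * area) for j in range(3)]
                for i in range(3)]

    K = p1_local_stiffness([(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)])  # unit right triangle
    ```

    Each row of the local matrix sums to zero because the P1 basis functions form a partition of unity; the global system is assembled by summing these local blocks over the mesh.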

  19. The Boundary Function Method. Fundamentals

    NASA Astrophysics Data System (ADS)

    Kot, V. A.

    2017-03-01

    The boundary function method is proposed for solving applied problems of mathematical physics in the region defined by a partial differential equation of the general form involving constant or variable coefficients with a Dirichlet, Neumann, or Robin boundary condition. In this method, the desired function is defined by a power polynomial, and a boundary function represented in the form of the desired function or its derivative at one of the boundary points is introduced. Different sequences of boundary equations have been set up with the use of differential operators. Systems of linear algebraic equations constructed on the basis of these sequences allow one to determine the coefficients of a power polynomial. Constitutive equations have been derived for initial boundary-value problems of all the main types. With these equations, an initial boundary-value problem is transformed into the Cauchy problem for the boundary function. The determination of the boundary function by its derivative with respect to the time coordinate completes the solution of the problem.

  20. Accelerating molecular property calculations with nonorthonormal Krylov space methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Furche, Filipp; Krull, Brandon T.; Nguyen, Brian D.

    Here, we formulate Krylov space methods for large eigenvalue problems and linear equation systems that take advantage of decreasing residual norms to reduce the cost of matrix-vector multiplication. The residuals are used as subspace basis without prior orthonormalization, which leads to generalized eigenvalue problems or linear equation systems on the Krylov space. These nonorthonormal Krylov space (nKs) algorithms are favorable for large matrices with irregular sparsity patterns whose elements are computed on the fly, because fewer operations are necessary as the residual norm decreases as compared to the conventional method, while errors in the desired eigenpairs and solution vectors remain small. We consider real symmetric and symplectic eigenvalue problems as well as linear equation systems and Sylvester equations as they appear in configuration interaction and response theory. The nKs method can be implemented in existing electronic structure codes with minor modifications and yields speed-ups of 1.2-1.8 in typical time-dependent Hartree-Fock and density functional applications without accuracy loss. The algorithm can compute entire linear subspaces simultaneously which benefits electronic spectra and force constant calculations requiring many eigenpairs or solution vectors. The nKs approach is related to difference density methods in electronic ground state calculations, and particularly efficient for integral direct computations of exchange-type contractions. By combination with resolution-of-the-identity methods for Coulomb contractions, three- to fivefold speed-ups of hybrid time-dependent density functional excited state and response calculations are achieved.
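    A minimal sketch of the projection idea behind such Krylov space methods (the paper's nKs algorithms add residual-driven basis updates, symplectic eigenproblems, and on-the-fly matrix elements, all omitted here): the system A x = b is projected onto a Krylov basis that is deliberately not orthonormalized, and the resulting small projected system is solved directly. The 3x3 matrix is hypothetical.

    ```python
    def solve(A, b):
        """Gauss-Jordan elimination with partial pivoting, for small dense systems."""
        n = len(A)
        M = [row[:] + [b[i]] for i, row in enumerate(A)]
        for col in range(n):
            piv = max(range(col, n), key=lambda r: abs(M[r][col]))
            M[col], M[piv] = M[piv], M[col]
            for r in range(n):
                if r != col:
                    f = M[r][col] / M[col][col]
                    M[r] = [a - f * p for a, p in zip(M[r], M[col])]
        return [M[i][n] / M[i][i] for i in range(n)]

    def matvec(A, x):
        return [sum(a * xi for a, xi in zip(row, x)) for row in A]

    def krylov_solve(A, b, m):
        """Solve A x = b projected onto the nonorthonormalized Krylov basis
        {b, A b, ..., A^(m-1) b}; no Gram-Schmidt step is performed."""
        V = [b[:]]
        for _ in range(m - 1):
            V.append(matvec(A, V[-1]))
        AV = [matvec(A, v) for v in V]
        # small projected system (V^T A V) y = V^T b on the Krylov space
        G = [[sum(vi[k] * avj[k] for k in range(len(b))) for avj in AV] for vi in V]
        rhs = [sum(vi_k * b_k for vi_k, b_k in zip(vi, b)) for vi in V]
        y = solve(G, rhs)
        return [sum(y[j] * V[j][i] for j in range(m)) for i in range(len(b))]

    A = [[4.0, 1.0, 0.0],
         [1.0, 3.0, 1.0],
         [0.0, 1.0, 2.0]]   # hypothetical symmetric matrix
    b = [1.0, 0.0, 0.0]
    x = krylov_solve(A, b, 3)   # m = n, so the projection is exact here
    ```

    Skipping the orthonormalization is what turns the projected problem into a generalized (nonorthogonal-basis) one, at the benefit of fewer operations per iteration.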

  1. Accelerating molecular property calculations with nonorthonormal Krylov space methods

    DOE PAGES

    Furche, Filipp; Krull, Brandon T.; Nguyen, Brian D.; ...

    2016-05-03

    Here, we formulate Krylov space methods for large eigenvalue problems and linear equation systems that take advantage of decreasing residual norms to reduce the cost of matrix-vector multiplication. The residuals are used as subspace basis without prior orthonormalization, which leads to generalized eigenvalue problems or linear equation systems on the Krylov space. These nonorthonormal Krylov space (nKs) algorithms are favorable for large matrices with irregular sparsity patterns whose elements are computed on the fly, because fewer operations are necessary as the residual norm decreases as compared to the conventional method, while errors in the desired eigenpairs and solution vectors remain small. We consider real symmetric and symplectic eigenvalue problems as well as linear equation systems and Sylvester equations as they appear in configuration interaction and response theory. The nKs method can be implemented in existing electronic structure codes with minor modifications and yields speed-ups of 1.2-1.8 in typical time-dependent Hartree-Fock and density functional applications without accuracy loss. The algorithm can compute entire linear subspaces simultaneously which benefits electronic spectra and force constant calculations requiring many eigenpairs or solution vectors. The nKs approach is related to difference density methods in electronic ground state calculations, and particularly efficient for integral direct computations of exchange-type contractions. By combination with resolution-of-the-identity methods for Coulomb contractions, three- to fivefold speed-ups of hybrid time-dependent density functional excited state and response calculations are achieved.

  2. Cocaine dependence and thalamic functional connectivity: a multivariate pattern analysis.

    PubMed

    Zhang, Sheng; Hu, Sien; Sinha, Rajita; Potenza, Marc N; Malison, Robert T; Li, Chiang-Shan R

    2016-01-01

    Cocaine dependence is associated with deficits in cognitive control. Previous studies demonstrated that chronic cocaine use affects the activity and functional connectivity of the thalamus, a subcortical structure critical for cognitive functioning. However, the thalamus contains nuclei heterogeneous in functions, and it is not known how thalamic subregions contribute to cognitive dysfunctions in cocaine dependence. To address this issue, we used multivariate pattern analysis (MVPA) to examine how functional connectivity of the thalamus distinguishes 100 cocaine-dependent participants (CD) from 100 demographically matched healthy control individuals (HC). We characterized six task-related networks with independent component analysis of fMRI data of a stop signal task and employed MVPA to distinguish CD from HC on the basis of voxel-wise thalamic connectivity to the six independent components. In an unbiased model of distinct training and testing data, the analysis correctly classified 72% of subjects with leave-one-out cross-validation (p < 0.001), superior to comparison brain regions with similar voxel counts (p < 0.004, two-sample t test). Thalamic voxels that form the basis of classification aggregate in distinct subclusters, suggesting that connectivities of thalamic subnuclei distinguish CD from HC. Further, linear regressions provided suggestive evidence for a correlation of the thalamic connectivities with clinical variables and performance measures on the stop signal task. Together, these findings support thalamic circuit dysfunction in cognitive control as an important neural marker of cocaine dependence.
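    A minimal sketch of the leave-one-out protocol behind the unbiased accuracy estimate reported above, with a toy nearest-centroid classifier standing in for the study's MVPA classifier and hypothetical one-dimensional "connectivity" features:

    ```python
    def nearest_centroid_predict(train_X, train_y, x):
        """Assign x to the class whose training-feature centroid is closest."""
        best, best_dist = None, float("inf")
        for lab in sorted(set(train_y)):
            pts = [xi for xi, yi in zip(train_X, train_y) if yi == lab]
            centroid = [sum(col) / len(pts) for col in zip(*pts)]
            dist = sum((a - b) ** 2 for a, b in zip(x, centroid))
            if dist < best_dist:
                best, best_dist = lab, dist
        return best

    def loo_accuracy(X, y):
        """Leave-one-out cross-validation: train on all-but-one, test the held-out sample."""
        correct = 0
        for i in range(len(X)):
            X_tr = X[:i] + X[i + 1:]
            y_tr = y[:i] + y[i + 1:]
            correct += nearest_centroid_predict(X_tr, y_tr, X[i]) == y[i]
        return correct / len(X)

    X = [[0.0], [0.1], [0.2], [1.0], [1.1], [1.2]]   # hypothetical features per subject
    y = [0, 0, 0, 1, 1, 1]                            # hypothetical group labels (CD vs HC)
    acc = loo_accuracy(X, y)
    ```

    Because each test subject is excluded from training, the accuracy estimate is not inflated by fitting and evaluating on the same data.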

  3. Modified Chebyshev Picard Iteration for Efficient Numerical Integration of Ordinary Differential Equations

    NASA Astrophysics Data System (ADS)

    Macomber, B.; Woollands, R. M.; Probe, A.; Younes, A.; Bai, X.; Junkins, J.

    2013-09-01

    Modified Chebyshev Picard Iteration (MCPI) is an iterative numerical method for approximating solutions of linear or non-linear Ordinary Differential Equations (ODEs) to obtain time histories of system state trajectories. Unlike step-by-step differential equation solvers, such as the Runge-Kutta family of numerical integrators, MCPI approximates long arcs of the state trajectory with an iterative path approximation approach and is ideally suited to parallel computation. Orthogonal Chebyshev polynomials are used as basis functions during each path iteration; the integrations of the Picard iteration are then done analytically. Due to the orthogonality of the Chebyshev basis functions, the least-squares approximations are computed without matrix inversion; the coefficients are computed robustly from discrete inner products. As a consequence of the discrete sampling and weighting adopted for the inner product definition, Runge phenomenon errors are minimized near the ends of the approximation intervals. The MCPI algorithm utilizes a vector-matrix framework for computational efficiency. Additionally, all Chebyshev coefficients and integrand function evaluations are independent, meaning they can be computed simultaneously in parallel for further decreased computational cost. Over an order of magnitude speedup over traditional methods is achieved in serial processing, and an additional order of magnitude is achievable in parallel architectures. This paper presents a new MCPI library, a modular toolset designed to allow MCPI to be easily applied to a wide variety of ODE systems. Library users will not have to concern themselves with the underlying mathematics behind the MCPI method. Inputs are the boundary conditions of the dynamical system, the integrand function governing system behavior, and the desired time interval of integration; the output is a time history of the system states over the interval of interest. 
Examples from the field of astrodynamics are presented to compare the output from the MCPI library to current state-of-practice numerical integration methods. It is shown that MCPI is capable of out-performing the state-of-practice in terms of computational cost and accuracy.
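    The key mechanics — representing each Picard iterate in a polynomial basis so the integral is evaluated analytically on the coefficients — can be sketched in a few lines. The example below applies it to x' = -x, x(0) = 1 on [0, 1]. Note that MCPI proper uses the orthogonal Chebyshev basis with discrete inner products (which is what suppresses Runge errors and enables the parallelism described above); a plain monomial basis is used here only for brevity.

    ```python
    import math

    def picard_poly(x0, iters=20):
        """Picard iteration for x' = -x, x(0) = x0, with each iterate stored as
        polynomial coefficients so the integration step x0 + int(-x dt) is done
        analytically on the coefficients, as in MCPI."""
        coeffs = [x0]                                   # initial guess: x(t) = x0
        for _ in range(iters):
            # antiderivative of -sum(c_k t^k) is -sum(c_k t^(k+1)/(k+1))
            coeffs = [x0] + [-c / (k + 1) for k, c in enumerate(coeffs)]
        return coeffs

    def polyval(coeffs, t):
        return sum(c * t ** k for k, c in enumerate(coeffs))

    coeffs = picard_poly(1.0)            # converges to the Taylor series of exp(-t)
    x_end = polyval(coeffs, 1.0)         # approximate solution at t = 1
    ```

    Each Picard sweep integrates the whole arc at once rather than stepping; swapping the monomial coefficients for Chebyshev coefficients fitted at Chebyshev nodes gives the well-conditioned long-arc behavior the abstract describes.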

  4. Development of many-body polarizable force fields for Li-battery components: 1. Ether, alkane, and carbonate-based solvents.

    PubMed

    Borodin, Oleg; Smith, Grant D

    2006-03-30

    Classical many-body polarizable force fields were developed for n-alkanes, perfluoroalkanes, polyethers, ketones, and linear and cyclic carbonates on the basis of quantum chemistry dimer energies of model compounds and empirical thermodynamic liquid-state properties. The dependence of the electron correlation contribution to the dimer binding energy on basis-set size and level of theory was investigated as a function of molecular separation for a number of alkane, ether, and ketone dimers. Molecular dynamics (MD) simulations using the force fields accurately predicted structural, dynamic, and transport properties of liquids and unentangled polymer melts. On average, gas-phase dimer binding energies predicted with the force field were between those from MP2/aug-cc-pvDz and MP2/aug-cc-pvTz quantum chemistry calculations.

  5. Mathematical modeling of aeroelastic systems

    NASA Astrophysics Data System (ADS)

    Velmisov, Petr A.; Ankilov, Andrey V.; Semenova, Elizaveta P.

    2017-12-01

    In the paper, the stability of elastic elements of a class of designs that interact with a gas or liquid flow is investigated. The definition of the stability of an elastic body corresponds to the Lyapunov concept of stability of dynamical systems. As examples, mathematical models of flowing channels (models of vibration devices) in a subsonic flow and mathematical models of a protective surface in a supersonic flow are considered. The models are described by coupled systems of partial differential equations. An analytic investigation of stability is carried out on the basis of the construction of Lyapunov-type functionals; a numerical investigation is carried out on the basis of the Galerkin method. Various models of the gas-liquid medium (compressible, incompressible) and various models of a deformable body (linearly elastic and nonlinearly elastic) are considered.

  6. A Prototype SSVEP Based Real Time BCI Gaming System

    PubMed Central

    Martišius, Ignas

    2016-01-01

    Although brain-computer interface technology is mainly designed with disabled people in mind, it can also be beneficial to healthy subjects, for example, in gaming or virtual reality systems. In this paper we discuss the typical architecture, paradigms, requirements, and limitations of electroencephalogram-based gaming systems. We have developed a prototype three-class brain-computer interface system, based on the steady state visually evoked potentials paradigm and the Emotiv EPOC headset. An online target shooting game, implemented in the OpenViBE environment, has been used for user feedback. The system utilizes wave atom transform for feature extraction, achieving an average accuracy of 78.2% using linear discriminant analysis classifier, 79.3% using support vector machine classifier with a linear kernel, and 80.5% using a support vector machine classifier with a radial basis function kernel. PMID:27051414

  7. A Prototype SSVEP Based Real Time BCI Gaming System.

    PubMed

    Martišius, Ignas; Damaševičius, Robertas

    2016-01-01

    Although brain-computer interface technology is mainly designed with disabled people in mind, it can also be beneficial to healthy subjects, for example, in gaming or virtual reality systems. In this paper we discuss the typical architecture, paradigms, requirements, and limitations of electroencephalogram-based gaming systems. We have developed a prototype three-class brain-computer interface system, based on the steady state visually evoked potentials paradigm and the Emotiv EPOC headset. An online target shooting game, implemented in the OpenViBE environment, has been used for user feedback. The system utilizes wave atom transform for feature extraction, achieving an average accuracy of 78.2% using linear discriminant analysis classifier, 79.3% using support vector machine classifier with a linear kernel, and 80.5% using a support vector machine classifier with a radial basis function kernel.

  8. Flat bases of invariant polynomials and P-matrices of E{sub 7} and E{sub 8}

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Talamini, Vittorino

    2010-02-15

    Let G be a compact group of linear transformations of a Euclidean space V. The G-invariant C{sup {infinity}} functions can be expressed as C{sup {infinity}} functions of a finite basic set of G-invariant homogeneous polynomials, sometimes called an integrity basis. The mathematical description of the orbit space V/G depends on the integrity basis too: it is realized through polynomial equations and inequalities expressing rank and positive semidefiniteness conditions of the P-matrix, a real symmetric matrix determined by the integrity basis. The choice of the basic set of G-invariant homogeneous polynomials forming an integrity basis is not unique, so the mathematical description of the orbit space is not unique either. If G is an irreducible finite reflection group, Saito et al. [Commun. Algebra 8, 373 (1980)] characterized some special basic sets of G-invariant homogeneous polynomials that they called flat. They also found explicitly the flat basic sets of invariant homogeneous polynomials of all the irreducible finite reflection groups except the two largest groups E{sub 7} and E{sub 8}. In this paper the flat basic sets of invariant homogeneous polynomials of E{sub 7} and E{sub 8} and the corresponding P-matrices are determined explicitly. Using the results reported here, one is able to determine easily the P-matrices corresponding to any other integrity basis of E{sub 7} or E{sub 8}. From the P-matrices one may then write down the equations and inequalities defining the orbit spaces of E{sub 7} and E{sub 8} relative to a flat basis or to any other integrity basis. The results obtained here may be employed concretely to study analytically the symmetry breaking in all theories where the symmetry group is one of the finite reflection groups E{sub 7} and E{sub 8} or one of the Lie groups E{sub 7} and E{sub 8} in their adjoint representations.

  9. The two-electron atomic systems. S-states

    NASA Astrophysics Data System (ADS)

    Liverts, Evgeny Z.; Barnea, Nir

    2010-01-01

    A simple Mathematica program for computing the S-state energies and wave functions of two-electron (helium-like) atoms (ions) is presented. The well-known method of projecting the Schrödinger equation onto the finite subspace of basis functions was applied. The basis functions are composed of the exponentials combined with integer powers of the simplest perimetric coordinates. No special subroutines were used, only built-in objects supported by Mathematica. The accuracy of results and computation time depend on the basis size. The precise energy values of 7-8 significant figures along with the corresponding wave functions can be computed on a single processor within a few minutes. The resultant wave functions have a simple analytical form consisting of elementary functions, that enables one to calculate the expectation values of arbitrary physical operators without any difficulties. Program summaryProgram title: TwoElAtom-S Catalogue identifier: AEFK_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEFK_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 10 185 No. of bytes in distributed program, including test data, etc.: 495 164 Distribution format: tar.gz Programming language: Mathematica 6.0; 7.0 Computer: Any PC Operating system: Any which supports Mathematica; tested under Microsoft Windows XP and Linux SUSE 11.0 RAM:⩾10 bytes Classification: 2.1, 2.2, 2.7, 2.9 Nature of problem: The Schrödinger equation for atoms (ions) with more than one electron has not been solved analytically. Approximate methods must be applied in order to obtain the wave functions or other physical attributes from quantum mechanical calculations. Solution method: The S-wave function is expanded into a triple basis set in three perimetric coordinates. 
Method of projecting the two-electron Schrödinger equation (for atoms/ions) onto a subspace of the basis functions enables one to obtain the set of homogeneous linear equations F.C=0 for the coefficients C of the above expansion. The roots of equation det(F)=0 yield the bound energies. Restrictions: First, an overly large expansion length (basis size) leads to excessively long computation times while giving no perceptible improvement in accuracy. Second, the order of polynomial Ω (input parameter) in the wave function expansion enables one to calculate the excited nS-states up to n=Ω+1 inclusive. Additional comments: The CPC Program Library includes "A program to calculate the eigenfunctions of the random phase approximation for two electron systems" (AAJD). It should be emphasized that this Fortran code implements a very rough approximation describing only the averaged electron density of the two-electron systems. It does not characterize the properties of the individual electrons and has a number of input parameters including the Roothaan orbitals. Running time: ˜10 minutes (depends on basis size and computer speed)
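The projection scheme summarized above (homogeneous equations F·C = 0, bound energies at the roots of det(F − E·S) = 0) is a generalized matrix eigenvalue problem. A minimal NumPy/SciPy sketch on a toy one-electron radial model (not the program's actual two-electron perimetric basis; the grid, basis size, and basis form are illustrative assumptions):

```python
import numpy as np
from scipy.linalg import eigh

# Toy 1-D radial hydrogen model with boundary condition u(0) = 0:
#   H u = -(1/2) u'' - u/r   (atomic units)
# Basis: u_m(r) = r^m exp(-r), m = 1..4; the m = 1 member is the exact ground state.
r = np.linspace(1e-6, 40.0, 20000)
ms = np.arange(1, 5)
f = np.array([r**m * np.exp(-r) for m in ms])
# H applied to each basis function analytically (no finite differences needed)
Hf = np.array([-0.5 * (m*(m-1)*r**(m-2.0) - 2*m*r**(m-1.0) + r**m) * np.exp(-r)
               - r**(m-1.0) * np.exp(-r) for m in ms])
S = np.array([[np.trapz(fi * fj, r) for fj in f] for fi in f])    # overlap matrix
F = np.array([[np.trapz(fi * hj, r) for hj in Hf] for fi in f])   # projected Hamiltonian
F = 0.5 * (F + F.T)                # symmetrize away small quadrature asymmetry
E = eigh(F, S, eigvals_only=True)  # generalized problem: det(F - E*S) = 0
print(E[0])                        # lowest root ≈ -0.5 hartree (exact ground state)
```

Because the exact ground state lies in the span of the basis, the lowest generalized eigenvalue reproduces it to quadrature accuracy, mirroring how enlarging the perimetric basis sharpens the energies in the record above.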

  10. Application of ab initio many-body perturbation theory with Gaussian basis sets to the singlet and triplet excitations of organic molecules

    NASA Astrophysics Data System (ADS)

    Hamed, Samia; Rangel, Tonatiuh; Bruneval, Fabien; Neaton, Jeffrey B.

    Quantitative understanding of charged and neutral excitations of organic molecules is critical in diverse areas of study that include astrophysics and the development of clean, efficient energy technologies. The recent use of local basis sets with ab initio many-body perturbation theory in the GW approximation and the Bethe-Salpeter equation (BSE) approach, methods traditionally applied to periodic condensed phases with a plane-wave basis, has opened the door to detailed study of such excitations for molecules, as well as accurate numerical benchmarks. Here, through a series of systematic benchmarks with a Gaussian basis, we report on the extent to which the predictive power and utility of this approach depend critically on interdependent underlying approximations and choices for molecules, including the mean-field starting point (e.g., optimally tuned range-separated hybrids, pure DFT functionals, and untuned hybrids), the GW scheme, and the Tamm-Dancoff approximation. We demonstrate the effects of these choices in the context of Thiel's set while drawing analogies to linear-response time-dependent DFT and making comparisons to best theoretical estimates from higher-order wavefunction-based theories.

  11. Feature extraction across individual time series observations with spikes using wavelet principal component analysis.

    PubMed

    Røislien, Jo; Winje, Brita

    2013-09-20

    Clinical studies frequently include repeated measurements of individuals, often for long periods. We present a methodology for extracting common temporal features across a set of individual time series observations. In particular, the methodology explores extreme observations within the time series, such as spikes, as a possible common temporal phenomenon. Wavelet basis functions are attractive in this sense, as they are localized in both time and frequency domains simultaneously, allowing for localized feature extraction from a time-varying signal. We apply wavelet basis function decomposition of individual time series, with corresponding wavelet shrinkage to remove noise. We then extract common temporal features using linear principal component analysis on the wavelet coefficients, before inverse transformation back to the time domain for clinical interpretation. We demonstrate the methodology on a subset of a large fetal activity study aiming to identify temporal patterns in fetal movement (FM) count data in order to explore formal FM counting as a screening tool for identifying fetal compromise and thus preventing adverse birth outcomes. Copyright © 2013 John Wiley & Sons, Ltd.
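The pipeline described above (wavelet decomposition per individual, shrinkage, then linear PCA on the coefficients) can be sketched with an orthonormal Haar transform in plain NumPy; the signal length, spike shape, noise level, and threshold are all illustrative assumptions, not values from the study:

```python
import numpy as np

def haar_decompose(x, levels):
    """Orthonormal Haar wavelet decomposition of a signal of length 2**levels."""
    coeffs, a = [], x.astype(float)
    for _ in range(levels):
        a, d = (a[0::2] + a[1::2]) / np.sqrt(2), (a[0::2] - a[1::2]) / np.sqrt(2)
        coeffs.append(d)                     # detail coefficients at this scale
    coeffs.append(a)                         # final approximation coefficient
    return np.concatenate(coeffs[::-1])

rng = np.random.default_rng(0)
n_subjects, T = 30, 64
t = np.arange(T)
spike = np.exp(-0.5 * ((t - 20) / 1.5)**2)   # shared spike-like temporal feature
X = np.array([s * spike + 0.1 * rng.standard_normal(T)
              for s in rng.uniform(0.5, 2.0, n_subjects)])

W = np.array([haar_decompose(x, 6) for x in X])   # per-subject wavelet coefficients
W[np.abs(W) < 0.2] = 0.0                          # hard-threshold shrinkage (denoise)
_, _, Vt = np.linalg.svd(W - W.mean(axis=0), full_matrices=False)  # PCA via SVD
pc1 = Vt[0]    # leading common feature, expressed in the wavelet domain
```

An inverse Haar transform of `pc1` would map the extracted component back to the time domain for clinical interpretation, as the record describes.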

  12. Singular value decomposition: a diagnostic tool for ill-posed inverse problems in optical computed tomography

    NASA Astrophysics Data System (ADS)

    Lanen, Theo A.; Watt, David W.

    1995-10-01

    Singular value decomposition has served as a diagnostic tool in optical computed tomography through its capability to provide insight into the condition of ill-posed inverse problems. Various tomographic geometries are compared to one another through the singular value spectrum of their weight matrices. The number of significant singular values in the singular value spectrum of a weight matrix is a quantitative measure of the condition of the system of linear equations defined by a tomographic geometry. The analysis involves variation of the following five parameters characterizing a tomographic geometry: 1) the spatial resolution of the reconstruction domain, 2) the number of views, 3) the number of projection rays per view, 4) the total observation angle spanned by the views, and 5) the selected basis function. Five local basis functions are considered: the square pulse, the triangle, the cubic B-spline, the Hanning window, and the Gaussian distribution. The effects of noise in the views, the coding accuracy of the weight matrix, and the accuracy of the singular value decomposition procedure itself are also assessed.
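Counting significant singular values of a weight matrix, the diagnostic used above, is a one-liner once a significance cutoff is chosen; the synthetic rank-deficient matrix below is an illustrative stand-in for a limited-angle geometry, not data from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical weight matrix of a poorly conditioned tomographic geometry:
# few independent view patterns, so rows are near-linear combinations of each other.
n_rays, n_pixels = 40, 100
views = rng.standard_normal((8, n_pixels))     # 8 independent view patterns
A = rng.standard_normal((n_rays, 8)) @ views   # rank-8 weight matrix
A += 1e-8 * rng.standard_normal(A.shape)       # numerical noise floor

s = np.linalg.svd(A, compute_uv=False)         # singular value spectrum
tol = s[0] * 1e-6                              # relative significance cutoff (assumed)
n_significant = int(np.sum(s > tol))
print(n_significant)                           # 8: independently resolvable components
```

The gap between the significant singular values and the noise floor is exactly what distinguishes a well-posed geometry from an ill-posed one in this diagnostic.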

  13. A Risk Management Method for the Operation of a Supply-Chain without Storage:

    NASA Astrophysics Data System (ADS)

    Kobayashi, Yasuhiro; Manabe, Yuuji; Nakata, Norimasa; Kusaka, Satoshi

    A business risk management method has been developed for a supply-chain without a storage function under demand uncertainty. Power supply players in the deregulated power market face the need to develop the best policies for power supply from self-production and reserved purchases to balance demand, which is predictable with error. The proposed method maximizes profit from the operation of the supply-chain under probabilistic demand uncertainty on the basis of a probabilistic programming approach. Piece-wise linear functions are employed to formulate the impact of under-booked or over-booked purchases on the supply cost, and constraints on over-demand probability are introduced to limit over-demand frequency on the basis of the demand probability distribution. The developed method has been experimentally applied to the supply policy of a power-supply-chain, the operation of which is based on a 3-stage pricing purchase contract and on 28 time zones. The characteristics of the obtained optimal supply policy are successfully captured in the numerical results, which suggest the applicability of the proposed method.
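The piecewise-linear under/over-booking costs described above can be expressed in a scenario-based linear program by introducing nonnegative shortfall and surplus variables per demand scenario; all prices, scenarios, and probabilities below are illustrative assumptions, not the paper's 3-stage contract data:

```python
from scipy.optimize import linprog

# Choose reserved purchase q to minimize purchase cost plus expected piecewise
# linear penalties for under-booking (shortfall) and over-booking (surplus).
demand = [90, 100, 120]            # demand scenarios (assumed)
prob = [0.3, 0.5, 0.2]             # scenario probabilities (assumed)
price, short_pen, surplus_pen = 1.0, 3.0, 0.5

# Variables x = [q, u1, u2, u3, o1, o2, o3], with u_s >= d_s - q, o_s >= q - d_s.
c = [price] + [p * short_pen for p in prob] + [p * surplus_pen for p in prob]
A_ub, b_ub = [], []
for s, d in enumerate(demand):
    row = [0.0] * 7; row[0] = -1.0; row[1 + s] = -1.0   # -q - u_s <= -d_s
    A_ub.append(row); b_ub.append(-d)
    row = [0.0] * 7; row[0] = 1.0; row[4 + s] = -1.0    #  q - o_s <=  d_s
    A_ub.append(row); b_ub.append(d)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 7)
print(res.x[0])   # optimal reserved quantity (here 100, the median-demand scenario)
```

Because the penalties are minimized, the auxiliary variables settle at max(d_s − q, 0) and max(q − d_s, 0) at the optimum, which is what makes the piecewise-linear cost expressible as an LP.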

  14. Indirect Validation of Probe Speed Data on Arterial Corridors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eshragh, Sepideh; Young, Stanley E.; Sharifi, Elham

    This study aimed to estimate the accuracy of probe speed data on arterial corridors on the basis of roadway geometric attributes and functional classification. It was assumed that functional class (medium and low) along with other road characteristics (such as weighted average of the annual average daily traffic, average signal density, average access point density, and average speed) were available as correlation factors to estimate the accuracy of probe traffic data. This study tested these factors as predictors of the fidelity of probe traffic data by using the results of an extensive validation exercise. This study showed strong correlations between these geometric attributes and the accuracy of probe data when they were assessed by using average absolute speed error. Linear models were regressed to existing data to estimate appropriate models for medium- and low-type arterial corridors. The proposed models for medium- and low-type arterials were validated further on the basis of the results of a slowdown analysis. These models can be used to predict the accuracy of probe data indirectly in medium and low types of arterial corridors.

  15. Spatial Bayesian Latent Factor Regression Modeling of Coordinate-based Meta-analysis Data

    PubMed Central

    Montagna, Silvia; Wager, Tor; Barrett, Lisa Feldman; Johnson, Timothy D.; Nichols, Thomas E.

    2017-01-01

    Summary Now over 20 years old, functional MRI (fMRI) has a large and growing literature that is best synthesised with meta-analytic tools. As most authors do not share image data, only the peak activation coordinates (foci) reported in the paper are available for Coordinate-Based Meta-Analysis (CBMA). Neuroimaging meta-analysis is used to 1) identify areas of consistent activation; and 2) build a predictive model of task type or cognitive process for new studies (reverse inference). To simultaneously address these aims, we propose a Bayesian point process hierarchical model for CBMA. We model the foci from each study as a doubly stochastic Poisson process, where the study-specific log intensity function is characterised as a linear combination of a high-dimensional basis set. A sparse representation of the intensities is guaranteed through latent factor modeling of the basis coefficients. Within our framework, it is also possible to account for the effect of study-level covariates (meta-regression), significantly expanding the capabilities of the current neuroimaging meta-analysis methods available. We apply our methodology to synthetic data and neuroimaging meta-analysis datasets. PMID:28498564
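A study-specific log intensity built from a basis set, as in the model above, defines a Cox (doubly stochastic Poisson) process. A minimal 1-D simulation via Lewis-Shedler thinning, with an assumed Gaussian-bump basis and sparse coefficients standing in for the latent-factor representation:

```python
import numpy as np

rng = np.random.default_rng(5)
# Log intensity = linear combination of Gaussian bumps (illustrative basis).
centers = np.linspace(0.1, 0.9, 5)
beta = np.array([2.0, 0.0, 1.5, 0.0, 0.5])   # sparse basis coefficients (assumed)
def log_lam(x):
    return sum(b * np.exp(-0.5 * ((x - c) / 0.05)**2)
               for b, c in zip(beta, centers))

# Lewis-Shedler thinning: sample from a homogeneous bound, keep with prob lam/lam_max.
lam_max = np.exp(log_lam(np.linspace(0, 1, 1000))).max()
n = rng.poisson(lam_max)                      # candidate count on the unit interval
x = rng.uniform(0, 1, n)
keep = rng.uniform(0, lam_max, n) < np.exp(log_lam(x))
foci = x[keep]                                # simulated activation foci
```

Simulated foci cluster where the coefficients are large, which is the mechanism by which the latent factors concentrate reported activations in consistent regions.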

  16. Electron affinities of polycyclic aromatic hydrocarbons by means of B3LYP/6-31+G* calculations.

    PubMed

    Modelli, Alberto; Mussoni, Laura; Fabbri, Daniele

    2006-05-25

    The gas-phase experimental adiabatic electron affinities (AEAs) of the polycyclic aromatic hydrocarbons (PAHs) anthracene, tetracene, pentacene, chrysene, pyrene, benzo[a]pyrene, benzo[e]pyrene, and fluoranthene are well reproduced using the hybrid density functional method B3LYP with the 6-31+G* basis set, indicating that the smallest addition of diffuse functions to the basis set is suitable for a correct description of the stable PAH anion states. The calculated AEAs also give a very good linear correlation with available reduction potentials measured in solution. The AEAs (not experimentally available) of the isomeric benzo[ghi]fluoranthene and cyclopenta[cd]pyrene, commonly found in the environment, are predicted to be 0.817 and 1.108 eV, respectively, confirming the enhancement of the electron-acceptor properties associated with fusion of a peripheral cyclopenta ring. The calculated localization properties of the lowest unoccupied MO of cyclopenta[cd]pyrene, together with its relatively high electron affinity, account for a high reactivity at the ethene double bond of this PAH in reductive processes.

  17. A spectral reflectance estimation technique using multispectral data from the Viking lander camera

    NASA Technical Reports Server (NTRS)

    Park, S. K.; Huck, F. O.

    1976-01-01

    A technique is formulated for constructing spectral reflectance curve estimates from multispectral data obtained with the Viking lander camera. The multispectral data are limited to six spectral channels in the wavelength range from 0.4 to 1.1 micrometers, and most of these channels exhibit appreciable out-of-band response. The output of each channel is expressed as a linear (integral) function of the (known) solar irradiance, atmospheric transmittance, and camera spectral responsivity and the (unknown) spectral reflectance. This produces six equations which are used to determine the coefficients in a representation of the spectral reflectance as a linear combination of known basis functions. Natural cubic spline reflectance estimates are produced for a variety of materials that can be reasonably expected to occur on Mars. In each case the dominant reflectance features are accurately reproduced, but small period features are lost due to the limited number of channels. This technique may be a valuable aid in selecting the number of spectral channels and their responsivity shapes when designing a multispectral imaging system.
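The six-equation reconstruction above reduces to a small linear system: each channel output equals a weighted integral of the reflectance expansion. A sketch with assumed Gaussian channel responsivities and hat-function basis functions (the paper itself uses natural cubic splines; everything numeric here is illustrative):

```python
import numpy as np

wl = np.linspace(0.4, 1.1, 200)                 # wavelength grid (micrometers)
dw = wl[1] - wl[0]
centers = np.linspace(0.45, 1.05, 6)
# Effective channel responses (irradiance * transmittance * responsivity, assumed)
R = np.exp(-0.5 * ((wl[None, :] - centers[:, None]) / 0.08)**2)
# Six known basis functions for the reflectance expansion (hat functions, assumed)
basis = np.array([np.maximum(0.0, 1 - np.abs(wl - c) / 0.2) for c in centers])

true_refl = 0.3 + 0.2 * np.sin(4 * wl)          # "unknown" reflectance to recover
d = R @ true_refl * dw                          # the six measured channel outputs

# d_i = sum_j c_j * integral(R_i * basis_j): a 6x6 linear system for the c_j
M = R @ basis.T * dw
c = np.linalg.solve(M, d)
estimate = c @ basis                            # reconstructed reflectance curve
```

By construction the estimate reproduces all six channel measurements exactly; fidelity between channels depends on how well the basis spans the true curve, which is why small-period features are lost with only six channels.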

  18. Linear mixed model for heritability estimation that explicitly addresses environmental variation.

    PubMed

    Heckerman, David; Gurdasani, Deepti; Kadie, Carl; Pomilla, Cristina; Carstensen, Tommy; Martin, Hilary; Ekoru, Kenneth; Nsubuga, Rebecca N; Ssenyomo, Gerald; Kamali, Anatoli; Kaleebu, Pontiano; Widmer, Christian; Sandhu, Manjinder S

    2016-07-05

    The linear mixed model (LMM) is now routinely used to estimate heritability. Unfortunately, as we demonstrate, LMM estimates of heritability can be inflated when using a standard model. To help reduce this inflation, we used a more general LMM with two random effects-one based on genomic variants and one based on easily measured spatial location as a proxy for environmental effects. We investigated this approach with simulated data and with data from a Uganda cohort of 4,778 individuals for 34 phenotypes including anthropometric indices, blood factors, glycemic control, blood pressure, lipid tests, and liver function tests. For the genomic random effect, we used identity-by-descent estimates from accurately phased genome-wide data. For the environmental random effect, we constructed a covariance matrix based on a Gaussian radial basis function. Across the simulated and Ugandan data, narrow-sense heritability estimates were lower using the more general model. Thus, our approach addresses, in part, the issue of "missing heritability" in the sense that much of the heritability previously thought to be missing was fictional. Software is available at https://github.com/MicrosoftGenomics/FaST-LMM.
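The environmental random effect above requires only a covariance matrix over measured locations. A sketch of the Gaussian radial basis function construction (coordinates and length-scale are illustrative assumptions; the genomic kernel from identity-by-descent estimates is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical spatial coordinates of 50 individuals
coords = rng.uniform(0, 10, size=(50, 2))
ell = 2.0                                       # RBF length-scale (assumed)

# K_env[i, j] = exp(-||x_i - x_j||^2 / (2 * ell^2))
d2 = np.sum((coords[:, None, :] - coords[None, :, :])**2, axis=-1)
K_env = np.exp(-d2 / (2 * ell**2))

# The two-random-effect LMM then models the phenotypic covariance as
#   V = sigma2_g * K_gen + sigma2_e * K_env + sigma2_n * I,
# with K_gen supplied by identity-by-descent estimates in the actual study.
print(np.linalg.eigvalsh(K_env).min())          # >= 0: valid covariance matrix
```

The Gaussian RBF kernel is positive semi-definite for any set of locations, so the environmental term is always a legitimate covariance component.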

  19. Boundary Control of Linear Uncertain 1-D Parabolic PDE Using Approximate Dynamic Programming.

    PubMed

    Talaei, Behzad; Jagannathan, Sarangapani; Singler, John

    2018-04-01

    This paper develops a near-optimal boundary control method for distributed parameter systems governed by uncertain linear 1-D parabolic partial differential equations (PDE) by using approximate dynamic programming. A quadratic surface integral is proposed to express the optimal cost functional for the infinite-dimensional state space. Accordingly, the Hamilton-Jacobi-Bellman (HJB) equation is formulated in the infinite-dimensional domain without using any model reduction. Subsequently, a neural network identifier is developed to estimate the unknown spatially varying coefficient in the PDE dynamics. A novel tuning law is proposed to guarantee the boundedness of the identifier approximation error in the PDE domain. A radial basis network (RBN) is subsequently proposed to generate an approximate solution for the optimal surface kernel function online. The tuning law for the near-optimal RBN weights is created such that the HJB equation error is minimized while the dynamics are identified and the closed-loop system remains stable. Ultimate boundedness (UB) of the closed-loop system is verified by using the Lyapunov theory. The performance of the proposed controller is successfully confirmed by simulation on an unstable diffusion-reaction process.

  20. Colloidal gold-modified optical fiber for chemical and biochemical sensing.

    PubMed

    Cheng, Shu-Fang; Chau, Lai-Kwan

    2003-01-01

    A novel class of fiber-optic evanescent-wave sensor was constructed on the basis of modification of the unclad portion of an optical fiber with self-assembled gold colloids. The optical properties and, hence, the attenuated total reflection spectrum of self-assembled gold colloids on the optical fiber changes with different refractive index of the environment near the colloidal gold surface. With sucrose solutions of increasing refractive index, the sensor response decreases linearly. The colloidal gold surface was also functionalized with glycine, succinic acid, or biotin to enhance the selectivity of the sensor. Results show that the sensor response decreases linearly with increasing concentration of each analyte. When the colloidal gold surface was functionalized with biotin, the detection limit of the sensor for streptavidin was 9.8 x 10(-11) M. Using this approach, we demonstrate proof-of-concept of a class of refractive index sensor that is sensitive to the refractive index of the environment near the colloidal gold surface and, hence, is suitable for label-free detection of molecular or biomolecular binding at the surface of gold colloids.

  1. Heteroscedasticity as a Basis of Direction Dependence in Reversible Linear Regression Models.

    PubMed

    Wiedermann, Wolfgang; Artner, Richard; von Eye, Alexander

    2017-01-01

    Heteroscedasticity is a well-known issue in linear regression modeling. When heteroscedasticity is observed, researchers are advised to remedy possible model misspecification of the explanatory part of the model (e.g., considering alternative functional forms and/or omitted variables). The present contribution discusses another source of heteroscedasticity in observational data: Directional model misspecifications in the case of nonnormal variables. Directional misspecification refers to situations where alternative models are equally likely to explain the data-generating process (e.g., x → y versus y → x). It is shown that the homoscedasticity assumption is likely to be violated in models that erroneously treat true nonnormal predictors as response variables. Recently, Direction Dependence Analysis (DDA) has been proposed as a framework to empirically evaluate the direction of effects in linear models. The present study links the phenomenon of heteroscedasticity with DDA and describes visual diagnostics and nine homoscedasticity tests that can be used to make decisions concerning the direction of effects in linear models. Results of a Monte Carlo simulation that demonstrate the adequacy of the approach are presented. An empirical example is provided, and applicability of the methodology in cases of violated assumptions is discussed.
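The core phenomenon above, that fitting the wrong causal direction with a nonnormal predictor induces heteroscedastic residuals, can be demonstrated with a simple Breusch-Pagan-style statistic; the data-generating model and sample size below are illustrative assumptions, not one of the paper's nine tests:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 20000
x = rng.chisquare(1, n)                 # nonnormal "true" predictor
y = x + rng.normal(0, 1, n)             # data-generating direction: x -> y

def bp_stat(pred, resp):
    """Breusch-Pagan LM statistic: n * R^2 of squared residuals regressed on pred."""
    slope, intercept = np.polyfit(pred, resp, 1)
    u2 = (resp - (intercept + slope * pred))**2
    c1, c0 = np.polyfit(pred, u2, 1)
    ss_res = np.sum((u2 - (c0 + c1 * pred))**2)
    ss_tot = np.sum((u2 - u2.mean())**2)
    return len(pred) * (1.0 - ss_res / ss_tot)

s_correct = bp_stat(x, y)   # correct direction: residual variance ~ constant
s_reverse = bp_stat(y, x)   # mis-specified direction: strongly heteroscedastic
print(s_correct, s_reverse)
```

The much larger statistic in the reverse regression is the signature that DDA exploits to decide between x → y and y → x.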

  2. THE SUCCESSIVE LINEAR ESTIMATOR: A REVISIT. (R827114)

    EPA Science Inventory

    This paper examines the theoretical basis of the successive linear estimator (SLE) that has been developed for the inverse problem in subsurface hydrology. We show that the SLE algorithm is a non-linear iterative estimator to the inverse problem. The weights used in the SLE al...

  3. Can we detect a nonlinear response to temperature in European plant phenology?

    NASA Astrophysics Data System (ADS)

    Jochner, Susanne; Sparks, Tim H.; Laube, Julia; Menzel, Annette

    2016-10-01

    Over a large temperature range, the statistical association between spring phenology and temperature is often regarded and treated as a linear function. There are suggestions that a sigmoidal relationship with definite upper and lower limits to leaf unfolding and flowering onset dates might be more realistic. We utilised European plant phenological records provided by the European phenology database PEP725 and gridded monthly mean temperature data for 1951-2012 calculated from the ENSEMBLES data set E-OBS (version 7.0). We analysed 568,456 observations of ten spring flowering or leafing phenophases derived from 3657 stations in 22 European countries in order to detect possible nonlinear responses to temperature. Linear response rates averaged for all stations ranged between -7.7 days °C⁻¹ (flowering of hazel) and -2.7 days °C⁻¹ (leaf unfolding of beech and oak). A lower sensitivity at the cooler end of the temperature range was detected for most phenophases. However, a similar lower sensitivity at the warmer end was not that evident. For only ~14 % of the station time series (where a comparison between linear and nonlinear model was possible), nonlinear models described the relationship significantly better than linear models. Although in most cases simple linear models might be still sufficient to predict future changes, this linear relationship between phenology and temperature might not be appropriate when incorporating phenological data of very cold (and possibly very warm) environments. For these cases, extrapolations on the basis of linear models would introduce uncertainty in expected ecosystem changes.

  4. Construction of energy-stable Galerkin reduced order models.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kalashnikova, Irina; Barone, Matthew Franklin; Arunajatesan, Srinivasan

    2013-05-01

    This report aims to unify several approaches for building stable projection-based reduced order models (ROMs). Attention is focused on linear time-invariant (LTI) systems. The model reduction procedure consists of two steps: the computation of a reduced basis, and the projection of the governing partial differential equations (PDEs) onto this reduced basis. Two kinds of reduced bases are considered: the proper orthogonal decomposition (POD) basis and the balanced truncation basis. The projection step of the model reduction can be done in two ways: via continuous projection or via discrete projection. First, an approach for building energy-stable Galerkin ROMs for linear hyperbolic or incompletely parabolic systems of PDEs using continuous projection is proposed. The idea is to apply to the set of PDEs a transformation induced by the Lyapunov function for the system, and to build the ROM in the transformed variables. The resulting ROM will be energy-stable for any choice of reduced basis. It is shown that, for many PDE systems, the desired transformation is induced by a special weighted L2 inner product, termed the "symmetry inner product". Attention is then turned to building energy-stable ROMs via discrete projection. A discrete counterpart of the continuous symmetry inner product, a weighted L2 inner product termed the "Lyapunov inner product", is derived. The weighting matrix that defines the Lyapunov inner product can be computed in a black-box fashion for a stable LTI system arising from the discretization of a system of PDEs in space. It is shown that a ROM constructed via discrete projection using the Lyapunov inner product will be energy-stable for any choice of reduced basis. Connections between the Lyapunov inner product and the inner product induced by the balanced truncation algorithm are made. Comparisons are also made between the symmetry inner product and the Lyapunov inner product. 
The performance of ROMs constructed using these inner products is evaluated on several benchmark test cases.
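The discrete-projection recipe can be sketched end to end: solve a Lyapunov equation for the weighting matrix P, then perform Galerkin projection in the P-weighted inner product and observe that the resulting ROM is stable regardless of the reduced basis. The system below is a random stable LTI surrogate, not one of the report's benchmark cases:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(3)
n, k = 20, 4
# A stable LTI system: shift a random matrix so all eigenvalues lie in Re < 0
M = rng.standard_normal((n, n))
A = M - (np.max(np.linalg.eigvals(M).real) + 1.0) * np.eye(n)

# Lyapunov inner product weight: solve A^T P + P A = -Q for symmetric P > 0
Q = np.eye(n)
P = solve_continuous_lyapunov(A.T, -Q)

# Galerkin projection in the P-weighted inner product, with an arbitrary basis Phi
Phi = np.linalg.qr(rng.standard_normal((n, k)))[0]
Ar = np.linalg.solve(Phi.T @ P @ Phi, Phi.T @ P @ A @ Phi)
print(np.linalg.eigvals(Ar).real.max())   # < 0: the ROM inherits stability
```

Stability follows because V(x_r) = x_rᵀ(ΦᵀPΦ)x_r is a Lyapunov function for the reduced dynamics, with dV/dt = -x_rᵀΦᵀQΦx_r < 0 for any full-rank Φ, which is the "energy-stable for any choice of reduced basis" guarantee.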

  5. Ground and excited states of vanadium hydroxide isomers and their cations, VOH0,+ and HVO0,+

    NASA Astrophysics Data System (ADS)

    Miliordos, Evangelos; Harrison, James F.; Hunt, Katharine L. C.

    2013-03-01

    Employing correlation consistent basis sets of quadruple-zeta quality and applying both multireference configuration interaction and single-reference coupled cluster methodologies, we studied the electronic and geometrical structure of the [V,O,H]0,+ species. The electronic structure of HVO0,+ is explained by considering a hydrogen atom approaching VO0,+, while VOH0,+ molecules are viewed in terms of the interaction of V+,2+ with OH-. The potential energy curves for H-VO0,+ and V0,+-OH have been constructed as functions of the distance between the interacting subunits, and the potential energy curves have also been determined as functions of the H-V-O angle. For the stationary points that we have located, we report energies, geometries, harmonic frequencies, and dipole moments. We find that the most stable bent HVO0,+ structure is lower in energy than any of the linear HVO0,+ structures. Similarly, the most stable state of bent VOH is lower in energy than the linear structures, but linear VOH+ is lower in energy than bent VOH+. The global minimum on the potential energy surface for the neutral species is the X̃ ³A″ state of bent HVO, although the X̃ ⁵A″ state of bent VOH is less than 5 kcal/mol higher in energy. The global minimum on the potential surface for the cation is the X̃ ⁴Σ⁻ state of linear VOH+, with bent VOH+ and bent HVO+ both more than 10 kcal/mol higher in energy. For the neutral species, the bent geometries exhibit significantly higher dipole moments than the linear structures.

  6. Application of the control volume mixed finite element method to a triangular discretization

    USGS Publications Warehouse

    Naff, R.L.

    2012-01-01

    A two-dimensional control volume mixed finite element method is applied to the elliptic equation. Discretization of the computational domain is based on triangular elements. Shape functions and test functions are formulated on the basis of an equilateral reference triangle with unit edges. A pressure support based on the linear interpolation of elemental edge pressures is used in this formulation. Comparisons are made between results from the standard mixed finite element method and this control volume mixed finite element method. Published 2011. This article is a US Government work and is in the public domain in the USA. © 2012 John Wiley & Sons, Ltd.

  7. Expendable launch vehicle studies

    NASA Technical Reports Server (NTRS)

    Bainum, Peter M.; Reiss, Robert

    1995-01-01

    Analytical support studies of expendable launch vehicles concentrate on the stability of the dynamics during launch, especially during or near the region of maximum dynamic pressure. The in-plane dynamic equations of a generic launch vehicle with multiple flexible bending and fuel sloshing modes are developed and linearized. The information from LeRC about the grids, masses, and modes is incorporated into the model. The eigenvalues of the plant are analyzed for several modeling factors: utilizing diagonal mass matrix, uniform beam assumption, inclusion of aerodynamics, and the interaction between the aerodynamics and the flexible bending motion. Preliminary PID, LQR, and LQG control designs with sensor and actuator dynamics for this system and simulations are also conducted. The initial analysis for comparison of PD (proportional-derivative) and full state feedback LQR (linear quadratic regulator) shows that the split weighted LQR controller has better performance than that of the PD. In order to meet both the performance and robustness requirements, the H(sub infinity) robust controller for the expendable launch vehicle is developed. The simulation indicates that both the performance and robustness of the H(sub infinity) controller are better than that for the PID and LQG controllers. The modelling and analysis support studies team has continued development of methodology, using eigensensitivity analysis, to solve three classes of discrete eigenvalue equations. In the first class, the matrix elements are non-linear functions of the eigenvector. All non-linear periodic motion can be cast in this form. Here the eigenvector is comprised of the coefficients of complete basis functions spanning the response space and the eigenvalue is the frequency. The second class of eigenvalue problems studied is the quadratic eigenvalue problem. Solutions for linear viscously damped structures or viscoelastic structures can be reduced to this form. 
Particular attention is paid to Maxwell and Kelvin models. The third class of problems consists of linear eigenvalue problems in which the elements of the mass and stiffness matrices are stochastic. Dynamic structural response for which the parameters are given by probabilistic distribution functions, rather than deterministic values, can be cast in this form. Solutions for several problems in each class will be presented.
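The second problem class above, the quadratic eigenvalue problem (λ²M + λC + K)v = 0 arising for viscously damped structures, is conventionally solved by companion linearization to a standard eigenvalue problem of twice the size; the 2-DOF matrices below are illustrative toy data:

```python
import numpy as np

# Toy damped structure: (lam^2 M + lam C + K) v = 0
M = np.diag([1.0, 1.0])                         # mass matrix
K = np.array([[4.0, -1.0], [-1.0, 3.0]])        # stiffness matrix (SPD)
C = 0.1 * K                                     # proportional damping (assumed)

# Companion (state-space) linearization: z = [x, x_dot]
n = M.shape[0]
A = np.block([[np.zeros((n, n)), np.eye(n)],
              [-np.linalg.solve(M, K), -np.linalg.solve(M, C)]])
lam = np.linalg.eigvals(A)                      # the 2n quadratic eigenvalues
print(lam.real.max())                           # < 0: all modes decay (damped)
```

With positive definite M, K, and C, every eigenvalue pair has negative real part, i.e., all structural modes are decaying, which is the physically expected behavior for a viscously damped structure.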

  8. A Comparison of Traditional Worksheet and Linear Programming Methods for Teaching Manure Application Planning.

    ERIC Educational Resources Information Center

    Schmitt, M. A.; And Others

    1994-01-01

    Compares traditional manure application planning techniques calculated to meet agronomic nutrient needs on a field-by-field basis with plans developed using computer-assisted linear programming optimization methods. Linear programming provided the most economical and environmentally sound manure application strategy. (Contains 15 references.) (MDH)
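A linear-programming manure allocation of the kind compared above can be sketched in a few lines; the two-field example, nitrogen content, needs, and hauling costs are all illustrative assumptions, not the study's worksheet data:

```python
from scipy.optimize import linprog

# Minimize hauling cost of manure to two fields subject to meeting each field's
# agronomic nitrogen need and not exceeding the total manure supply.
# x = [tons to field 1, tons to field 2]; N content assumed 10 lb/ton.
cost = [1.5, 2.5]                       # $/ton hauled to each field (assumed)
A_ub = [[-10, 0],                       # -N applied to field 1 <= -N need
        [0, -10],                       # -N applied to field 2 <= -N need
        [1, 1]]                         # total tons <= supply
b_ub = [-800, -500, 200]                # N needs (lb) and supply (tons), assumed

res = linprog(c=cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x)                            # optimal tons per field: [80, 50]
```

The optimizer applies exactly enough manure to meet each field's nutrient requirement at minimum hauling cost, which is the "economical and environmentally sound" strategy the record refers to.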

  9. The 129Xe nuclear shielding surfaces for Xe interacting with linear molecules CO2, N2, and CO

    NASA Astrophysics Data System (ADS)

    de Dios, Angel C.; Jameson, Cynthia J.

    1997-09-01

    We have calculated the intermolecular nuclear magnetic shielding surfaces for 129Xe in the systems Xe-CO2, Xe-N2, and Xe-CO using a gauge-invariant ab initio method at the coupled Hartree-Fock level with gauge-including atomic orbitals (GIAO). Implementation of a large basis set (240 basis functions) on the Xe gives very small counterpoise corrections which indicates that the basis set superposition errors in the calculated shielding values are negligible. These are the first intermolecular shielding surfaces for Xe-molecule systems. The surfaces are highly anisotropic and can be described adequately by a sum of inverse even powers of the distance with explicit angle dependence in the coefficients expressed by Legendre polynomials P2n(cos θ), n=0-3, for Xe-CO2 and Xe-N2. The Xe-CO shielding surface is well described by a similar functional form, except that Pn(cos θ), n=0-4 were used. When averaged over the anisotropic potential function these shielding surfaces provide the second virial coefficient of the nuclear magnetic resonance (NMR) chemical shift observed in gas mixtures. The energies from the self-consistent field (SCF) calculations were used to construct potential surfaces, using a damped dispersion form. These potential functions are compared with existing potentials in their predictions of the second virial coefficients of NMR shielding, the pressure virial coefficients, the density coefficient of the mean-square torque from infrared absorption, and the rotational constants and other average properties of the van der Waals complexes. Average properties of the van der Waals complexes were obtained by quantum diffusion Monte Carlo solutions of the vibrational motion using the various potentials and compared with experiment.
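The functional form above, inverse even powers of the distance with Legendre-polynomial angle dependence in the coefficients, is linear in its coefficients and so can be fit by least squares; the grids, power range, and synthetic "surface" below are illustrative assumptions rather than the paper's ab initio values:

```python
import numpy as np
from scipy.special import eval_legendre

# Model: sigma(R, theta) = sum_k sum_n c_{kn} * R^(-p_k) * P_n(cos theta)
powers = [6, 8, 10, 12]                 # inverse even powers (assumed range)
orders = [0, 2, 4, 6]                   # even Legendre orders P_0 .. P_6
R = np.linspace(3.5, 8.0, 25)
th = np.linspace(0.0, np.pi, 19)
Rg, Tg = np.meshgrid(R, th, indexing="ij")

# Synthetic anisotropic surface standing in for the computed shielding values
sigma = -50.0 * Rg**-6 * (1 + 0.4 * eval_legendre(2, np.cos(Tg)))

# Design matrix: one column per (power, order) term; linear least-squares fit
cols = [(Rg**-p * eval_legendre(nn, np.cos(Tg))).ravel()
        for p in powers for nn in orders]
X = np.stack(cols, axis=1)
coef, *_ = np.linalg.lstsq(X, sigma.ravel(), rcond=None)
fit = (X @ coef).reshape(sigma.shape)
print(np.max(np.abs(fit - sigma)))      # near zero: the model contains the truth
```

Averaging such a fitted surface over an anisotropic intermolecular potential is then what yields the second virial coefficient of the NMR chemical shift mentioned in the record.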

  10. Chemical association in simple models of molecular and ionic fluids. III. The cavity function

    NASA Astrophysics Data System (ADS)

    Zhou, Yaoqi; Stell, George

    1992-01-01

    Exact equations which relate the cavity function to excess solvation free energies and equilibrium association constants are rederived by using a thermodynamic cycle. A zeroth-order approximation, derived previously by us as a simple interpolation scheme, is found to be very accurate if the associative bonding occurs on or near the surface of the repulsive core of the interaction potential. If the bonding radius is substantially less than the core radius, the approximation overestimates the association degree and the association constant. For binary association, the zeroth-order approximation is equivalent to the first-order thermodynamic perturbation theory (TPT) of Wertheim. For n-particle association, the combination of the zeroth-order approximation with a "linear" approximation (for n-particle distribution functions in terms of the two-particle function) yields the first-order TPT result. Using our exact equations to go beyond TPT, near-exact analytic results for binary hard-sphere association are obtained. Solvent effects on binary hard-sphere association and ionic association are also investigated. A new rule which generalizes Le Chatelier's principle is used to describe the three distinct forms of behavior involving solvent effects that we find. The replacement of the dielectric-continuum solvent model by a dipolar hard-sphere model leads to improved agreement with an experimental observation. Finally, an equation of state for an n-particle flexible linear-chain fluid is derived on the basis of a one-parameter approximation that interpolates between the generalized Kirkwood superposition approximation and the linear approximation. A value of the parameter that appears to be near optimal in the context of this application is obtained from comparison with computer-simulation data.

  11. Critical bounds on noise and SNR for robust estimation of real-time brain activity from functional near infra-red spectroscopy.

    PubMed

    Aqil, Muhammad; Jeong, Myung Yung

    2018-04-24

    The robust characterization of real-time brain activity carries potential for many applications. However, the contamination of measured signals by various instrumental, environmental, and physiological sources of noise introduces a substantial amount of signal variance and, consequently, challenges real-time estimation of contributions from underlying neuronal sources. Functional near infra-red spectroscopy (fNIRS) is an emerging imaging modality whose real-time potential is yet to be fully explored. The objectives of the current study are to (i) validate a time-dependent linear model of hemodynamic responses in fNIRS, and (ii) test the robustness of this approach against measurement noise (instrumental and physiological) and misspecification of the hemodynamic response basis functions (amplitude, latency, and duration). We propose a linear hemodynamic model with time-varying parameters, which are estimated (adapted and tracked) using a dynamic recursive least-squares algorithm. Owing to the linear nature of the activation model, the problem of achieving robust convergence to an accurate estimation of the model parameters is recast as a problem of parameter error stability around the origin. We show that robust convergence of the proposed method is guaranteed in the presence of an acceptable degree of model misspecification and we derive an upper bound on noise under which reliable parameters can still be inferred. We also derive a lower bound on the signal-to-noise ratio above which reliable parameters can still be inferred from a channel/voxel. Whilst here applied to fNIRS, the proposed methodology is applicable to other hemodynamic-based imaging technologies such as functional magnetic resonance imaging. Copyright © 2018 Elsevier Inc. All rights reserved.
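The dynamic recursive least-squares estimator at the heart of this record can be sketched in a few lines. The update below is the standard exponentially weighted RLS recursion, not the authors' exact implementation, and the drifting two-parameter model it tracks is invented for illustration.

```python
import numpy as np

def rls_update(theta, P, x, y, lam=0.95):
    """One step of recursive least squares with exponential forgetting.

    theta : current parameter estimate, shape (n,)
    P     : current inverse-correlation matrix, shape (n, n)
    x     : regressor vector (e.g. hemodynamic basis functions), shape (n,)
    y     : new scalar measurement
    lam   : forgetting factor < 1, letting time-varying parameters be tracked
    """
    Px = P @ x
    k = Px / (lam + x @ Px)          # gain vector
    e = y - theta @ x                # a-priori prediction error
    theta = theta + k * e
    P = (P - np.outer(k, Px)) / lam
    return theta, P

# Toy demo (not fNIRS data): track a slowly drifting model y = a(t)*x1 + b*x2.
rng = np.random.default_rng(0)
theta, P = np.zeros(2), 1e3 * np.eye(2)
for t in range(500):
    a, b = 1.0 + 0.001 * t, -0.5
    x = rng.normal(size=2)
    y = a * x[0] + b * x[1] + 0.01 * rng.normal()
    theta, P = rls_update(theta, P, x, y)
print(theta)  # near the final (a, b) = (1.499, -0.5)
```

The forgetting factor trades tracking lag against noise sensitivity, which is exactly the noise/SNR trade-off the record's bounds quantify.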

  12. An integrated theoretical and experimental investigation of insensitive munition compounds adsorption on cellulose, cellulose triacetate, chitin and chitosan surfaces.

    PubMed

    Gurtowski, Luke A; Griggs, Chris S; Gude, Veera G; Shukla, Manoj K

    2018-02-01

    This manuscript reports results of a combined computational chemistry and batch adsorption investigation of the insensitive munition compounds 2,4-dinitroanisole (DNAN), triaminotrinitrobenzene (TATB), 1,1-diamino-2,2-dinitroethene (FOX-7) and nitroguanidine (NQ), and the traditional munition compound 2,4,6-trinitrotoluene (TNT) on the surfaces of cellulose, cellulose triacetate, chitin and chitosan biopolymers. Cellulose, cellulose triacetate, chitin and chitosan were modeled as the trimeric forms of the linear chains of the ⁴C₁ chair conformation of β-D-glucopyranose, its triacetate form, β-N-acetylglucosamine and D-glucosamine, respectively, in the 1→4 linkage. Geometries were optimized at the M062X functional level of density functional theory (DFT) using the 6-31G(d,p) basis set in the gas phase and in bulk water solution using the conductor-like polarizable continuum model (CPCM) approach. The nature of the potential energy surfaces of the optimized geometries was ascertained through harmonic vibrational frequency analysis. The basis set superposition error (BSSE) corrected interaction energies were obtained using the 6-311G(d,p) basis set at the same theoretical level. The computed BSSE in the gas phase was used to correct the interaction energy in bulk water solution. Computed and experimental results regarding the ability of the considered surfaces to adsorb the insensitive munition compounds are discussed. Copyright © 2017. Published by Elsevier B.V.

  13. Method of Conjugate Radii for Solving Linear and Nonlinear Systems

    NASA Technical Reports Server (NTRS)

    Nachtsheim, Philip R.

    1999-01-01

    This paper describes a method to solve a system of N linear equations in N steps. A quadratic form is developed involving the sum of the squares of the residuals of the equations. Equating the quadratic form to a constant yields a surface which is an ellipsoid. For different constants, a family of similar ellipsoids can be generated. Starting at an arbitrary point, an orthogonal basis is constructed and the center of the family of similar ellipsoids is found in this basis by a sequence of projections. The coordinates of the center in this basis are the solution of the linear system of equations. A quadratic form in N variables requires N projections. That is, the current method is an exact method. It is shown that the sequence of projections is equivalent to a special case of the Gram-Schmidt orthogonalization process. The current method enjoys an advantage not shared by the classic Method of Conjugate Gradients: it can be extended to nonlinear systems without modification. For nonlinear equations, the Method of Conjugate Gradients has to be augmented with a line-search procedure. Results for linear and nonlinear problems are presented.
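The geometric idea of locating the common center of the ellipsoid family by N successive projections can be illustrated with a standard conjugate-directions solve of the normal equations M x = Aᵀb, M = AᵀA. This is a hedged sketch of the same construction (Gram-Schmidt conjugation plus exact line searches), not the paper's exact algorithm.

```python
import numpy as np

def conjugate_directions_solve(A, b):
    """Solve A x = b in N steps by projecting onto M-conjugate directions,
    where M = A^T A defines the family of similar ellipsoids
    ||A x - b||^2 = const."""
    M, g = A.T @ A, A.T @ b
    n = len(b)
    dirs = []
    x = np.zeros(n)
    for i in range(n):
        d = np.eye(n)[i]
        # Gram-Schmidt conjugation: make d M-orthogonal to earlier directions
        for p in dirs:
            d = d - (p @ M @ d) / (p @ M @ p) * p
        dirs.append(d)
        # Exact line search = projection of the center onto the new direction
        x = x + ((g - M @ x) @ d) / (d @ M @ d) * d
    return x

A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([9.0, 8.0])
x = conjugate_directions_solve(A, b)
print(x)  # matches np.linalg.solve(A, b), i.e. [2., 3.]
```

Because each direction is conjugated against all previous ones, the N-th projection lands exactly on the center, mirroring the "exact in N steps" claim of the record.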

  14. A novel method for calculating the energy barriers for carbon diffusion in ferrite under heterogeneous stress

    NASA Astrophysics Data System (ADS)

    Tchitchekova, Deyana S.; Morthomas, Julien; Ribeiro, Fabienne; Ducher, Roland; Perez, Michel

    2014-07-01

    A novel method for accurate and efficient evaluation of the change in energy barriers for carbon diffusion in ferrite under heterogeneous stress is introduced. This method, called Linear Combination of Stress States, is based on the knowledge of the effects of simple stresses (uniaxial or shear) on these diffusion barriers. It is then assumed that the change in energy barriers under a complex stress can be expressed as a linear combination of these already known simple stress effects. The modifications of the energy barriers by either uniaxial traction/compression or shear stress are determined by means of atomistic simulations with the Climbing Image-Nudged Elastic Band method and are stored as a set of functions. The results of this method are compared to the predictions of anisotropic elasticity theory. It is shown that linear anisotropic elasticity fails to predict the correct energy barrier variation with stress (especially with shear stress), whereas the proposed method provides the correct energy barrier variation for stresses up to ˜3 GPa. This study provides a basis for the development of multiscale models of diffusion under non-uniform stress.
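The superposition step of the Linear Combination of Stress States method reduces to a weighted sum of stored simple-stress effects. The sketch below uses invented linear sensitivities (eV per GPa) purely to show the bookkeeping; the paper stores full functions of stress obtained from atomistic simulations.

```python
# Hedged sketch of the superposition step; the sensitivity values below are
# hypothetical placeholders, not fitted CI-NEB results.
simple_effects = {          # dE_barrier / d_stress, eV per GPa (illustrative)
    "sigma_xx": -0.012,
    "sigma_yy": 0.004,
    "tau_xy":   -0.020,
}

def barrier_change(stress):
    """Superpose the stored simple-stress effects for a heterogeneous stress
    state given as {component: value in GPa}."""
    return sum(simple_effects[c] * s for c, s in stress.items())

dE = barrier_change({"sigma_xx": 1.5, "tau_xy": 0.5})
print(dE)  # -0.012*1.5 + (-0.020)*0.5 = -0.028 eV
```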

  15. Optical recognition of statistical patterns

    NASA Astrophysics Data System (ADS)

    Lee, S. H.

    1981-12-01

    Optical implementation of the Fukunaga-Koontz transform (FKT) and the Least-Squares Linear Mapping Technique (LSLMT) is described. The FKT is a linear transformation which performs image feature extraction for a two-class image classification problem. The LSLMT performs a transform from a large-dimensional feature space to a small-dimensional decision space for separating multiple image classes by maximizing the interclass differences while minimizing the intraclass variations. The FKT and the LSLMT were optically implemented by utilizing a coded phase optical processor. The transform was used for classifying birds and fish. After the F-K basis functions were calculated, those most useful for classification were incorporated into a computer-generated hologram. The output of the optical processor, consisting of the squared magnitudes of the F-K coefficients, was detected by a TV camera, digitized, and fed into a microcomputer for classification. A simple linear classifier based on only two F-K coefficients was able to separate the images into two classes, indicating that the F-K transform had chosen good features. Two advantages of optically implementing the FKT and LSLMT are parallel and real-time processing.
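A minimal numerical sketch of the Fukunaga-Koontz transform: whiten the summed class scatter matrices, then diagonalize one class's whitened scatter. The two whitened scatters share an eigenbasis with eigenvalues summing pairwise to one, so the extreme eigenvalues mark the most discriminative axes. The synthetic data below stand in for the bird/fish image features of the record.

```python
import numpy as np

def fukunaga_koontz(X1, X2):
    """Sketch of the Fukunaga-Koontz transform for two-class feature
    extraction.  Basis vectors with eigenvalues near 1 best represent
    class 1; near 0, class 2."""
    S1 = np.cov(X1, rowvar=False)
    S2 = np.cov(X2, rowvar=False)
    vals, vecs = np.linalg.eigh(S1 + S2)
    W = vecs @ np.diag(vals ** -0.5)        # whitening for S1 + S2
    lam, V = np.linalg.eigh(W.T @ S1 @ W)   # shared eigenbasis of both classes
    return W @ V, lam

# Synthetic two-class data (stand-ins for the image features).
rng = np.random.default_rng(1)
X1 = rng.normal(size=(300, 3)) * [3.0, 1.0, 1.0]
X2 = rng.normal(size=(300, 3)) * [1.0, 1.0, 3.0]
basis, lam = fukunaga_koontz(X1, X2)
print(lam)  # all in [0, 1]; class-2 eigenvalues are 1 - lam
```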

  16. Optical recognition of statistical patterns

    NASA Technical Reports Server (NTRS)

    Lee, S. H.

    1981-01-01

    Optical implementation of the Fukunaga-Koontz transform (FKT) and the Least-Squares Linear Mapping Technique (LSLMT) is described. The FKT is a linear transformation which performs image feature extraction for a two-class image classification problem. The LSLMT performs a transform from a large-dimensional feature space to a small-dimensional decision space for separating multiple image classes by maximizing the interclass differences while minimizing the intraclass variations. The FKT and the LSLMT were optically implemented by utilizing a coded phase optical processor. The transform was used for classifying birds and fish. After the F-K basis functions were calculated, those most useful for classification were incorporated into a computer-generated hologram. The output of the optical processor, consisting of the squared magnitudes of the F-K coefficients, was detected by a TV camera, digitized, and fed into a microcomputer for classification. A simple linear classifier based on only two F-K coefficients was able to separate the images into two classes, indicating that the F-K transform had chosen good features. Two advantages of optically implementing the FKT and LSLMT are parallel and real-time processing.

  17. Photoluminescence of radiation-induced color centers in lithium fluoride thin films for advanced diagnostics of proton beams

    NASA Astrophysics Data System (ADS)

    Piccinini, M.; Ambrosini, F.; Ampollini, A.; Picardi, L.; Ronsivalle, C.; Bonfigli, F.; Libera, S.; Nichelatti, E.; Vincenti, M. A.; Montereali, R. M.

    2015-06-01

    Systematic irradiation of thermally evaporated 0.8 μm thick polycrystalline lithium fluoride films on glass was performed by proton beams of 3 and 7 MeV energies, produced by a linear accelerator, in a fluence range from 10¹¹ to 10¹⁵ protons/cm². The visible photoluminescence spectra of radiation-induced F₂ and F₃⁺ laser-active color centers, which possess almost overlapping absorption bands at about 450 nm, were measured under laser pumping at 458 nm. On the basis of simulations of the linear energy transfer with proton penetration depth in LiF, it was possible to obtain the behavior of the measured integrated photoluminescence intensity of proton-irradiated LiF films as a function of the deposited dose. The photoluminescence signal is linearly dependent on the deposited dose in the interval from 10³ to about 10⁶ Gy, independently of the proton energy used. This behavior is very encouraging for the development of advanced solid-state radiation detectors based on optically transparent LiF thin films for proton beam diagnostics and two-dimensional dose mapping.

  18. Computation of optimal output-feedback compensators for linear time-invariant systems

    NASA Technical Reports Server (NTRS)

    Platzman, L. K.

    1972-01-01

    The control of linear time-invariant systems with respect to a quadratic performance criterion was considered, subject to the constraint that the control vector be a constant linear transformation of the output vector. The optimal feedback matrix, f*, was selected to optimize the expected performance, given the covariance of the initial state. It is first shown that the expected performance criterion can be expressed as the ratio of two multinomials in the elements of f. This expression provides the basis for a feasible method of determining f* in the case of single-input single-output systems. A number of iterative algorithms are then proposed for the calculation of f* for multiple input-output systems. For two of these, monotone convergence is proved, but they involve the solution of nonlinear matrix equations at each iteration. Another is proposed involving the solution of Lyapunov equations at each iteration, and the gradual increase of the magnitude of a penalty function. Experience with this algorithm will be needed to determine whether or not it does, indeed, possess desirable convergence properties, and whether it can be used to determine the globally optimal f*.

  19. The Structure, Design, and Closed-Loop Motion Control of a Differential Drive Soft Robot.

    PubMed

    Wu, Pang; Jiangbei, Wang; Yanqiong, Fei

    2018-02-01

    This article presents the structure, design, and motion control of an inchworm-inspired pneumatic soft robot, which can perform differential movement. This robot mainly consists of two columns of pneumatic multi-airbags (actuators), one sensor, one baseboard, front feet, and rear feet. According to the different inflation times of the left and right actuators, the robot can perform both linear and turning movements. The actuators of this robot are composed of multiple airbags, and the design of the airbags is analyzed. To deal with the nonlinear behavior of the soft robot, we use radial basis function neural networks to train the turning ability of this robot on three different surfaces and create a mathematical model relating the coefficient of friction, deflection angle, and inflation time. Then, we establish a closed-loop automatic control model using a three-axis electronic compass sensor. Finally, the automatic control model is verified by linear and turning movement experiments. According to the experiments, the robot can complete the linear and turning movements under the closed-loop control system.
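The radial-basis-function modeling step can be sketched as a linear least-squares fit of Gaussian features. The deflection-versus-inflation-time curve below is a hypothetical stand-in for one surface's training data, not the robot's measurements.

```python
import numpy as np

def rbf_design(x, centers, width):
    """Gaussian radial-basis-function feature matrix."""
    return np.exp(-((x[:, None] - centers[None, :]) / width) ** 2)

# Hypothetical training data: deflection angle (deg) vs inflation time (s).
t = np.linspace(0.1, 2.0, 40)
angle = 30 * (1 - np.exp(-1.5 * t)) + 0.1 * np.sin(5 * t)

centers = np.linspace(0.1, 2.0, 10)
Phi = rbf_design(t, centers, width=0.3)
w, *_ = np.linalg.lstsq(Phi, angle, rcond=None)   # linear output weights
pred = rbf_design(np.array([1.0]), centers, 0.3) @ w
print(pred)  # close to the underlying curve at t = 1.0 s
```

Because the output weights enter linearly, training reduces to a least-squares solve even though the resulting model is nonlinear in the input.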

  20. A novel method for calculating the energy barriers for carbon diffusion in ferrite under heterogeneous stress.

    PubMed

    Tchitchekova, Deyana S; Morthomas, Julien; Ribeiro, Fabienne; Ducher, Roland; Perez, Michel

    2014-07-21

    A novel method for accurate and efficient evaluation of the change in energy barriers for carbon diffusion in ferrite under heterogeneous stress is introduced. This method, called Linear Combination of Stress States, is based on the knowledge of the effects of simple stresses (uniaxial or shear) on these diffusion barriers. It is then assumed that the change in energy barriers under a complex stress can be expressed as a linear combination of these already known simple stress effects. The modifications of the energy barriers by either uniaxial traction/compression or shear stress are determined by means of atomistic simulations with the Climbing Image-Nudged Elastic Band method and are stored as a set of functions. The results of this method are compared to the predictions of anisotropic elasticity theory. It is shown that linear anisotropic elasticity fails to predict the correct energy barrier variation with stress (especially with shear stress), whereas the proposed method provides the correct energy barrier variation for stresses up to ∼3 GPa. This study provides a basis for the development of multiscale models of diffusion under non-uniform stress.

  1. Typification of cider brandy on the basis of cider used in its manufacture.

    PubMed

    Rodríguez Madrera, Roberto; Mangas Alonso, Juan J

    2005-04-20

    A study of the typification of cider brandies on the basis of the origin of the raw material used in their manufacture was conducted using chemometric techniques (principal component analysis, linear discriminant analysis, and Bayesian analysis) together with their composition in volatile compounds, as analyzed by gas chromatography with flame ionization detection for the major volatiles and mass spectrometric detection for the minor ones. Significant principal components computed by a double cross-validation procedure allowed the structure of the database to be visualized as a function of the raw material, that is, cider made from fresh apple juice versus cider made from apple juice concentrate. Feasible and robust discriminant rules were computed and validated by a cross-validation procedure that allowed the authors to classify fresh and concentrate cider brandies, obtaining classification hits of >92%. The most discriminating variables for typifying cider brandies according to their raw material were 1-butanol and ethyl hexanoate.

  2. Comparative studies on molecular structure, vibrational spectra and hyperpolarizabilies of NLO chromophore Ethyl 4-Dimethylaminobenzoate

    NASA Astrophysics Data System (ADS)

    Amalanathan, M.; Jasmine, G. Femina; Roy, S. Dawn Dharma

    2017-08-01

    The molecular structure, vibrational spectra and polarizabilities of Ethyl 4-Dimethylaminobenzoate (EDAB) were investigated by density functional theory employing Becke's three-parameter hybrid exchange functional with the Lee-Yang-Parr correlation functional (B3LYP) and the 6-311++G(d,p) basis set, and compared with some other levels. A detailed interpretation of the IR and Raman spectra of EDAB has been reported and analyzed. Complete assignments of the vibrational modes have been made on the basis of the potential energy distribution (PED) using the VEDA software. The molecular electrostatic potential mapped onto the total density surface has been obtained. A study of the electronic properties, such as the absorption wavelength and frontier molecular orbital energies, was performed using the DFT approach. The stability of the molecule arising from hyperconjugative interactions and the accompanying charge delocalization has been analyzed using natural bond orbital (NBO) analysis. The natural and Mulliken charges were also calculated and compared at different levels of calculation. The dipole moment, polarizability and first- and second-order hyperpolarizabilities of the title molecule were calculated and compared with the experimental values. The energy gap between the frontier orbitals has been used, along with the electric moments and first-order hyperpolarizability, to understand the non-linear optical (NLO) activity of the molecule. The NLO activity of the molecule was confirmed by SHG analysis.

  3. Variations of cosmic large-scale structure covariance matrices across parameter space

    NASA Astrophysics Data System (ADS)

    Reischke, Robert; Kiessling, Alina; Schäfer, Björn Malte

    2017-03-01

    The likelihood function for cosmological parameters, given by e.g. weak lensing shear measurements, depends on contributions to the covariance induced by the non-linear evolution of the cosmic web. As highly non-linear clustering to date has only been described by numerical N-body simulations in a reliable and sufficiently precise way, the necessary computational costs for estimating those covariances at different points in parameter space are tremendous. In this work, we describe the change of the matter covariance and the weak lensing covariance matrix as a function of cosmological parameters by constructing a suitable basis, where we model the contribution to the covariance from non-linear structure formation using Eulerian perturbation theory at third order. We show that our formalism is capable of dealing with large matrices and reproduces expected degeneracies and scaling with cosmological parameters in a reliable way. Comparing our analytical results to numerical simulations, we find that the method describes the variation of the covariance matrix found in the SUNGLASS weak lensing simulation pipeline within the errors at one-loop and tree-level for the spectrum and the trispectrum, respectively, for multipoles up to ℓ ≤ 1300. We show that it is possible to optimize the sampling of parameter space where numerical simulations should be carried out by minimizing interpolation errors and propose a corresponding method to distribute points in parameter space in an economical way.

  4. Massively parallel and linear-scaling algorithm for second-order Møller-Plesset perturbation theory applied to the study of supramolecular wires

    NASA Astrophysics Data System (ADS)

    Kjærgaard, Thomas; Baudin, Pablo; Bykov, Dmytro; Eriksen, Janus Juul; Ettenhuber, Patrick; Kristensen, Kasper; Larkin, Jeff; Liakh, Dmitry; Pawłowski, Filip; Vose, Aaron; Wang, Yang Min; Jørgensen, Poul

    2017-03-01

    We present a scalable cross-platform hybrid MPI/OpenMP/OpenACC implementation of the Divide-Expand-Consolidate (DEC) formalism with portable performance on heterogeneous HPC architectures. The Divide-Expand-Consolidate formalism is designed to reduce the steep computational scaling of conventional many-body methods employed in electronic structure theory to linear scaling, while providing a simple mechanism for controlling the error introduced by this approximation. Our massively parallel implementation of this general scheme has three levels of parallelism, being a hybrid of the loosely coupled task-based parallelization approach and the conventional MPI+X programming model, where X is either OpenMP or OpenACC. We demonstrate strong and weak scalability of this implementation on heterogeneous HPC systems, namely on the GPU-based Cray XK7 Titan supercomputer at the Oak Ridge National Laboratory. Using the resolution-of-the-identity second-order Møller-Plesset perturbation theory (RI-MP2) as the physical model for simulating correlated electron motion, the linear-scaling DEC implementation is applied to 1-aza-adamantane-trione (AAT) supramolecular wires containing up to 40 monomers (2440 atoms, 6800 correlated electrons, 24 440 basis functions and 91 280 auxiliary functions). This represents the largest molecular system treated at the MP2 level of theory, demonstrating an efficient removal of the scaling wall pertinent to conventional quantum many-body methods.

  5. Nonlinear Modeling by Assembling Piecewise Linear Models

    NASA Technical Reports Server (NTRS)

    Yao, Weigang; Liou, Meng-Sing

    2013-01-01

    To preserve the nonlinearity of a full-order system over a parameter range of interest, we propose a simple modeling approach that assembles a set of piecewise local solutions, including the first-order Taylor series terms expanded about some sampling states. The work by Rewienski and White inspired our use of piecewise linear local solutions. The assembly of these local approximations is accomplished by assigning nonlinear weights, through radial basis functions in this study. The efficacy of the proposed procedure is validated for a two-dimensional airfoil moving at different Mach numbers and pitching motions, under which the flow exhibits prominent nonlinear behaviors. All results confirm that our nonlinear model is accurate and stable for predicting not only aerodynamic forces but also detailed flowfields. Moreover, the model remains robust and accurate for inputs considerably different from the base trajectory in form and magnitude. This modeling preserves the nonlinearity of the problems considered in a rather simple and accurate manner.
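A one-dimensional toy version of the assembly idea, with tanh standing in for the full-order system: local first-order Taylor models expanded about a few sampling states are blended with normalized Gaussian radial-basis weights.

```python
import numpy as np

centers = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])   # sampling states

def local_model(x, c):
    """First-order Taylor expansion of tanh about the sampling state c."""
    return np.tanh(c) + (1.0 - np.tanh(c) ** 2) * (x - c)

def blended(x, width=0.5):
    """Assemble the piecewise linear models with normalized RBF weights."""
    w = np.exp(-((x - centers) / width) ** 2)
    w = w / w.sum()
    return sum(wi * local_model(x, c) for wi, c in zip(w, centers))

print(blended(0.8), np.tanh(0.8))  # the assembled model tracks the nonlinearity
```

Each local model is only valid near its expansion state; the smooth weights hand off between them, which is how the assembly preserves the global nonlinearity.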

  6. Efficient Transition State Optimization of Periodic Structures through Automated Relaxed Potential Energy Surface Scans.

    PubMed

    Plessow, Philipp N

    2018-02-13

    This work explores how constrained linear combinations of bond lengths can be used to optimize transition states in periodic structures. Scanning of constrained coordinates is a standard approach for molecular codes with localized basis functions, where a full set of internal coordinates is used for optimization. Common plane-wave codes for periodic boundary conditions, however, almost exclusively rely on Cartesian coordinates. An implementation of constrained linear combinations of bond lengths with Cartesian coordinates is described. Along with an optimization of the value of the constrained coordinate toward the transition state, this allows transition-state optimization within a single calculation. The approach is suitable for transition states that can be well described in terms of broken and formed bonds. In particular, the implementation is shown to be effective and efficient in the optimization of transition states in zeolite-catalyzed reactions, which have high relevance in industrial processes.
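A constrained coordinate of this kind, c = Σₖ aₖ rₖ over selected bond lengths, and its Cartesian gradient (needed to impose the constraint in a Cartesian-coordinate code) can be sketched directly. The three-atom geometry and coefficients below are hypothetical, chosen only to illustrate the bookkeeping.

```python
import numpy as np

def bond(x, i, j):
    """Length of the bond between atoms i and j (Cartesian x, shape (N, 3))."""
    return np.linalg.norm(x[i] - x[j])

def constraint(x, terms):
    """Linear combination of bond lengths, c = sum_k a_k * r(i_k, j_k);
    e.g. r(breaking) - r(forming) with coefficients +1 and -1."""
    return sum(a * bond(x, i, j) for a, i, j in terms)

def constraint_gradient(x, terms):
    """Analytic Cartesian gradient of the constrained coordinate."""
    g = np.zeros_like(x)
    for a, i, j in terms:
        u = (x[i] - x[j]) / bond(x, i, j)   # unit vector along the bond
        g[i] += a * u
        g[j] -= a * u
    return g

# Hypothetical 3-atom geometry (for illustration only).
x = np.array([[0.0, 0.0, 0.0], [1.1, 0.0, 0.0], [2.4, 0.5, 0.0]])
terms = [(1.0, 0, 1), (-1.0, 1, 2)]          # c = r01 - r12
print(constraint(x, terms))
```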

  7. Approximating high-dimensional dynamics by barycentric coordinates with linear programming

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hirata, Yoshito, E-mail: yoshito@sat.t.u-tokyo.ac.jp; Aihara, Kazuyuki; Suzuki, Hideyuki

    The increasing development of novel methods and techniques facilitates the measurement of high-dimensional time series but challenges our ability for accurate modeling and predictions. The use of a general mathematical model requires the inclusion of many parameters, which are difficult to fit for the relatively short high-dimensional time series observed. Here, we propose a novel method to accurately model a high-dimensional time series. Our method extends barycentric coordinates to high-dimensional phase space by employing linear programming, allowing for the approximation errors explicitly. The extension helps to produce free-running time-series predictions that preserve typical topological, dynamical, and/or geometric characteristics of the underlying attractors more accurately than the radial basis function model that is widely used. The method can be broadly applied, from helping to improve weather forecasting, to creating electronic instruments that sound more natural, and to comprehensively understanding complex biological data.
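The barycentric step can be posed as a small feasibility linear program: nonnegative weights summing to one that reproduce a query state from observed states. The sketch below (assuming SciPy's `linprog` is available) illustrates only this step, not the authors' full prediction scheme with explicit approximation errors.

```python
import numpy as np
from scipy.optimize import linprog   # assumes SciPy is installed

# Observed phase-space states (toy 2-D example) and a query state.
V = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
x = np.array([0.3, 0.6])

# Feasibility LP: find w >= 0 with V^T w = x and sum(w) = 1.
A_eq = np.vstack([V.T, np.ones(len(V))])
b_eq = np.append(x, 1.0)
res = linprog(c=np.zeros(len(V)), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
print(V.T @ res.x)  # reconstructs the query point [0.3, 0.6]
```

Applying the same weights to the observed states' successors then yields the one-step prediction, which is the sense in which the coordinates are "barycentric".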

  8. Extension of non-linear beam models with deformable cross sections

    NASA Astrophysics Data System (ADS)

    Sokolov, I.; Krylov, S.; Harari, I.

    2015-12-01

    Geometrically exact beam theory is extended to allow distortion of the cross section. We present an appropriate set of cross-section basis functions and provide physical insight to the cross-sectional distortion from linear elastostatics. The beam formulation in terms of material (back-rotated) beam internal force resultants and work-conjugate kinematic quantities emerges naturally from the material description of virtual work of constrained finite elasticity. The inclusion of cross-sectional deformation allows straightforward application of three-dimensional constitutive laws in the beam formulation. Beam counterparts of applied loads are expressed in terms of the original three-dimensional data. Special attention is paid to the treatment of the applied stress, keeping in mind applications such as hydrogel actuators under environmental stimuli or devices made of electroactive polymers. Numerical comparisons show the ability of the beam model to reproduce finite elasticity results with good efficiency.

  9. Theoretical study of the alkaline-earth metal superoxides BeO2 through SrO2

    NASA Technical Reports Server (NTRS)

    Bauschlicher, Charles W., Jr.; Partridge, Harry; Sodupe, Mariona; Langhoff, Stephen R.

    1992-01-01

    Three competing bonding mechanisms have been identified for the alkaline-earth metal superoxides: these result in a change in the optimal structure and ground state as the alkaline-earth metal becomes heavier. For example, BeO2 has a linear ³Σg⁻ ground-state structure, whereas both CaO2 and SrO2 have C₂ᵥ ¹A₁ structures. For MgO2, the theoretical calculations are less definitive, as the ³A₂ C₂ᵥ structure is computed to lie only about 3 kcal/mol above the ³Σg⁻ linear structure. The bond dissociation energies for the alkaline-earth metal superoxides have been computed using extensive Gaussian basis sets and treating electron correlation at the modified coupled-pair functional or coupled-cluster singles and doubles level with a perturbational estimate of the triple excitations.

  10. Approximating high-dimensional dynamics by barycentric coordinates with linear programming.

    PubMed

    Hirata, Yoshito; Shiro, Masanori; Takahashi, Nozomu; Aihara, Kazuyuki; Suzuki, Hideyuki; Mas, Paloma

    2015-01-01

    The increasing development of novel methods and techniques facilitates the measurement of high-dimensional time series but challenges our ability for accurate modeling and predictions. The use of a general mathematical model requires the inclusion of many parameters, which are difficult to fit for the relatively short high-dimensional time series observed. Here, we propose a novel method to accurately model a high-dimensional time series. Our method extends barycentric coordinates to high-dimensional phase space by employing linear programming, allowing for the approximation errors explicitly. The extension helps to produce free-running time-series predictions that preserve typical topological, dynamical, and/or geometric characteristics of the underlying attractors more accurately than the radial basis function model that is widely used. The method can be broadly applied, from helping to improve weather forecasting, to creating electronic instruments that sound more natural, and to comprehensively understanding complex biological data.

  11. Output-Feedback Control of Unknown Linear Discrete-Time Systems With Stochastic Measurement and Process Noise via Approximate Dynamic Programming.

    PubMed

    Wang, Jun-Sheng; Yang, Guang-Hong

    2017-07-25

    This paper studies the optimal output-feedback control problem for unknown linear discrete-time systems with stochastic measurement and process noise. A dithered Bellman equation with the innovation covariance matrix is constructed via the expectation operator given in the form of a finite summation. On this basis, an output-feedback-based approximate dynamic programming method is developed, where the terms depending on the innovation covariance matrix are available with the aid of the innovation covariance matrix identified beforehand. Therefore, by iterating the Bellman equation, the resulting value function converges to the optimal one in the presence of the aforementioned noise, and nearly optimal control laws are delivered. To show the effectiveness and the advantages of the proposed approach, a simulation example and a velocity control experiment on a DC machine are employed.
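For intuition, when the dynamics are known the Bellman equation of the underlying LQ problem can be iterated directly to its Riccati fixed point; the approximate dynamic programming method of the record estimates the analogous quantities from measured data instead. The system matrices and weights below are illustrative only.

```python
import numpy as np

# Illustrative discrete-time system and quadratic weights (not from the paper).
A = np.array([[1.0, 0.1], [0.0, 0.95]])
B = np.array([[0.0], [0.1]])
Q, R = np.eye(2), np.array([[1.0]])

# Value iteration on the Bellman equation: V(x) = x^T P x at convergence.
P = np.zeros((2, 2))
for _ in range(500):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # greedy feedback gain
    P = Q + A.T @ P @ (A - B @ K)                      # Bellman backup
print(K)  # converged near-optimal feedback gain for u = -K x
```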

  12. Classification With Truncated Distance Kernel.

    PubMed

    Huang, Xiaolin; Suykens, Johan A K; Wang, Shuning; Hornegger, Joachim; Maier, Andreas

    2018-05-01

    This brief proposes a truncated distance (TL1) kernel, which results in a classifier that is nonlinear in the global region but is linear in each subregion. With this kernel, the subregion structure can be trained using all the training data and local linear classifiers can be established simultaneously. The TL1 kernel has good adaptiveness to nonlinearity and is suitable for problems which require different nonlinearities in different areas. Though the TL1 kernel is not positive semidefinite, some classical kernel learning methods are still applicable, which means that the TL1 kernel can be directly used in standard toolboxes by replacing the kernel evaluation. In numerical experiments, the TL1 kernel with a pre-given parameter achieves similar or better performance than the radial basis function kernel with the parameter tuned by cross validation, implying that the TL1 kernel is a promising nonlinear kernel for classification tasks.
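A truncated ℓ₁-distance kernel of this kind can be written as K(x, z) = max(ρ − ‖x − z‖₁, 0), which is piecewise linear in each argument and vanishes beyond the truncation radius. The Gram-matrix sketch below, on toy points, shows both properties; ρ = 1 is an arbitrary choice here.

```python
import numpy as np

def tl1_kernel(X, Z, rho=1.0):
    """Truncated ell_1 distance kernel: K(x, z) = max(rho - ||x - z||_1, 0).
    Piecewise linear in the input and exactly zero beyond radius rho."""
    D = np.abs(X[:, None, :] - Z[None, :, :]).sum(axis=-1)
    return np.maximum(rho - D, 0.0)

X = np.array([[0.0, 0.0], [0.2, 0.1], [3.0, 3.0]])
G = tl1_kernel(X, X, rho=1.0)
print(G)  # diagonal equals rho; distant pairs get exactly 0
```

The compact support means distant points do not interact, which is what lets local linear classifiers emerge per subregion.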

  13. A multi-component nanocomposite screen-printed ink with non-linear touch sensitive electrical conductivity

    NASA Astrophysics Data System (ADS)

    Webb, Alexander J.; Szablewski, Marek; Bloor, David; Atkinson, Del; Graham, Adam; Laughlin, Paul; Lussey, David

    2013-04-01

    Printable electronics is an innovative area of technology with great commercial potential. Here, a screen-printed functional ink, comprising a combination of semiconducting acicular particles, electrically insulating nanoparticles and a base polymer ink, is described that exhibits pronounced pressure sensitive electrical properties for applications in sensing and touch sensitive surfaces. The combination of these components in the as-printed ink yield a complex structure and a large and reproducible touch pressure sensitive resistance range. In contrast to the case for some composite systems, the resistance changes occur down to applied pressures of 13 Pa. Current-voltage measurements at fixed pressures show monotonic non-linear behaviour, which becomes more Ohmic at higher pressures and in all cases shows some hysteresis. The physical basis for conduction, particularly in the low pressure regime, can be described in terms of field assisted quantum mechanical tunnelling.

  14. State of the art in electromagnetic modeling for the Compact Linear Collider

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Candel, Arno; Kabel, Andreas; Lee, Lie-Quan

    SLAC's Advanced Computations Department (ACD) has developed the parallel 3D electromagnetic time-domain code T3P for simulations of wakefields and transients in complex accelerator structures. T3P is based on state-of-the-art Finite Element methods on unstructured grids and features unconditional stability, quadratic surface approximation and up to 6th-order vector basis functions for unprecedented simulation accuracy. Optimized for large-scale parallel processing on leadership supercomputing facilities, T3P allows simulations of realistic 3D structures with fast turn-around times, aiding the design of the next generation of accelerator facilities. Applications include simulations of the proposed two-beam accelerator structures for the Compact Linear Collider (CLIC) - wakefield damping in the Power Extraction and Transfer Structure (PETS) and power transfer to the main beam accelerating structures are investigated.

  15. Design of a linear projector for use with the normal modes of the GLAS 4th order GCM

    NASA Technical Reports Server (NTRS)

    Bloom, S. C.

    1984-01-01

    The design of a linear projector for use with the normal modes of a model of atmospheric circulation is discussed. A central element in any normal mode initialization scheme is the process by which a set of data fields - winds, temperatures or geopotentials, and surface pressures - are expressed ("projected") in terms of the coefficients of a model's normal modes. This process is completely analogous to the Fourier decomposition of a single field (indeed an FFT applied in the zonal direction is a part of the process). Complete separability in all three spatial dimensions is assumed. The basis functions for the modal expansion are given. An important feature of the normal modes is their coupling of the structures of different fields; thus a coefficient in a normal mode expansion would contain both mass and momentum information.

  16. Matching by linear programming and successive convexification.

    PubMed

    Jiang, Hao; Drew, Mark S; Li, Ze-Nian

    2007-06-01

    We present a novel convex programming scheme to solve matching problems, focusing on the challenging problem of matching in a large search range and with cluttered background. Matching is formulated as metric labeling with L1 regularization terms, for which we propose a novel linear programming relaxation method and an efficient successive convexification implementation. The unique feature of the proposed relaxation scheme is that a much smaller set of basis labels is used to represent the original label space. This greatly reduces the size of the searching space. A successive convexification scheme solves the labeling problem in a coarse to fine manner. Importantly, the original cost function is reconvexified at each stage, in the new focus region only, and the focus region is updated so as to refine the searching result. This makes the method well-suited for large label set matching. Experiments demonstrate successful applications of the proposed matching scheme in object detection, motion estimation, and tracking.

  17. The role of service areas in the optimization of FSS orbital and frequency assignments

    NASA Technical Reports Server (NTRS)

    Levis, C. A.; Wang, C. W.; Yamamura, Y.; Reilly, C. H.; Gonsalvez, D. J.

    1985-01-01

    A relationship is derived, on a single-entry interference basis, for the minimum allowable spacing between two satellites as a function of electrical parameters and service-area geometries. For circular beams, universal curves relate the topocentric satellite spacing angle to the service-area separation angle measured at the satellite. The corresponding geocentric spacing depends only weakly on the mean longitude of the two satellites, and this is true also for elliptical antenna beams. As a consequence, if frequency channels are preassigned, the orbital assignment synthesis of a satellite system can be formulated as a mixed-integer programming (MIP) problem or approximated by a linear programming (LP) problem, with the interference protection requirements enforced by constraints while some linear function is optimized. Possible objective-function choices are discussed and explicit formulations are presented for the choice of the sum of the absolute deviations of the orbital locations from some prescribed ideal location set. A test problem is posed consisting of six service areas, each served by one satellite, all using elliptical antenna beams and the same frequency channels. Numerical results are given for the three ideal location prescriptions for both the MIP and LP formulations. The resulting scenarios also satisfy reasonable aggregate interference protection requirements.
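The LP formulation sketched above minimizes the sum of absolute deviations of orbital locations from prescribed ideal locations, subject to minimum-spacing constraints. A small illustrative sketch: the auxiliary variables u_i linearize |x_i − ideal_i|, and fixing an east-to-west ordering of the satellites makes the spacing constraints linear (the four ideal longitudes and the uniform 2.5° spacing are invented for illustration, not from the paper):

```python
import numpy as np
from scipy.optimize import linprog

# Minimize sum |x_i - ideal_i| subject to minimum spacing between adjacent
# satellites. All numerical values are illustrative assumptions.
ideal = np.array([100.0, 102.0, 103.0, 106.0])   # ideal longitudes, deg E
dmin = 2.5                                        # required neighbor spacing, deg
n = ideal.size

# Decision vector: [x_0..x_{n-1}, u_0..u_{n-1}] with u_i >= |x_i - ideal_i|.
c = np.concatenate([np.zeros(n), np.ones(n)])     # minimize sum of deviations
A_ub, b_ub = [], []
for i in range(n):
    for sign in (+1.0, -1.0):                      # u_i >= sign*(x_i - ideal_i)
        row = np.zeros(2 * n)
        row[i], row[n + i] = sign, -1.0
        A_ub.append(row)
        b_ub.append(sign * ideal[i])
for i in range(n - 1):                             # x_{i+1} - x_i >= dmin
    row = np.zeros(2 * n)
    row[i], row[i + 1] = 1.0, -1.0
    A_ub.append(row)
    b_ub.append(-dmin)

res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(None, None)] * n + [(0.0, None)] * n)
```

The MIP version arises when the ordering of satellites is itself a decision, which makes the pairwise spacing constraints disjunctive.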

  18. Very Low-Cost Nutritious Diet Plans Designed by Linear Programming.

    ERIC Educational Resources Information Center

    Foytik, Jerry

    1981-01-01

    Provides procedural details of Linear Programming, developed by the U.S. Department of Agriculture to devise a dietary guide for consumers that minimizes food costs without sacrificing nutritional quality. Compares Linear Programming with the Thrifty Food Plan, which has been a basis for allocating coupons under the Food Stamp Program. (CS)
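The diet problem described here is the textbook linear program: choose food quantities x ≥ 0 minimizing cost cᵀx while meeting nutrient requirements Ax ≥ b. A minimal sketch with invented foods, prices, and nutrient values (the real USDA formulation uses many more foods and constraints):

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: columns are foods, rows are nutrients per unit of food.
cost = np.array([0.80, 1.50, 0.30, 2.00])      # $ per unit of each food
A = np.array([[ 70, 120,  40, 200],             # kcal per unit
              [  3,   8,   1,  12],             # protein, g per unit
              [ 20,   5,  30,   2]])            # vitamin C, mg per unit
need = np.array([2000, 55, 60])                 # daily requirements

# linprog minimizes c @ x subject to A_ub @ x <= b_ub, so negate the
# "at least" nutrient constraints to fit that form.
res = linprog(cost, A_ub=-A, b_ub=-need, bounds=[(0, None)] * 4)
plan = res.x                                    # units of each food per day
```

Every requirement is satisfied at minimum total cost; tightening requirements or adding palatability bounds changes only the data, not the method.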

  19. The effect of mineral composition on the sorption of cesium ions on geological formations.

    PubMed

    Kónya, József; Nagy, Noémi M; Nemes, Zoltán

    2005-10-15

    The sorption of cesium-137 on rock samples, mainly on clay rocks, is determined as a function of the mineral composition of the rocks. A relation between the mineral groups (tectosilicates, phyllosilicates, clay minerals, carbonates) and their cesium sorption properties is shown. A linear model is constructed by which the distribution coefficients of the different minerals can be calculated from the mineral composition and the net distribution coefficient of the rock. On the basis of the distribution coefficients of the minerals, the cesium sorption properties of other rocks can be predicted.
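The linear model described here treats a rock's net distribution coefficient as the composition-weighted sum of per-mineral coefficients, K_rock = Σ f_i K_i, so given several rocks with known compositions and net coefficients, the per-mineral values follow from least squares. A synthetic sketch (all fractions and coefficients are invented):

```python
import numpy as np

# Hypothetical mass fractions of 3 mineral groups in 5 rocks (rows sum to 1).
F = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.7, 0.1],
              [0.5, 0.2, 0.3],
              [0.1, 0.6, 0.3],
              [0.3, 0.3, 0.4]])
kd_true = np.array([15.0, 120.0, 2.0])   # assumed per-mineral distribution coeffs
kd_rock = F @ kd_true                     # measured net coefficients of the rocks

# Recover per-mineral coefficients from rock-level measurements.
kd_fit, *_ = np.linalg.lstsq(F, kd_rock, rcond=None)
```

With the fitted per-mineral coefficients in hand, the sorption of a new rock is predicted as its composition vector dotted with `kd_fit`.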

  20. First principle investigation of structural and optical properties of cubic titanium dioxide

    NASA Astrophysics Data System (ADS)

    Dash, Debashish; Chaudhury, Saurabh; Tripathy, Susanta K.

    2018-05-01

    This paper presents an analysis of structural and optical properties of cubic titanium dioxide (TiO2) using an Orthogonalized Linear Combinations of Atomic Orbitals (OLCAO) basis set within the framework of Density Functional Theory (DFT). The structural property, specifically the lattice constant `a', and the optical properties such as refractive index, extinction coefficient, and reflectivity are investigated and discussed in the energy range of 0-16 eV. Further, the results have been compared with previous theoretical as well as experimental results. It was found that the DFT-based simulation results are a good approximation to the experimental results.

  1. An extended basis inexact shift-invert Lanczos for the efficient solution of large-scale generalized eigenproblems

    NASA Astrophysics Data System (ADS)

    Rewieński, M.; Lamecki, A.; Mrozowski, M.

    2013-09-01

    This paper proposes a technique, based on the Inexact Shift-Invert Lanczos (ISIL) method with Inexact Jacobi Orthogonal Component Correction (IJOCC) refinement, and a preconditioned conjugate-gradient (PCG) linear solver with multilevel preconditioner, for finding several eigenvalues for generalized symmetric eigenproblems. Several eigenvalues are found by constructing (with the ISIL process) an extended projection basis. Presented results of numerical experiments confirm the technique can be effectively applied to challenging, large-scale problems characterized by very dense spectra, such as resonant cavities with spatial dimensions which are large with respect to wavelengths of the resonating electromagnetic fields. It is also shown that the proposed scheme based on inexact linear solves delivers superior performance, as compared to methods which rely on exact linear solves, indicating tremendous potential of the 'inexact solve' concept. Finally, the scheme which generates an extended projection basis is found to provide a cost-efficient alternative to classical deflation schemes when several eigenvalues are computed.
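Shift-invert Lanczos for a generalized symmetric eigenproblem Kx = λMx is available off the shelf in SciPy; a small sketch on a 1-D Laplacian illustrates the basic mechanism. Note that SciPy performs exact sparse factorizations for the inner solves, whereas the paper's contribution is precisely to replace them with inexact preconditioned-CG solves at scale:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

# Generalized symmetric eigenproblem K x = lambda M x for a 1-D Laplacian.
n = 200
K = sp.diags([-np.ones(n - 1), 2.0 * np.ones(n), -np.ones(n - 1)],
             [-1, 0, 1], format="csc")
M = sp.identity(n, format="csc")

# Shift-invert targets eigenvalues nearest sigma; each Lanczos step applies
# (K - sigma*M)^{-1}, realized here by an exact sparse factorization.
vals, vecs = eigsh(K, k=4, M=M, sigma=0.01, which="LM")

residuals = np.linalg.norm(K @ vecs - (M @ vecs) * vals, axis=0)
```

For dense spectra, the shift concentrates the Lanczos process on the eigenvalue cluster of interest, which is what makes the inexact-solve variant pay off.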

  2. Variations in respiratory excretion of carbon dioxide can be used to calculate pulmonary blood flow.

    PubMed

    Preiss, David A; Azami, Takafumi; Urman, Richard D

    2015-02-01

    A non-invasive means of measuring pulmonary blood flow (PBF) would have numerous benefits in medicine. Traditionally, respiratory-based methods require breathing maneuvers, partial rebreathing, or foreign gas mixing because exhaled CO2 volume on a per-breath basis does not accurately represent alveolar exchange of CO2. We hypothesized that if the dilutional effect of the functional residual capacity was accounted for, the relationship between the calculated volume of CO2 removed per breath and the alveolar partial pressure of CO2 would be inversely linear. A computer model was developed that uses variable tidal breathing to calculate CO2 removal per breath at the level of the alveoli. We iterated estimates for functional residual capacity to create the best linear fit of alveolar CO2 pressure and CO2 elimination for 10 minutes of breathing and incorporated the volume of CO2 elimination into the Fick equation to calculate PBF. The relationship between alveolar pressure of CO2 and CO2 elimination produced an R(2) = 0.83. The optimal functional residual capacity differed from the "actual" capacity by 0.25 L (8.3%). The repeatability coefficient leveled at 0.09 at 10 breaths, and the difference between the PBF calculated by the model and the preset blood flow was 0.62 ± 0.53 L/minute. With variations in tidal breathing, a linear relationship exists between alveolar CO2 pressure and CO2 elimination. Existing technology may be used to calculate CO2 elimination during quiet breathing and might therefore be used to accurately calculate PBF in humans with healthy lungs.

  3. A spline-based non-linear diffeomorphism for multimodal prostate registration.

    PubMed

    Mitra, Jhimli; Kato, Zoltan; Martí, Robert; Oliver, Arnau; Lladó, Xavier; Sidibé, Désiré; Ghose, Soumya; Vilanova, Joan C; Comet, Josep; Meriaudeau, Fabrice

    2012-08-01

    This paper presents a novel method for non-rigid registration of transrectal ultrasound and magnetic resonance prostate images based on a non-linear regularized framework of point correspondences obtained from a statistical measure of shape-contexts. The segmented prostate shapes are represented by shape-contexts and the Bhattacharyya distance between the shape representations is used to find the point correspondences between the 2D fixed and moving images. The registration method involves parametric estimation of the non-linear diffeomorphism between the multimodal images and has its basis in solving a set of non-linear equations of thin-plate splines. The solution is obtained as the least-squares solution of an over-determined system of non-linear equations constructed by integrating a set of non-linear functions over the fixed and moving images. However, this may not result in clinically acceptable transformations of the anatomical targets. Therefore, the regularized bending energy of the thin-plate splines along with the localization error of established correspondences should be included in the system of equations. The registration accuracies of the proposed method are evaluated in 20 pairs of prostate mid-gland ultrasound and magnetic resonance images. The results obtained in terms of Dice similarity coefficient show an average of 0.980±0.004, average 95% Hausdorff distance of 1.63±0.48 mm and mean target registration and target localization errors of 1.60±1.17 mm and 0.15±0.12 mm respectively. Copyright © 2012 Elsevier B.V. All rights reserved.

  4. The neural basis of attaining conscious awareness of sad mood.

    PubMed

    Smith, Ryan; Braden, B Blair; Chen, Kewei; Ponce, Francisco A; Lane, Richard D; Baxter, Leslie C

    2015-09-01

    The neural processes associated with becoming aware of sad mood are not fully understood. We examined the dynamic process of becoming aware of sad mood and recovery from sad mood. Sixteen healthy subjects underwent fMRI while participating in a sadness induction task designed to allow for variable mood induction times. Individualized regressors linearly modeled the time periods during the attainment of self-reported sad and baseline "neutral" mood states, and the validity of the linearity assumption was further tested using independent component analysis. During sadness induction the dorsomedial and ventrolateral prefrontal cortices, and anterior insula exhibited a linear increase in the blood oxygen level-dependent (BOLD) signal until subjects became aware of a sad mood and then a subsequent linear decrease as subjects transitioned from sadness back to the non-sadness baseline condition. These findings extend understanding of the neural basis of conscious emotional experience.

  5. Total Water-Vapor Distribution in the Summer Cloudless Atmosphere over the South of Western Siberia

    NASA Astrophysics Data System (ADS)

    Troshkin, D. N.; Bezuglova, N. N.; Kabanov, M. V.; Pavlov, V. E.; Sokolov, K. I.; Sukovatov, K. Yu.

    2017-12-01

    The spatial distribution of the total water vapor in different climatic zones of the south of Western Siberia in the summers of 2008-2011 is studied on the basis of Envisat data. The correlation analysis of the water-vapor time series from the Envisat data W and radiosonde observations w for the territory of Omsk aerological station shows that the absolute values of W and w are linearly correlated with a coefficient of 0.77 (significance level p < 0.05). The distribution functions of the total water vapor are calculated based on the number of its measurements by Envisat for a cloudless sky of three zones with different physical properties of the underlying surface, in particular, steppes to the south of the Vasyugan Swamp and forests to the northeast of the Swamp. The distribution functions are bimodal; each mode follows the lognormal law. The parameters of these functions are given.
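A bimodal distribution whose modes each follow a lognormal law can be fitted as a two-component Gaussian mixture in log space, since a lognormal variable is Gaussian after taking logarithms. A sketch on synthetic water-vapor values (the mode parameters are invented, not taken from the Envisat data):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
# Synthetic bimodal total-water-vapor sample: two lognormal modes (assumed params).
w = np.concatenate([rng.lognormal(mean=np.log(12), sigma=0.15, size=400),
                    rng.lognormal(mean=np.log(25), sigma=0.10, size=300)])

# A lognormal mixture is a Gaussian mixture in log space.
gm = GaussianMixture(n_components=2, random_state=0).fit(np.log(w)[:, None])
modes = np.exp(np.sort(gm.means_.ravel()))   # median of each lognormal mode
```

The fitted component means and variances in log space are exactly the lognormal parameters reported for each mode.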

  6. Simulated quantum computation of molecular energies.

    PubMed

    Aspuru-Guzik, Alán; Dutoi, Anthony D; Love, Peter J; Head-Gordon, Martin

    2005-09-09

    The calculation time for the energy of atoms and molecules scales exponentially with system size on a classical computer but polynomially using quantum algorithms. We demonstrate that such algorithms can be applied to problems of chemical interest using modest numbers of quantum bits. Calculations of the water and lithium hydride molecular ground-state energies have been carried out on a quantum computer simulator using a recursive phase-estimation algorithm. The recursive algorithm reduces the number of quantum bits required for the readout register from about 20 to 4. Mappings of the molecular wave function to the quantum bits are described. An adiabatic method for the preparation of a good approximate ground-state wave function is described and demonstrated for a stretched hydrogen molecule. The number of quantum bits required scales linearly with the number of basis functions, and the number of gates required grows polynomially with the number of quantum bits.

  7. Electrical cable utilization for wave energy converters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bull, Diana; Baca, Michael; Schenkman, Benjamin

    Here, this paper investigates the suitability of sizing the electrical export cable based on the rating of the contributing WECs within a farm. These investigations have produced a new methodology to evaluate the probabilities associated with peak power values on an annual basis. It has been shown that the peaks in pneumatic power production will follow an exponential probability function for a linear model. A methodology to combine all the individual probability functions into an annual view has been demonstrated on pneumatic power production by a Backward Bent Duct Buoy (BBDB). These investigations have also resulted in a highly simplified and perfunctory model of installed cable cost as a function of voltage and conductor cross-section. This work solidifies the need to determine electrical export cable rating based on expected energy delivery as opposed to device rating as small decreases in energy delivery can result in cost savings.

  8. An Affect-Centered Model of the Psyche and its Consequences for a New Understanding of Nonlinear Psychodynamics

    NASA Astrophysics Data System (ADS)

    Ciompi, Luc

    At variance with a purely cognitivistic approach, an affect-centered model of mental functioning called `fractal affect-logic' is presented on the basis of current emotional-psychological and neurobiological research. Functionally integrated feeling-thinking-behaving programs generated by action appear in this model as the basic `building blocks' of the psyche. Affects are understood as the essential source of energy that mobilises and organises both linear and nonlinear affective-cognitive dynamics, under the influence of appropriate control parameters and order parameters. Global patterns of affective-cognitive functioning form dissipative structures in the sense of Prigogine, with affect-specific attractors and repulsors, bifurcations, high sensitivity for initial conditions and a fractal overall structure that may be represented in a complex potential landscape of variable configuration. This concept opens new possibilities of understanding normal and pathological psychodynamics and sociodynamics, with numerous practical and theoretical implications.

  9. Electrical cable utilization for wave energy converters

    DOE PAGES

    Bull, Diana; Baca, Michael; Schenkman, Benjamin

    2018-04-27

    Here, this paper investigates the suitability of sizing the electrical export cable based on the rating of the contributing WECs within a farm. These investigations have produced a new methodology to evaluate the probabilities associated with peak power values on an annual basis. It has been shown that the peaks in pneumatic power production will follow an exponential probability function for a linear model. A methodology to combine all the individual probability functions into an annual view has been demonstrated on pneumatic power production by a Backward Bent Duct Buoy (BBDB). These investigations have also resulted in a highly simplified and perfunctory model of installed cable cost as a function of voltage and conductor cross-section. This work solidifies the need to determine electrical export cable rating based on expected energy delivery as opposed to device rating as small decreases in energy delivery can result in cost savings.

  10. Reinforcement learning solution for HJB equation arising in constrained optimal control problem.

    PubMed

    Luo, Biao; Wu, Huai-Ning; Huang, Tingwen; Liu, Derong

    2015-11-01

    The constrained optimal control problem depends on the solution of the complicated Hamilton-Jacobi-Bellman equation (HJBE). In this paper, a data-based off-policy reinforcement learning (RL) method is proposed, which learns the solution of the HJBE and the optimal control policy from real system data. One important feature of the off-policy RL is that its policy evaluation can be realized with data generated by other behavior policies, not necessarily the target policy, which solves the insufficient exploration problem. The convergence of the off-policy RL is proved by demonstrating its equivalence to the successive approximation approach. Its implementation procedure is based on the actor-critic neural networks structure, where the function approximation is conducted with linearly independent basis functions. Subsequently, the convergence of the implementation procedure with function approximation is also proved. Finally, its effectiveness is verified through computer simulations. Copyright © 2015 Elsevier Ltd. All rights reserved.
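Function approximation with linearly independent basis functions, as used in the actor-critic implementation above, can be illustrated with a least-squares temporal-difference (LSTD-style) policy evaluation on a toy problem. The chain MDP and polynomial basis below are illustrative assumptions, not the paper's setup:

```python
import numpy as np

# Policy evaluation with linearly independent basis functions (LSTD sketch).
# Chain MDP: states 0..9, fixed policy "move right", reward -1 per step,
# absorbing terminal state at 10. True value: V(s) = -(10 - s).
N = 10
gamma = 1.0

def phi(s):
    return np.array([1.0, s, s * s])    # linearly independent polynomial basis

A = np.zeros((3, 3))
b = np.zeros(3)
for s in range(N):
    s_next = s + 1
    f = phi(s)
    f_next = np.zeros(3) if s_next == N else phi(s_next)   # terminal features are zero
    A += np.outer(f, f - gamma * f_next)
    b += f * (-1.0)                     # reward -1 on every transition

w = np.linalg.solve(A, b)               # weights of the value-function approximation
V = np.array([phi(s) @ w for s in range(N)])
```

Because the true value function lies in the span of the basis, the least-squares solve recovers it exactly; with a poorer basis, the same equations yield the best projected fixed point.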

  11. Can we detect a nonlinear response to temperature in European plant phenology?

    PubMed

    Jochner, Susanne; Sparks, Tim H; Laube, Julia; Menzel, Annette

    2016-10-01

    Over a large temperature range, the statistical association between spring phenology and temperature is often regarded and treated as a linear function. There are suggestions that a sigmoidal relationship with definite upper and lower limits to leaf unfolding and flowering onset dates might be more realistic. We utilised European plant phenological records provided by the European phenology database PEP725 and gridded monthly mean temperature data for 1951-2012 calculated from the ENSEMBLES data set E-OBS (version 7.0). We analysed 568,456 observations of ten spring flowering or leafing phenophases derived from 3657 stations in 22 European countries in order to detect possible nonlinear responses to temperature. Linear response rates averaged for all stations ranged between -7.7 (flowering of hazel) and -2.7 days °C⁻¹ (leaf unfolding of beech and oak). A lower sensitivity at the cooler end of the temperature range was detected for most phenophases. However, a similar lower sensitivity at the warmer end was not that evident. For only ∼14 % of the station time series (where a comparison between linear and nonlinear model was possible), nonlinear models described the relationship significantly better than linear models. Although in most cases simple linear models might be still sufficient to predict future changes, this linear relationship between phenology and temperature might not be appropriate when incorporating phenological data of very cold (and possibly very warm) environments. For these cases, extrapolations on the basis of linear models would introduce uncertainty in expected ecosystem changes.
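The linear-versus-sigmoidal comparison can be sketched by fitting both models to synthetic onset dates and comparing residual sums of squares (the sigmoid parameters, temperature range, and noise level are invented for illustration):

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import linregress

def sigmoid(t, lo, hi, t0, k):
    """Onset date falling from hi to lo with temperature: definite limits."""
    return lo + (hi - lo) / (1.0 + np.exp(k * (t - t0)))

rng = np.random.default_rng(3)
temp = np.linspace(-2, 14, 80)                 # spring mean temperature, degC
# Synthetic onset dates (day of year) generated from an assumed sigmoid response.
onset = sigmoid(temp, 95.0, 145.0, 6.0, 0.5) + rng.normal(0.0, 2.0, temp.size)

# Linear model.
lin = linregress(temp, onset)
rss_lin = np.sum((onset - (lin.intercept + lin.slope * temp)) ** 2)

# Sigmoidal model with upper and lower limits.
p0 = [onset.min(), onset.max(), temp.mean(), 0.3]
popt, _ = curve_fit(sigmoid, temp, onset, p0=p0, maxfev=10000)
rss_sig = np.sum((onset - sigmoid(temp, *popt)) ** 2)
```

In practice the two fits would be compared with an F-test or AIC, which is how the "significantly better for only ~14 % of series" result above is obtained.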

  12. Advanced Mathematics Online: Assessing Particularities in the Online Delivery of a Second Linear Algebra Course

    ERIC Educational Resources Information Center

    Montiel, Mariana; Bhatti, Uzma

    2010-01-01

    This article presents an overview of some issues that were confronted when delivering an online second Linear Algebra course (assuming a previous Introductory Linear Algebra course) to graduate students enrolled in a Secondary Mathematics Education program. The focus is on performance in one particular aspect of the course: "change of basis" and…

  13. Generalized Fractional Derivative Anisotropic Viscoelastic Characterization.

    PubMed

    Hilton, Harry H

    2012-01-18

    Isotropic linear and nonlinear fractional derivative constitutive relations are formulated and examined in terms of many parameter generalized Kelvin models and are analytically extended to cover general anisotropic homogeneous or non-homogeneous as well as functionally graded viscoelastic material behavior. Equivalent integral constitutive relations, which are computationally more powerful, are derived from fractional differential ones and the associated anisotropic temperature-moisture-degree-of-cure shift functions and reduced times are established. Approximate Fourier transform inversions for fractional derivative relations are formulated and their accuracy is evaluated. The efficacy of integer and fractional derivative constitutive relations is compared and the preferential use of either characterization in analyzing isotropic and anisotropic real materials must be examined on a case-by-case basis. Approximate protocols for curve fitting analytical fractional derivative results to experimental data are formulated and evaluated.

  14. A theoretical prediction of the acoustic pressure generated by turbulence-flame front interactions

    NASA Technical Reports Server (NTRS)

    Huff, R. G.

    1984-01-01

    The equations of momentum and continuity are combined and linearized, yielding the one dimensional nonhomogeneous acoustic wave equation. Three terms in the non-homogeneous equation act as acoustic sources and are taken to be forcing functions acting on the homogeneous wave equation. The three source terms are: fluctuating entropy, turbulence gradients, and turbulence-flame interactions. Each source term is discussed. The turbulence-flame interaction source is used as the basis for computing the source acoustic pressure from the Fourier transformed wave equation. Pressure fluctuations created in turbopump gas generators and turbines may act as a forcing function for turbine and propellant tube vibrations in Earth to orbit space propulsion systems and could reduce their life expectancy. A preliminary assessment of the acoustic pressure fluctuations in such systems is presented.

  15. A theoretical prediction of the acoustic pressure generated by turbulence-flame front interactions

    NASA Technical Reports Server (NTRS)

    Huff, R. G.

    1984-01-01

    The equations of momentum and continuity are combined and linearized yielding the one dimensional nonhomogeneous acoustic wave equation. Three terms in the non-homogeneous equation act as acoustic sources and are taken to be forcing functions acting on the homogeneous wave equation. The three source terms are: fluctuating entropy, turbulence gradients, and turbulence-flame interactions. Each source term is discussed. The turbulence-flame interaction source is used as the basis for computing the source acoustic pressure from the Fourier transformed wave equation. Pressure fluctuations created in turbopump gas generators and turbines may act as a forcing function for turbine and propellant tube vibrations in earth to orbit space propulsion systems and could reduce their life expectancy. A preliminary assessment of the acoustic pressure fluctuations in such systems is presented.

  16. Multiresolution quantum chemistry in multiwavelet bases: excited states from time-dependent Hartree–Fock and density functional theory via linear response

    DOE PAGES

    Yanai, Takeshi; Fann, George I.; Beylkin, Gregory; ...

    2015-02-25

    We present a fully numerical method for time-dependent Hartree–Fock and density functional theory (TD-HF/DFT) with the Tamm–Dancoff (TD) approximation, based on a multiresolution analysis (MRA) approach. From a reformulation with effective use of the density matrix operator, we obtain a general form of the HF/DFT linear response equation in the first quantization formalism. It can be readily rewritten as an integral equation with the bound-state Helmholtz (BSH) kernel for the Green's function. The MRA implementation of the resultant equation permits excited state calculations without virtual orbitals. Moreover, the integral equation is efficiently and adaptively solved using a numerical multiresolution solver with multiwavelet bases. Our implementation of the TD-HF/DFT methods is applied to calculating the excitation energies of H2, Be, N2, H2O, and C2H4 molecules. The numerical errors of the calculated excitation energies converge in proportion to the residuals of the equation in the molecular orbitals and response functions. The energies of the excited states at a variety of length scales, ranging from short-range valence excitations to long-range Rydberg-type ones, are consistently accurate. It is shown that the multiresolution calculations yield the correct exponential asymptotic tails for the response functions, whereas those computed with Gaussian basis functions are too diffuse or decay too rapidly. Finally, we introduce a simple asymptotic correction to the local spin-density approximation (LSDA) so that in the TDDFT calculations, the excited states are correctly bound.

  17. Dynamic Kerr effect study on six-membered-ring molecular liquids: benzene, 1,3-cyclohexadiene, 1,4-cyclohexadiene, cyclohexene, and cyclohexane.

    PubMed

    Kakinuma, Shohei; Shirota, Hideaki

    2015-04-02

    The intermolecular dynamics of five six-membered-ring molecular liquids having different aromaticities (benzene, 1,3-cyclohexadiene, 1,4-cyclohexadiene, cyclohexene, and cyclohexane), measured by femtosecond Raman-induced Kerr effect spectroscopy, have been compared in this study. The line shapes of the Fourier transform low-frequency spectra, which arise from the intermolecular vibrational dynamics, are trapezoidal for benzene and 1,3-cyclohexadiene, triangular for 1,4-cyclohexadiene and cyclohexene, and monomodal for cyclohexane. The trapezoidal shapes of the low-frequency spectra of benzene and 1,3-cyclohexadiene are due to the librational motions of their aromatic planar structures, which cause damped nuclear response features. The time integrals of the nuclear responses of the five liquids correlate to the squares of the polarizability anisotropies of the molecules calculated on the basis of density functional theory. The first moments of the low-frequency spectra roughly linearly correlate to the bulk parameters of the square roots of the surface tensions divided by the densities and the square roots of the surface tensions divided by the molecular weights, but the plots for cyclohexene deviate slightly from the correlations. The picosecond overdamped transients of the liquids are well fitted by a biexponential function. The fast time constants of all of the liquids are approximately 1.1-1.4 ps, and they do not obey the Stokes-Einstein-Debye hydrodynamic model. On the other hand, the slow time constants are roughly linearly proportional to the products of the shear viscosities and the molar volumes. The observed intramolecular vibrational modes at less than 700 cm(-1) for all of the liquids are also assigned on the basis of quantum chemistry calculations.

  18. A digital terrain model of bathymetry and shallow-zone bottom-substrate classification for Spednic Lake and estimates of lake-level-dependent habitat to support smallmouth bass persistence modeling

    USGS Publications Warehouse

    Dudley, Robert W.; Schalk, Charles W.; Stasulis, Nicholas W.; Trial, Joan G.

    2011-01-01

    In 2009, the U.S. Geological Survey entered into a cooperative agreement with the International Joint Commission, St. Croix River Board to do an analysis of historical smallmouth bass habitat as a function of lake level for Spednic Lake in an effort to quantify the effects, if any, of historical lake-level management and meteorological conditions (from 1970 to 2009) on smallmouth bass year-class failure. The analysis requires estimating habitat availability as a function of lake level during spawning periods from 1970 to 2009, which is documented in this report. Field work was done from October 19 to 23, and from November 2 to 10, 2009, to acquire acoustic bathymetric (depth) data and acoustic data indicating the character of the surficial lake-bottom sediments. Historical lake-level data during smallmouth bass spawning (May-June) were applied to the bathymetric and surficial-sediment type data sets to produce annual historic estimates of smallmouth-bass-spawning-habitat area. Results show that minimum lake level during the spawning period explained most of the variability (R2 = 0.89) in available spawning habitat for nearshore areas of shallow slope (less than 10 degrees) on the basis of linear correlation. The change in lake level during the spawning period explained most of the variability (R2 = 0.90) in available spawning habitat for areas of steeper slopes (10 to 40 degrees) on the basis of linear correlation. The next step in modeling historic smallmouth bass year-class persistence is to combine this analysis of the effects of lake-level management on habitat availability with meteorological conditions.

  19. Materials prediction via classification learning

    DOE PAGES

    Balachandran, Prasanna V.; Theiler, James; Rondinelli, James M.; ...

    2015-08-25

    In the paradigm of materials informatics for accelerated materials discovery, the choice of feature set (i.e. attributes that capture aspects of structure, chemistry and/or bonding) is critical. Ideally, the feature sets should provide a simple physical basis for extracting major structural and chemical trends and furthermore, enable rapid predictions of new material chemistries. Orbital radii calculated from model pseudopotential fits to spectroscopic data are potential candidates to satisfy these conditions. Although these radii (and their linear combinations) have been utilized in the past, their functional forms are largely justified with heuristic arguments. Here we show that machine learning methods naturally uncover the functional forms that mimic most frequently used features in the literature, thereby providing a mathematical basis for feature set construction without a priori assumptions. We apply these principles to study two broad materials classes: (i) wide band gap AB compounds and (ii) rare earth-main group RM intermetallics. The AB compounds serve as a prototypical example to demonstrate our approach, whereas the RM intermetallics show how these concepts can be used to rapidly design new ductile materials. In conclusion, our predictive models indicate that ScCo, ScIr, and YCd should be ductile, whereas each was previously proposed to be brittle.

  20. Recurrence formulas for fully exponentially correlated four-body wave functions

    NASA Astrophysics Data System (ADS)

    Harris, Frank E.

    2009-03-01

    Formulas are presented for the recursive generation of four-body integrals in which the integrand consists of arbitrary integer powers (≥-1) of all the interparticle distances rij , multiplied by an exponential containing an arbitrary linear combination of all the rij . These integrals are generalizations of those encountered using Hylleraas basis functions and include all that are needed to make energy computations on the Li atom and other four-body systems with a fully exponentially correlated Slater-type basis of arbitrary quantum numbers. The only quantities needed to start the recursion are the basic four-body integral first evaluated by Fromm and Hill plus some easily evaluated three-body “boundary” integrals. The computational labor in constructing integral sets for practical computations is less than when the integrals are generated using explicit formulas obtained by differentiating the basic integral with respect to its parameters. Computations are facilitated by using a symbolic algebra program (MAPLE) to compute array index pointers and present syntactically correct FORTRAN source code as output; in this way it is possible to obtain error-free high-speed evaluations with minimal effort. The work can be checked by verifying sum rules the integrals must satisfy.

  1. Materials Prediction via Classification Learning

    PubMed Central

    Balachandran, Prasanna V.; Theiler, James; Rondinelli, James M.; Lookman, Turab

    2015-01-01

    In the paradigm of materials informatics for accelerated materials discovery, the choice of feature set (i.e. attributes that capture aspects of structure, chemistry and/or bonding) is critical. Ideally, the feature sets should provide a simple physical basis for extracting major structural and chemical trends and furthermore, enable rapid predictions of new material chemistries. Orbital radii calculated from model pseudopotential fits to spectroscopic data are potential candidates to satisfy these conditions. Although these radii (and their linear combinations) have been utilized in the past, their functional forms are largely justified with heuristic arguments. Here we show that machine learning methods naturally uncover the functional forms that mimic most frequently used features in the literature, thereby providing a mathematical basis for feature set construction without a priori assumptions. We apply these principles to study two broad materials classes: (i) wide band gap AB compounds and (ii) rare earth-main group RM intermetallics. The AB compounds serve as a prototypical example to demonstrate our approach, whereas the RM intermetallics show how these concepts can be used to rapidly design new ductile materials. Our predictive models indicate that ScCo, ScIr, and YCd should be ductile, whereas each was previously proposed to be brittle. PMID:26304800

  2. Molecular structure, vibrational spectra, NLO and MEP analysis of bis[2-hydroxy-кO-N-(2-pyridyl)-1-naphthaldiminato-кN]zinc(II)

    NASA Astrophysics Data System (ADS)

    Tanak, Hasan; Toy, Mehmet

    2013-11-01

    The molecular geometry and vibrational frequencies of bis[2-hydroxy-кO-N-(2-pyridyl)-1-naphthaldiminato-кN]zinc(II) in the ground state have been calculated by using the Hartree-Fock (HF) and density functional method (B3LYP) with 6-311G(d,p) basis set. The results of the optimized molecular structure are presented and compared with the experimental X-ray diffraction. The energetic and atomic charge behavior of the title compound in solvent media has been examined by applying the Onsager and the polarizable continuum model. To investigate second order nonlinear optical properties of the title compound, the electric dipole (μ), linear polarizability (α) and first-order hyperpolarizability (β) were computed using the density functional B3LYP and CAM-B3LYP methods with the 6-31+G(d) basis set. According to our calculations, the title compound exhibits nonzero (β) value revealing second order NLO behavior. In addition, DFT calculations of the title compound, molecular electrostatic potential (MEP), frontier molecular orbitals, and thermodynamic properties were performed at B3LYP/6-311G(d,p) level of theory.

  3. Proper Orthogonal Decomposition in Optimal Control of Fluids

    NASA Technical Reports Server (NTRS)

    Ravindran, S. S.

    1999-01-01

    In this article, we present a reduced order modeling approach suitable for active control of fluid dynamical systems based on proper orthogonal decomposition (POD). The rationale behind the reduced order modeling is that numerical simulation of Navier-Stokes equations is still too costly for the purpose of optimization and control of unsteady flows. We examine the possibility of obtaining reduced order models that reduce computational complexity associated with the Navier-Stokes equations while capturing the essential dynamics by using the POD. The POD allows extraction of certain optimal set of basis functions, perhaps few, from a computational or experimental data-base through an eigenvalue analysis. The solution is then obtained as a linear combination of these optimal set of basis functions by means of Galerkin projection. This makes it attractive for optimal control and estimation of systems governed by partial differential equations. We here use it in active control of fluid flows governed by the Navier-Stokes equations. We show that the resulting reduced order model can be very efficient for the computations of optimization and control problems in unsteady flows. Finally, implementational issues and numerical experiments are presented for simulations and optimal control of fluid flow through channels.
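
The POD/Galerkin procedure (extract an optimal basis from a snapshot database via a singular value analysis, then expand the solution as a linear combination of the retained modes) can be sketched in a few lines. This uses synthetic snapshot data standing in for CFD output, purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Snapshot matrix: each column is one flow-field snapshot. Here the data are
# synthetic, built from 3 underlying spatial modes.
n_points, n_snaps = 200, 40
x = np.linspace(0, 1, n_points)
modes_true = np.stack([np.sin((k + 1) * np.pi * x) for k in range(3)], axis=1)
coeffs = rng.normal(size=(3, n_snaps))
snapshots = modes_true @ coeffs

# POD basis = left singular vectors of the snapshot matrix.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.999) + 1)   # retain 99.9% of the energy
basis = U[:, :r]

# Galerkin projection: a snapshot is a linear combination of the POD modes.
a = basis.T @ snapshots[:, 0]
reconstruction = basis @ a
err = np.linalg.norm(reconstruction - snapshots[:, 0])
print(f"retained modes: {r}, reconstruction error: {err:.2e}")
```

In a reduced order model, the Navier-Stokes equations would then be projected onto this small basis, leaving an ODE system in the coefficients `a` instead of the full grid unknowns.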

  4. Isogeometric analysis of free-form Timoshenko curved beams including the nonlinear effects of large deformations

    NASA Astrophysics Data System (ADS)

    Hosseini, Seyed Farhad; Hashemian, Ali; Moetakef-Imani, Behnam; Hadidimoud, Saied

    2018-03-01

    In the present paper, the isogeometric analysis (IGA) of free-form planar curved beams is formulated based on the nonlinear Timoshenko beam theory to investigate the large deformation of beams with variable curvature. Based on the isoparametric concept, the shape functions of the field variables (displacement and rotation) in the finite element analysis are taken to be the same non-uniform rational basis spline (NURBS) basis functions that define the geometry. The validity of the presented formulation is tested in five case studies covering a wide range of engineering curved structures, from straight and constant-curvature to variable-curvature beams. The nonlinear deformation results obtained by the presented method are compared to well-established benchmark examples and to the results of linear and nonlinear finite element analyses. As the nonlinear load-deflection behavior of Timoshenko beams is the main topic of this article, the results strongly show the applicability of the IGA method to the large-deformation analysis of free-form curved beams. Finally, it is interesting to note that, until very recently, the large-deformation analysis of free-form Timoshenko curved beams had not been considered in IGA by researchers.
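
The isoparametric idea above, using the same NURBS basis for the geometry and the field variables, rests on the rational B-spline basis. A minimal sketch of its evaluation via the Cox-de Boor recursion (the knot vector and weights are illustrative choices, not taken from the paper):

```python
def bspline_basis(i, p, u, knots):
    """Cox-de Boor recursion for the i-th B-spline basis of degree p at u."""
    if p == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    left = 0.0
    if knots[i + p] != knots[i]:
        left = ((u - knots[i]) / (knots[i + p] - knots[i])
                * bspline_basis(i, p - 1, u, knots))
    right = 0.0
    if knots[i + p + 1] != knots[i + 1]:
        right = ((knots[i + p + 1] - u) / (knots[i + p + 1] - knots[i + 1])
                 * bspline_basis(i + 1, p - 1, u, knots))
    return left + right

def nurbs_basis(u, p, knots, weights):
    """Rational (NURBS) basis: weighted B-splines normalised to sum to one."""
    raw = [w * bspline_basis(i, p, u, knots) for i, w in enumerate(weights)]
    total = sum(raw)
    return [r / total for r in raw]

# Quadratic example on an open knot vector with non-uniform weights.
knots = [0, 0, 0, 0.5, 1, 1, 1]
weights = [1.0, 0.8, 1.2, 1.0]
vals = nurbs_basis(0.3, 2, knots, weights)
print(vals, sum(vals))
```

In IGA these same functions interpolate displacement and rotation, so refinement of the geometry and of the analysis space happen together.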

  5. A diagnostic analysis of the VVP single-doppler retrieval technique

    NASA Technical Reports Server (NTRS)

    Boccippio, Dennis J.

    1995-01-01

    A diagnostic analysis of the VVP (volume velocity processing) retrieval method is presented, with emphasis on understanding the technique as a linear, multivariate regression. Similarities and differences to the velocity-azimuth display and extended velocity-azimuth display retrieval techniques are discussed, using this framework. Conventional regression diagnostics are then employed to quantitatively determine situations in which the VVP technique is likely to fail. An algorithm for preparation and analysis of a robust VVP retrieval is developed and applied to synthetic and actual datasets with high temporal and spatial resolution. A fundamental (but quantifiable) limitation to some forms of VVP analysis is inadequate sampling dispersion in the n space of the multivariate regression, manifest as a collinearity between the basis functions of some fitted parameters. Such collinearity may be present either in the definition of these basis functions or in their realization in a given sampling configuration. This nonorthogonality may cause numerical instability, variance inflation (decrease in robustness), and increased sensitivity to bias from neglected wind components. It is shown that these effects prevent the application of VVP to small azimuthal sectors of data. The behavior of the VVP regression is further diagnosed over a wide range of sampling constraints, and reasonable sector limits are established.
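
The collinearity failure mode described above can be illustrated with the condition number of the design matrix: over a full azimuthal sweep the basis functions stay well separated, while over a narrow sector they become nearly collinear and the regression loses robustness. A sketch under simplified assumptions (a three-term harmonic basis standing in for the full VVP model):

```python
import numpy as np

def condition_number(az_deg):
    """Condition number of a simple harmonic basis (1, sin, cos) sampled
    over the given azimuths; a standard collinearity diagnostic."""
    az = np.radians(az_deg)
    X = np.column_stack([np.ones_like(az), np.sin(az), np.cos(az)])
    return np.linalg.cond(X)

full_sweep = np.linspace(0, 360, 72, endpoint=False)   # well-conditioned
narrow_sector = np.linspace(0, 30, 72)                 # nearly collinear

c_full = condition_number(full_sweep)
c_narrow = condition_number(narrow_sector)
print(f"full sweep: {c_full:.1f}, 30-degree sector: {c_narrow:.1f}")
```

A large condition number signals variance inflation and sensitivity to bias from neglected wind components, which is why small azimuthal sectors defeat the retrieval.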

  6. Ab initio calculations of optical properties of silver clusters: cross-over from molecular to nanoscale behavior

    NASA Astrophysics Data System (ADS)

    Titantah, John T.; Karttunen, Mikko

    2016-05-01

    Electronic and optical properties of silver clusters were calculated using two different ab initio approaches: (1) based on all-electron full-potential linearized-augmented plane-wave method and (2) local basis function pseudopotential approach. Agreement is found between the two methods for small and intermediate sized clusters for which the former method is limited due to its all-electron formulation. The latter, due to non-periodic boundary conditions, is the more natural approach to simulate small clusters. The effect of cluster size is then explored using the local basis function approach. We find that as the cluster size increases, the electronic structure undergoes a transition from molecular behavior to nanoparticle behavior at a cluster size of 140 atoms (diameter ~1.7 nm). Above this cluster size the step-like electronic structure, evident as several features in the imaginary part of the polarizability of all clusters smaller than Ag147, gives way to a dominant plasmon peak localized at wavelengths 350 nm ≤ λ ≤ 600 nm. It is, thus, at this length-scale that the conduction electrons' collective oscillations that are responsible for plasmonic resonances begin to dominate the opto-electronic properties of silver nanoclusters.

  7. Parallel Higher-order Finite Element Method for Accurate Field Computations in Wakefield and PIC Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Candel, A.; Kabel, A.; Lee, L.

    Over the past years, SLAC's Advanced Computations Department (ACD), under SciDAC sponsorship, has developed a suite of 3D (2D) parallel higher-order finite element (FE) codes, T3P (T2P) and Pic3P (Pic2P), aimed at accurate, large-scale simulation of wakefields and particle-field interactions in radio-frequency (RF) cavities of complex shape. The codes are built on the FE infrastructure that supports SLAC's frequency domain codes, Omega3P and S3P, to utilize conformal tetrahedral (triangular) meshes, higher-order basis functions and quadratic geometry approximation. For time integration, they adopt an unconditionally stable implicit scheme. Pic3P (Pic2P) extends T3P (T2P) to treat charged-particle dynamics self-consistently using the PIC (particle-in-cell) approach, the first such implementation on a conformal, unstructured grid using Whitney basis functions. Examples from applications to the International Linear Collider (ILC), Positron Electron Project-II (PEP-II), Linac Coherent Light Source (LCLS) and other accelerators will be presented to compare the accuracy and computational efficiency of these codes versus their counterparts using structured grids.

  8. The molecular gradient using the divide-expand-consolidate resolution of the identity second-order Møller-Plesset perturbation theory: The DEC-RI-MP2 gradient

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bykov, Dmytro; Kristensen, Kasper; Kjærgaard, Thomas

    We report an implementation of the molecular gradient using the divide-expand-consolidate resolution of the identity second-order Møller-Plesset perturbation theory (DEC-RI-MP2). The new DEC-RI-MP2 gradient method combines the precision control as well as the linear-scaling and massively parallel features of the DEC scheme with efficient evaluations of the gradient contributions using the RI approximation. We further demonstrate that the DEC-RI-MP2 gradient method is capable of calculating molecular gradients for very large molecular systems. A test set of supramolecular complexes containing up to 158 atoms and 1960 contracted basis functions has been employed to demonstrate the general applicability of the DEC-RI-MP2 method and to analyze the errors of the DEC approximation. Moreover, the test set contains molecules of complicated electronic structures and is thus deliberately chosen to stress test the DEC-RI-MP2 gradient implementation. Additionally, as a showcase example the full molecular gradient for insulin (787 atoms and 7604 contracted basis functions) has been evaluated.

  9. Spatial Bayesian latent factor regression modeling of coordinate-based meta-analysis data.

    PubMed

    Montagna, Silvia; Wager, Tor; Barrett, Lisa Feldman; Johnson, Timothy D; Nichols, Thomas E

    2018-03-01

    Now over 20 years old, functional MRI (fMRI) has a large and growing literature that is best synthesised with meta-analytic tools. As most authors do not share image data, only the peak activation coordinates (foci) reported in the article are available for Coordinate-Based Meta-Analysis (CBMA). Neuroimaging meta-analysis is used to (i) identify areas of consistent activation; and (ii) build a predictive model of task type or cognitive process for new studies (reverse inference). To simultaneously address these aims, we propose a Bayesian point process hierarchical model for CBMA. We model the foci from each study as a doubly stochastic Poisson process, where the study-specific log intensity function is characterized as a linear combination of a high-dimensional basis set. A sparse representation of the intensities is guaranteed through latent factor modeling of the basis coefficients. Within our framework, it is also possible to account for the effect of study-level covariates (meta-regression), significantly expanding the capabilities of the current neuroimaging meta-analysis methods available. We apply our methodology to synthetic data and neuroimaging meta-analysis datasets. © 2017, The International Biometric Society.

  10. A coarse-grid-projection acceleration method for finite-element incompressible flow computations

    NASA Astrophysics Data System (ADS)

    Kashefi, Ali; Staples, Anne; FiN Lab Team

    2015-11-01

    Coarse grid projection (CGP) methodology provides a framework for accelerating computations by performing part of the computation on a coarsened grid. We apply CGP to pressure projection methods for finite-element-based incompressible flow simulations. In this approach, the predicted velocity field is restricted to a coarsened grid, the pressure is determined by solving the Poisson equation on the coarse grid, and the resulting data are prolonged to the preset fine grid. The contributions of the CGP method to the pressure-correction technique are twofold: first, it substantially lessens the computational cost devoted to the Poisson equation, which is the most time-consuming part of the simulation process; second, it preserves the accuracy of the velocity field. The velocity and pressure spaces are approximated by Galerkin spectral elements using piecewise-linear basis functions. A restriction operator is designed so that fine data are directly injected into the coarse grid. The Laplacian and divergence matrices are derived by taking inner products of coarse-grid shape functions. Linear interpolation is implemented to construct a prolongation operator. A study of the data accuracy and the CPU time for the CGP-based versus non-CGP computations is presented. Laboratory for Fluid Dynamics in Nature.

  11. First-principles calculations on the four phases of BaTiO3.

    PubMed

    Evarestov, Robert A; Bandura, Andrei V

    2012-04-30

    The calculations based on linear combination of atomic orbitals basis functions as implemented in the CRYSTAL09 computer code have been performed for the cubic, tetragonal, orthorhombic, and rhombohedral modifications of BaTiO3 crystal. Structural and electronic properties as well as phonon frequencies were obtained using local density approximation, generalized gradient approximation, and hybrid exchange-correlation density functional theory (DFT) functionals for the four stable phases of BaTiO3. A comparison was made between the results of the different DFT techniques. It is concluded that the hybrid PBE0 functional [J. P. Perdew, K. Burke, M. Ernzerhof, J. Chem. Phys. 1996, 105, 9982] is able to predict correctly the structural stability and phonon properties for both the cubic and the ferroelectric phases of BaTiO3. A comparative phonon symmetry analysis of the four phases of BaTiO3 has been made for the first time, based on site symmetry and irreducible representation indexes. Copyright © 2012 Wiley Periodicals, Inc.

  12. Linear-scaling time-dependent density-functional theory beyond the Tamm-Dancoff approximation: Obtaining efficiency and accuracy with in situ optimised local orbitals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zuehlsdorff, T. J., E-mail: tjz21@cam.ac.uk; Payne, M. C.; Hine, N. D. M.

    2015-11-28

    We present a solution of the full time-dependent density-functional theory (TDDFT) eigenvalue equation in the linear response formalism exhibiting a linear-scaling computational complexity with system size, without relying on the simplifying Tamm-Dancoff approximation (TDA). The implementation relies on representing the occupied and unoccupied subspaces with two different sets of in situ optimised localised functions, yielding a very compact and efficient representation of the transition density matrix of the excitation with the accuracy associated with a systematic basis set. The TDDFT eigenvalue equation is solved using a preconditioned conjugate gradient algorithm that is very memory-efficient. The algorithm is validated on a small test molecule and a good agreement with results obtained from standard quantum chemistry packages is found, with the preconditioner yielding a significant improvement in convergence rates. The method developed in this work is then used to reproduce experimental results of the absorption spectrum of bacteriochlorophyll in an organic solvent, where it is demonstrated that the TDA fails to reproduce the main features of the low energy spectrum, while the full TDDFT equation yields results in good qualitative agreement with experimental data. Furthermore, the need for explicitly including parts of the solvent into the TDDFT calculations is highlighted, necessitating the treatment of large system sizes, which are well within reach of the capabilities of the algorithm introduced here. Finally, the linear-scaling properties of the algorithm are demonstrated by computing the lowest excitation energy of bacteriochlorophyll in solution. The largest systems considered in this work are of the same order of magnitude as a variety of widely studied pigment-protein complexes, opening up the possibility of studying their properties without having to resort to any semiclassical approximations to parts of the protein environment.

  13. Semiempirical Theories of the Affinities of Negative Atomic Ions

    NASA Technical Reports Server (NTRS)

    Edie, John W.

    1961-01-01

    The determination of the electron affinities of negative atomic ions by direct experimental investigation is limited. To supplement the meager experimental results, several semiempirical theories have been advanced. One commonly used technique involves extrapolating the electron affinities along the isoelectronic sequences. The most recent of these extrapolations is studied by extending the method to include one more member of the isoelectronic sequence. When the results show that this extension does not increase the accuracy of the calculations, several possible explanations for this situation are explored. A different approach to the problem is suggested by the regularities appearing in the electron affinities. Noting that the regular linear pattern that exists for the ionization potentials of the p electrons as a function of Z repeats itself for different degrees of ionization q, the slopes and intercepts of these curves are extrapolated to the case of the negative ion. The method is placed on a theoretical basis by calculating the Slater parameters as functions of q and n, the number of equivalent p electrons; these functions are no more than quadratic in q and n. The electron affinities are calculated by extending the linear relations that exist for the neutral atoms and positive ions to the negative ions. The extrapolated slopes are apparently correct, but the intercepts must be slightly altered to agree with experiment. For this purpose one or two experimental affinities (depending on the extrapolation method) are used in each of the two short periods. The two extrapolation methods used are: (A) an isoelectronic-sequence extrapolation of the linear pattern as such; (B) the same extrapolation of a linearization of this pattern (configuration centers) combined with an extrapolation of the other terms of the ground configurations. The latter method is preferable, since it requires only one experimental point for each period. The results agree within experimental error with all data, except with the most recent value of C, which lies 10% lower.

  14. Development of Activity-based Cost Functions for Cellulase, Invertase, and Other Enzymes

    NASA Astrophysics Data System (ADS)

    Stowers, Chris C.; Ferguson, Elizabeth M.; Tanner, Robert D.

    As enzyme chemistry plays an increasingly important role in the chemical industry, cost analysis of these enzymes becomes a necessity. In this paper, we examine the aspects that affect the cost of enzymes based upon enzyme activity. The basis for this study stems from a previously developed objective function that quantifies the tradeoffs in enzyme purification via the foam fractionation process (Cherry et al., Braz J Chem Eng 17:233-238, 2000). A generalized cost function is developed from our results that could be used to aid in both industrial and lab scale chemical processing. The generalized cost function shows several nonobvious results that could lead to significant savings. Additionally, the parameters involved in the operation and scaling up of enzyme processing could be optimized to minimize costs. We show that there are typically three regimes in the enzyme cost analysis function: the low activity prelinear region, the moderate activity linear region, and high activity power-law region. The overall form of the cost analysis function appears to robustly fit the power law form.
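
The three-regime shape described above can be captured by a simple piecewise model. A sketch with entirely hypothetical parameters (the base cost `c0`, linear rate `k`, breakpoints `a1`/`a2`, and exponent `alpha` are illustrative, not fitted values from the paper):

```python
def enzyme_cost(activity, c0=2.0, k=0.5, a1=10.0, a2=100.0, alpha=1.6):
    """Illustrative three-regime cost model (hypothetical parameters):
    a nearly flat pre-linear region, a linear region, and a power-law region.
    The pieces are matched at the breakpoints so the function is continuous."""
    if activity <= a1:                       # low activity: near-fixed cost
        return c0 + 0.1 * k * activity
    if activity <= a2:                       # moderate activity: linear
        return enzyme_cost(a1) + k * (activity - a1)
    return enzyme_cost(a2) * (activity / a2) ** alpha   # high: power law

for a in (5, 50, 500):
    print(a, round(enzyme_cost(a), 2))
```

A model of this shape makes the optimization tradeoff explicit: pushing purity into the power-law regime raises cost much faster than operating in the linear regime.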

  15. Self-organizing radial basis function networks for adaptive flight control and aircraft engine state estimation

    NASA Astrophysics Data System (ADS)

    Shankar, Praveen

    The performance of nonlinear control algorithms such as feedback linearization and dynamic inversion is heavily dependent on the fidelity of the dynamic model being inverted. Incomplete or incorrect knowledge of the dynamics results in reduced performance and may lead to instability. Augmenting the baseline controller with approximators that use a parametrization structure adapted online reduces the effect of this error between the design model and the actual dynamics. However, currently existing parametrizations employ a fixed set of basis functions that do not guarantee arbitrary tracking-error performance. To address this problem, we develop a self-organizing parametrization structure that is proven to be stable and can guarantee arbitrary tracking-error performance. The training algorithm to grow the network and adapt the parameters is derived from Lyapunov theory. In addition to growing the network of basis functions, a pruning strategy is incorporated to keep the size of the network as small as possible. This algorithm is implemented on a high-performance flight vehicle such as the F-15 military aircraft. The baseline dynamic inversion controller is augmented with a Self-Organizing Radial Basis Function Network (SORBFN) to minimize the effect of the inversion error, which may occur due to imperfect modeling, approximate inversion, or sudden changes in aircraft dynamics. The dynamic inversion controller is simulated for different situations, including control surface failures, modeling errors, and external disturbances, with and without the adaptive network. A performance measure of maximum tracking error is specified for both controllers a priori. Excellent tracking-error minimization to a pre-specified level was achieved using the adaptive approximation-based controller, while the baseline dynamic inversion controller failed to meet this performance specification.
    The performance of the SORBFN-based controller is also compared to a fixed RBF network based adaptive controller. While the fixed-network controller, which is tuned to compensate for control surface failures, fails to achieve the same performance under modeling uncertainty and disturbances, the SORBFN achieves good tracking convergence under all error conditions.
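
The growing-network idea can be sketched in a few lines: add a Gaussian center wherever the current network's prediction error exceeds a tolerance, then refit the linear output weights. This toy version uses illustrative thresholds and a plain least-squares refit; it omits the Lyapunov-based adaptation and the pruning strategy of the actual SORBFN:

```python
import numpy as np

class GrowingRBFNetwork:
    """Minimal sketch of a self-organizing RBF approximator: a new Gaussian
    center is added whenever the prediction error at a sample exceeds a
    threshold; output weights are then refit by linear least squares."""

    def __init__(self, width=0.2, err_tol=0.05):
        self.width, self.err_tol = width, err_tol
        self.centers, self.weights = [], np.zeros(0)

    def _phi(self, x):
        c = np.array(self.centers)
        return np.exp(-((x - c) ** 2) / (2 * self.width ** 2))

    def predict(self, x):
        if not self.centers:
            return 0.0
        return float(self._phi(x) @ self.weights)

    def fit(self, xs, ys):
        for x, y in zip(xs, ys):
            if abs(self.predict(x) - y) > self.err_tol:
                self.centers.append(x)          # grow the network
                Phi = np.stack([self._phi(xi) for xi in xs])
                self.weights, *_ = np.linalg.lstsq(Phi, ys, rcond=None)

rng = np.random.default_rng(1)
xs = np.sort(rng.uniform(0, 1, 60))
ys = np.sin(2 * np.pi * xs)                    # target function to approximate
net = GrowingRBFNetwork()
net.fit(xs, ys)
resid = max(abs(net.predict(x) - y) for x, y in zip(xs, ys))
print(len(net.centers), f"max residual {resid:.3f}")
```

In the flight-control setting, the target would be the inversion error rather than a known function, and the weight update would follow the Lyapunov-derived adaptation law.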

  16. Computing Gröbner Bases within Linear Algebra

    NASA Astrophysics Data System (ADS)

    Suzuki, Akira

    In this paper, we present an alternative algorithm to compute Gröbner bases, based on computations in sparse linear algebra. Both S-polynomial computations and monomial reductions are carried out simultaneously as linear algebra in this algorithm, so it can be implemented in any computational system that can handle linear algebra. For a given ideal in a polynomial ring, it calculates a Gröbner basis along with a suitable corresponding term order.
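
For comparison, a computer algebra system can show what such a computation produces. The example below uses SymPy's `groebner` (which implements Buchberger-style algorithms, not the linear-algebra method of this paper) on a toy ideal:

```python
from sympy import groebner, symbols

x, y = symbols('x y')

# Toy ideal: the circle x^2 + y^2 = 1 intersected with the line x = y.
# With lexicographic order x > y, the reduced Groebner basis triangularizes
# the system, much like Gaussian elimination does for linear equations.
G = groebner([x**2 + y**2 - 1, x - y], x, y, order='lex')
print(G.exprs)
```

The triangular structure of the output is what makes Gröbner bases useful for solving polynomial systems, and it is the analogy with row reduction that the linear-algebra formulation exploits.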

  17. Boronlectin/Polyelectrolyte Ensembles as Artificial Tongue: Design, Construction, and Application for Discriminative Sensing of Complex Glycoconjugates from Panax ginseng.

    PubMed

    Zhang, Xiao-Tai; Wang, Shu; Xing, Guo-Wen

    2017-02-01

    Ginsenoside is a large family of triterpenoid saponins from Panax ginseng, which possesses various important biological functions. Due to the very similar structures of these complex glycoconjugates, it is crucial to develop a powerful analytic method to identify ginsenosides qualitatively or quantitatively. We herein report an eight-channel fluorescent sensor array as artificial tongue to achieve the discriminative sensing of ginsenosides. The fluorescent cross-responsive array was constructed by four boronlectins bearing flexible boronic acid moieties (FBAs) with multiple reactive sites and two linear poly(phenylene-ethynylene) (PPEs). An "on-off-on" response pattern was afforded on the basis of superquenching of fluorescent indicator PPEs and an analyte-induced allosteric indicator displacement (AID) process. Most importantly, it was found that the canonical distribution of ginsenoside data points analyzed by linear discriminant analysis (LDA) was highly correlated with the inherent molecular structures of the analytes, and the absence of overlaps among the five point groups reflected the effectiveness of the sensor array in the discrimination process. Almost all of the unknown ginsenoside samples at different concentrations were correctly identified on the basis of the established mathematical model. Our current work provided a general and constructive method to improve the quality assessment and control of ginseng and its extracts, which are useful and helpful for further discriminating other complex glycoconjugate families.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alam, Aftab; Khan, Suffian N.; Smirnov, A. V.

    Korringa-Kohn-Rostoker (KKR) Green's function multiple-scattering theory is an efficient site-centered electronic-structure technique for addressing an assembly of N scatterers. Wave functions are expanded in a spherical-wave basis on each scattering center and indexed up to a maximum orbital and azimuthal number L_max = (l,m)_max, while scattering matrices, which determine spectral properties, are truncated at L_tr = (l,m)_tr, where phase shifts δ_l for l > l_tr are negligible. Historically, L_max is set equal to L_tr, which is correct for large enough L_max but not computationally expedient; a better procedure retains higher-order (free-electron and single-site) contributions for L_max > L_tr with δ_l for l > l_tr set to zero [Zhang and Butler, Phys. Rev. B 46, 7433]. We present a numerically efficient and accurate augmented-KKR Green's function formalism that solves the KKR equations by exact matrix inversion (a cubic-scaling process on matrices of rank N(l_tr + 1)^2) and includes higher-L contributions via linear algebra (a quadratic-scaling process on matrices of rank N(l_max + 1)^2). The augmented-KKR approach yields properly normalized wave functions, numerically cheaper basis-set convergence, and a total charge density and electron count that agree with Lloyd's formula. We apply our formalism to fcc Cu, bcc Fe, and L1_0 CoPt, and present numerical results for the accuracy and convergence of the total energies, Fermi energies, and magnetic moments versus L_max for a given L_tr.

  19. The accuracy of the Gaussian-and-finite-element-Coulomb (GFC) method for the calculation of Coulomb integrals.

    PubMed

    Przybytek, Michal; Helgaker, Trygve

    2013-08-07

    We analyze the accuracy of the Coulomb energy calculated using the Gaussian-and-finite-element-Coulomb (GFC) method. In this approach, the electrostatic potential associated with the molecular electronic density is obtained by solving the Poisson equation and then used to calculate matrix elements of the Coulomb operator. The molecular electrostatic potential is expanded in a mixed Gaussian-finite-element (GF) basis set consisting of Gaussian functions of s symmetry centered on the nuclei (with exponents obtained from a full optimization of the atomic potentials generated by the atomic densities from symmetry-averaged restricted open-shell Hartree-Fock theory) and shape functions defined on uniform finite elements. The quality of the GF basis is controlled by means of a small set of parameters; for a given width of the finite elements d, the highest accuracy is achieved at smallest computational cost when tricubic (n = 3) elements are used in combination with two (γ(H) = 2) and eight (γ(1st) = 8) Gaussians on hydrogen and first-row atoms, respectively, with exponents greater than a given threshold (α_min^(G) = 0.5). The error in the calculated Coulomb energy divided by the number of atoms in the system depends on the system type but is independent of the system size or the orbital basis set, vanishing approximately like d^4 with decreasing d. If the boundary conditions for the Poisson equation are calculated in an approximate way, the GFC method may lose its variational character when the finite elements are too small; with larger elements, it is less sensitive to inaccuracies in the boundary values. As it is possible to obtain accurate boundary conditions in linear time, the overall scaling of the GFC method for large systems is governed by another computational step, namely the generation of the three-center overlap integrals with three Gaussian orbitals.
The most unfavorable (nearly quadratic) scaling is observed for compact, truly three-dimensional systems; however, this scaling can be reduced to linear by introducing more effective techniques for recognizing significant three-center overlap distributions.

  20. Tensor products of U'_q(ŝl(2))-modules and the big q²-Jacobi function transform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gade, R. M.

    2013-01-15

    Four tensor products of evaluation modules of the quantum affine algebra U'_q(ŝl(2)) obtained from the negative and positive series, the complementary and the strange series representations are investigated. Linear operators R(z) satisfying the intertwining property on finite linear combinations of the canonical basis elements of the tensor products are described in terms of two sets of infinite sums {τ^(r,t)} and {τ̃^(r,t)}, r,t ∈ Z≥0, involving big q²-Jacobi functions or related nonterminating basic hypergeometric series. Inhomogeneous recurrence relations can be derived for both sets. Evaluations of the simplest sums provide the corresponding initial conditions. For the first set of sums the relations entail a big q²-Jacobi function transform pair. An integral decomposition is obtained for the sum τ^(r,t). A partial description of the relation between the decompositions of the tensor products with respect to U_q(sl(2)) or with respect to its complement in U'_q(ŝl(2)) can be formulated in terms of Askey-Wilson function transforms. For a particular combination of two tensor products, the occurrence of proper U'_q(ŝl(2))-submodules is discussed.

  1. The relationship between perceived discomfort of static posture holding and posture holding time.

    PubMed

    Ogutu, Jack; Park, Woojin

    2015-01-01

    Few studies have investigated mathematical characteristics of the discomfort-time relationship during prolonged static posture holding (SPH) on an individual basis. Consequently, the discomfort-time relationship is not clearly understood at the individual trial level. The objective of this study was to examine discomfort-time sequence data obtained from a large number of maximum-duration SPH trials to understand the perceived discomfort-posture holding time relationship at the individual SPH trial level. Thirty subjects (15 male, 15 female) participated in this study as paid volunteers. The subjects performed maximum-duration SPH trials employing 12 different whole-body static postures. The hand-held load for all the task trials was a "generic" box weighing 2 kg. Three mathematical functions (linear, logarithmic, and power) were examined as possible mathematical models for representing individual discomfort-time profiles of SPH trials. Three different time-increase patterns (negatively accelerated, linear, and positively accelerated) were observed in the discomfort-time sequence data. The power function model with an additive constant term was found to adequately fit most (96.4%) of the observed discomfort-time sequences, and thus was recommended as a general mathematical representation of the perceived discomfort-posture holding time relationship in SPH. The new knowledge on the nature of the discomfort-time relationship in SPH and the power function representation found in this study will facilitate analyzing discomfort-time data of SPH and developing future posture analysis tools for work-related discomfort control.
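    The power-function model with an additive constant term can be sketched as a non-linear least-squares fit. The data, parameter values, and function name below are invented for illustration, not taken from the study:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical discomfort-time model: a power function with an additive
# constant, D(t) = c + a * t**b.
def discomfort(t, a, b, c):
    return c + a * t**b

# Synthetic discomfort ratings over a 300 s holding trial (invented data).
t = np.linspace(1.0, 300.0, 60)
rng = np.random.default_rng(0)
d_obs = discomfort(t, 0.8, 0.6, 1.0) + rng.normal(0.0, 0.05, t.size)

popt, _ = curve_fit(discomfort, t, d_obs, p0=(1.0, 0.5, 0.5))
a_fit, b_fit, c_fit = popt
```

With real SPH trial data, the fitted exponent distinguishes the three observed patterns: negatively accelerated (b < 1), roughly linear (b ≈ 1), and positively accelerated (b > 1).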

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kimpe, T; Marchessoux, C; Rostang, J

    Purpose: Use of color images in medical imaging has increased significantly in the last few years. As of today there is no agreed standard on how color information should be visualized on medical color displays, resulting in large variability of color appearance and making consistency and quality assurance a challenge. This paper presents a proposal for an extension of the DICOM GSDF towards color. Methods: Visualization needs for several color modalities (multimodality imaging, nuclear medicine, digital pathology, quantitative imaging applications…) have been studied. On this basis a proposal was made for the desired color behavior of color medical display systems, and its behavior and effect on color medical images was analyzed. Results: Several medical color modalities could benefit from perceptually linear color visualization for similar reasons as why the GSDF was put in place for greyscale medical images. An extension of the GSDF (Greyscale Standard Display Function) to color is proposed: CSDF (Color Standard Display Function). CSDF is based on deltaE2000 and offers a perceptually linear color behavior. CSDF uses the GSDF as its neutral grey behavior. A comparison between sRGB/GSDF and CSDF confirms that CSDF significantly improves perceptual color linearity. Furthermore, results also indicate that because of the improved perceptual linearity, CSDF has the potential to increase the perceived contrast of clinically relevant color features. Conclusion: There is a need for an extension of the GSDF towards color visualization in order to guarantee consistency and quality. A first proposal (CSDF) for such an extension has been made. The behavior of a CSDF-calibrated display has been characterized and compared with sRGB/GSDF behavior. First results indicate that CSDF could have a positive influence on the perceived contrast of clinically relevant color features and could offer benefits for quantitative imaging applications. Authors are employees of Barco Healthcare.

  3. Diffusive sensitivity to muscle architecture: a magnetic resonance diffusion tensor imaging study of the human calf.

    PubMed

    Galbán, Craig J; Maderwald, Stefan; Uffmann, Kai; de Greiff, Armin; Ladd, Mark E

    2004-12-01

    The aim of this study was to examine the diffusive properties of adjacent muscles at rest, and to determine the relationship between diffusive and architectural properties, which are task-specific to muscles. The principal, second, and third eigenvalues, the trace of the diffusion tensor, and two anisotropy parameters, ellipsoid eccentricity (e) and fractional anisotropy (FA), of various muscles in the human calf were calculated by diffusion tensor imaging (DTI). Linear correlations of the calculated parameters to the muscle physiological cross-sectional area (PCSA), which is proportional to maximum muscle force, were performed to ascertain any linear relation between muscle architecture and diffusivity. Images of the left calf were acquired from six healthy male volunteers. Seven muscles were investigated in this study. These comprised the soleus, lateral gastrocnemius, medial gastrocnemius, posterior tibialis, anterior tibialis, extensor digitorum longus, and peroneus longus. All data were presented as the mean and standard error of the mean (SEM). In general, differences in diffusive parameter values occurred primarily between functionally different muscles. A strong correlation was also found between PCSA and the third eigenvalue, e, and FA. A mathematical derivation revealed a linear relationship between PCSA and the third eigenvalue as a result of their dependence on the average radius of all fibers within a single muscle. These findings demonstrated the ability of DTI to differentiate between functionally different muscles in the same region of the body on the basis of their diffusive properties.
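    The anisotropy parameters follow directly from the three eigenvalues of the diffusion tensor; fractional anisotropy, for example, is a textbook formula. A minimal sketch (standard DTI definition, not code from the study):

```python
import numpy as np

# Fractional anisotropy (FA) from the three eigenvalues of a diffusion
# tensor: 0 for isotropic diffusion, approaching 1 for strongly
# directional (fiber-like) diffusion.
def fractional_anisotropy(l1, l2, l3):
    lam = np.array([l1, l2, l3], dtype=float)
    mean = lam.mean()
    num = np.sqrt(((lam - mean) ** 2).sum())
    den = np.sqrt((lam ** 2).sum())
    return np.sqrt(1.5) * num / den

iso = fractional_anisotropy(1.0, 1.0, 1.0)   # isotropic: FA = 0
fib = fractional_anisotropy(1.7, 0.3, 0.3)   # elongated ellipsoid: FA near 0.8
```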

  4. Registration-based interpolation applied to cardiac MRI

    NASA Astrophysics Data System (ADS)

    Ólafsdóttir, Hildur; Pedersen, Henrik; Hansen, Michael S.; Lyksborg, Mark; Hansen, Mads Fogtmann; Darkner, Sune; Larsen, Rasmus

    2010-03-01

    Various approaches have been proposed for segmentation of cardiac MRI. An accurate segmentation of the myocardium and ventricles is essential to determine parameters of interest for the function of the heart, such as the ejection fraction. One problem with MRI is the poor resolution in one dimension. A 3D registration algorithm will typically use a trilinear interpolation of intensities to determine the intensity of a deformed template image. Due to the poor resolution across slices, such linear approximation is highly inaccurate since the assumption of smooth underlying intensities is violated. Registration-based interpolation is based on 2D registrations between adjacent slices and is independent of segmentations. Hence, rather than assuming smoothness in intensity, the assumption is that the anatomy is consistent across slices. The basis for the proposed approach is the set of 2D registrations between each pair of slices, both ways. The intensity of a new slice is then weighted by (i) the deformation functions and (ii) the intensities in the warped images. Unlike the approach by Penney et al. 2004, this approach takes into account deformation both ways, which gives more robustness where correspondence between slices is poor. We demonstrate the approach on a toy example and on a set of cardiac CINE MRI. Qualitative inspection reveals that the proposed approach provides a more convincing transition between slices than images obtained by linear interpolation. A quantitative validation reveals significantly lower reconstruction errors than both linear and registration-based interpolation based on one-way registrations.

  5. Direct Nitrous Oxide Emissions From Tropical And Sub-Tropical Agricultural Systems - A Review And Modelling Of Emission Factors.

    PubMed

    Albanito, Fabrizio; Lebender, Ulrike; Cornulier, Thomas; Sapkota, Tek B; Brentrup, Frank; Stirling, Clare; Hillier, Jon

    2017-03-10

    There has been much debate about the uncertainties associated with the estimation of direct and indirect agricultural nitrous oxide (N2O) emissions in developing countries and in particular from tropical regions. In this study, we report an up-to-date review of the information published in peer-reviewed journals on direct N2O emissions from agricultural systems in tropical and sub-tropical regions. We statistically analyze net-N2O-N emissions to estimate tropic-specific annual N2O emission factors (N2O-EFs) using a Generalized Additive Mixed Model (GAMM), which allowed the effects of multiple covariates to be modelled as linear or smooth non-linear continuous functions. Overall the mean N2O-EF was 1.2% for the tropics and sub-tropics, thus within the uncertainty range of the IPCC EF. On a regional basis, mean N2O-EFs were 1.4% for Africa, 1.1% for Asia, 0.9% for Australia and 1.3% for Central & South America. Our annual N2O-EFs, estimated for a range of fertiliser rates using the available data, do not support recent studies hypothesising a non-linear increase in N2O-EFs as a function of applied N. Our findings highlight that in reporting annual N2O emissions and estimating N2O-EFs, particular attention should be paid to modelling the effect of study length on the response of N2O.
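    In its simplest form, an emission factor of the kind being modelled here is the fertiliser-induced emission (fertilised minus unfertilised control plot) expressed as a percentage of the N applied. A sketch with invented numbers (function name and values are illustrative):

```python
# Annual N2O emission factor: fertiliser-induced N2O-N emission as a
# percentage of the nitrogen applied.  All quantities in kg N ha^-1 yr^-1;
# the numbers below are invented for illustration.
def n2o_emission_factor(n2o_fertilised_kg, n2o_control_kg, n_applied_kg):
    return 100.0 * (n2o_fertilised_kg - n2o_control_kg) / n_applied_kg

# Example: 2.1 kg emitted with fertiliser, 0.9 kg without, 100 kg N applied.
ef = n2o_emission_factor(2.1, 0.9, 100.0)  # 1.2%, the tropical mean reported
```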

  6. Direct Nitrous Oxide Emissions From Tropical And Sub-Tropical Agricultural Systems - A Review And Modelling Of Emission Factors

    PubMed Central

    Albanito, Fabrizio; Lebender, Ulrike; Cornulier, Thomas; Sapkota, Tek B.; Brentrup, Frank; Stirling, Clare; Hillier, Jon

    2017-01-01

    There has been much debate about the uncertainties associated with the estimation of direct and indirect agricultural nitrous oxide (N2O) emissions in developing countries and in particular from tropical regions. In this study, we report an up-to-date review of the information published in peer-reviewed journals on direct N2O emissions from agricultural systems in tropical and sub-tropical regions. We statistically analyze net-N2O-N emissions to estimate tropic-specific annual N2O emission factors (N2O-EFs) using a Generalized Additive Mixed Model (GAMM), which allowed the effects of multiple covariates to be modelled as linear or smooth non-linear continuous functions. Overall the mean N2O-EF was 1.2% for the tropics and sub-tropics, thus within the uncertainty range of the IPCC EF. On a regional basis, mean N2O-EFs were 1.4% for Africa, 1.1% for Asia, 0.9% for Australia and 1.3% for Central & South America. Our annual N2O-EFs, estimated for a range of fertiliser rates using the available data, do not support recent studies hypothesising a non-linear increase in N2O-EFs as a function of applied N. Our findings highlight that in reporting annual N2O emissions and estimating N2O-EFs, particular attention should be paid to modelling the effect of study length on the response of N2O. PMID:28281637

  7. Direct Nitrous Oxide Emissions From Tropical And Sub-Tropical Agricultural Systems - A Review And Modelling Of Emission Factors

    NASA Astrophysics Data System (ADS)

    Albanito, Fabrizio; Lebender, Ulrike; Cornulier, Thomas; Sapkota, Tek B.; Brentrup, Frank; Stirling, Clare; Hillier, Jon

    2017-03-01

    There has been much debate about the uncertainties associated with the estimation of direct and indirect agricultural nitrous oxide (N2O) emissions in developing countries and in particular from tropical regions. In this study, we report an up-to-date review of the information published in peer-reviewed journals on direct N2O emissions from agricultural systems in tropical and sub-tropical regions. We statistically analyze net-N2O-N emissions to estimate tropic-specific annual N2O emission factors (N2O-EFs) using a Generalized Additive Mixed Model (GAMM), which allowed the effects of multiple covariates to be modelled as linear or smooth non-linear continuous functions. Overall the mean N2O-EF was 1.2% for the tropics and sub-tropics, thus within the uncertainty range of the IPCC EF. On a regional basis, mean N2O-EFs were 1.4% for Africa, 1.1% for Asia, 0.9% for Australia and 1.3% for Central & South America. Our annual N2O-EFs, estimated for a range of fertiliser rates using the available data, do not support recent studies hypothesising a non-linear increase in N2O-EFs as a function of applied N. Our findings highlight that in reporting annual N2O emissions and estimating N2O-EFs, particular attention should be paid to modelling the effect of study length on the response of N2O.

  8. Method of assessing the state of a rolling bearing based on the relative compensation distance of multiple-domain features and locally linear embedding

    NASA Astrophysics Data System (ADS)

    Kang, Shouqiang; Ma, Danyang; Wang, Yujing; Lan, Chaofeng; Chen, Qingguo; Mikulovich, V. I.

    2017-03-01

    To effectively assess different fault locations and different degrees of performance degradation of a rolling bearing with a unified assessment index, a novel state assessment method based on the relative compensation distance of multiple-domain features and locally linear embedding is proposed. First, for a single-sample signal, time-domain and frequency-domain indexes can be calculated for the original vibration signal and each sensitive intrinsic mode function obtained by improved ensemble empirical mode decomposition, and the singular values of the sensitive intrinsic mode function matrix can be extracted by singular value decomposition to construct a high-dimensional hybrid-domain feature vector. Second, a feature matrix can be constructed by arranging each feature vector of multiple samples, the dimensions of each row vector of the feature matrix can be reduced by the locally linear embedding algorithm, and the compensation distance of each fault state of the rolling bearing can be calculated using the support vector machine. Finally, the relative distance between different fault locations and different degrees of performance degradation and the normal-state optimal classification surface can be compensated, and on the basis of the proposed relative compensation distance, the assessment model can be constructed and an assessment curve drawn. Experimental results show that the proposed method can effectively assess different fault locations and different degrees of performance degradation of the rolling bearing under certain conditions.
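    The time-domain indexes mentioned above are conventional statistics of the vibration signal. A generic sketch of a few of them (RMS, kurtosis, crest factor), not the paper's exact feature set:

```python
import numpy as np

# Standard time-domain indexes commonly used as bearing-state features.
def time_domain_features(x):
    x = np.asarray(x, dtype=float)
    rms = np.sqrt(np.mean(x ** 2))                       # overall energy
    centred = x - x.mean()
    kurt = np.mean(centred ** 4) / np.mean(centred ** 2) ** 2  # impulsiveness
    crest = np.max(np.abs(x)) / rms                      # peak-to-RMS ratio
    return rms, kurt, crest

# Synthetic "vibration" signal: a sinusoid plus a little noise.
rng = np.random.default_rng(1)
sig = np.sin(np.linspace(0.0, 20.0 * np.pi, 2048)) + 0.1 * rng.normal(size=2048)
rms, kurt, crest = time_domain_features(sig)
```

In the method described above, vectors of such indexes (together with frequency-domain and singular-value features) form the high-dimensional input that locally linear embedding then reduces.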

  9. Massively parallel and linear-scaling algorithm for second-order Møller–Plesset perturbation theory applied to the study of supramolecular wires

    DOE PAGES

    Kjaergaard, Thomas; Baudin, Pablo; Bykov, Dmytro; ...

    2016-11-16

    Here, we present a scalable cross-platform hybrid MPI/OpenMP/OpenACC implementation of the Divide–Expand–Consolidate (DEC) formalism with portable performance on heterogeneous HPC architectures. The Divide–Expand–Consolidate formalism is designed to reduce the steep computational scaling of conventional many-body methods employed in electronic structure theory to linear scaling, while providing a simple mechanism for controlling the error introduced by this approximation. Our massively parallel implementation of this general scheme has three levels of parallelism, being a hybrid of the loosely coupled task-based parallelization approach and the conventional MPI+X programming model, where X is either OpenMP or OpenACC. We demonstrate strong and weak scalability of this implementation on heterogeneous HPC systems, namely on the GPU-based Cray XK7 Titan supercomputer at the Oak Ridge National Laboratory. Using the "resolution of the identity second-order Møller–Plesset perturbation theory" (RI-MP2) as the physical model for simulating correlated electron motion, the linear-scaling DEC implementation is applied to 1-aza-adamantane-trione (AAT) supramolecular wires containing up to 40 monomers (2440 atoms, 6800 correlated electrons, 24 440 basis functions and 91 280 auxiliary functions). This represents the largest molecular system treated at the MP2 level of theory, demonstrating an efficient removal of the scaling wall pertinent to conventional quantum many-body methods.

  10. Interaction effects in Aharonov-Bohm-Kondo rings

    NASA Astrophysics Data System (ADS)

    Komijani, Yashar; Yoshii, Ryosuke; Affleck, Ian

    2013-12-01

    We study the conductance through an Aharonov-Bohm ring, containing a quantum dot in the Kondo regime in one arm, at finite temperature and arbitrary electronic density. We develop a general method for this calculation based on changing the basis to the screening and nonscreening channels. We show that an unusual term appears in the conductance, involving the connected four-point Green's function of the conduction electrons. However, this term and the terms quadratic in the T matrix can be eliminated at sufficiently low temperatures, leading to an expression for the conductance linear in the Kondo T matrix. Explicit results are given for temperatures that are high compared to the Kondo temperature.

  11. A stability theorem for energy-balance climate models

    NASA Technical Reports Server (NTRS)

    Cahalan, R. F.; North, G. R.

    1979-01-01

    The paper treats the stability of steady-state solutions of some simple, latitude-dependent, energy-balance climate models. For north-south symmetric solutions of models with an ice-cap-type albedo feedback, and for the sum of horizontal transport and infrared radiation given by a linear operator, it is possible to prove a 'slope stability' theorem, i.e., if the local slope of the steady-state iceline latitude versus solar constant curve is positive (negative) the steady-state solution is stable (unstable). Certain rather weak restrictions on the albedo function and on the heat transport are required for the proof, and their physical basis is discussed.
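    The stability question can be illustrated on a zero-dimensional analogue of such an energy-balance model (a sketch only; the paper's model is latitude-dependent, and all constants below are invented): steady states are roots of the net flux, and a negative local slope of the net flux with respect to temperature means perturbations decay, i.e. stability.

```python
import numpy as np

# Zero-dimensional energy-balance sketch with a smoothed ice-albedo
# feedback and linearized infrared emission:
#   net flux F(T) = Q*(1 - albedo(T)) - (A + B*(T - 273)).
A, B = 210.0, 2.0                        # OLR coefficients, W m^-2 (illustrative)

def albedo(T):
    # high albedo when ice-covered (cold), low when ice-free (warm)
    return 0.6 - 0.3 / (1.0 + np.exp(-(T - 273.0)))

def net_flux(T, Q):
    return Q * (1.0 - albedo(T)) - (A + B * (T - 273.0))

# Locate steady states on a fine grid and classify each by the local slope:
# negative slope of F at the root => stable equilibrium.
Q = 340.0
T = np.linspace(200.0, 340.0, 20000)
F = net_flux(T, Q)
roots = T[:-1][np.sign(F[:-1]) != np.sign(F[1:])]
h = T[1] - T[0]
stable = [bool(net_flux(r + h, Q) < net_flux(r - h, Q)) for r in roots]
```

With these invented constants the model has the familiar three equilibria (cold stable, intermediate unstable, warm stable), mirroring the slope-based stability classification proved in the paper for the latitude-dependent case.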

  12. RELATIVISTIC CYCLOTRON INSTABILITY IN ANISOTROPIC PLASMAS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    López, Rodrigo A.; Moya, Pablo S.; Muñoz, Víctor

    2016-11-20

    A sufficiently large temperature anisotropy can sometimes drive various types of electromagnetic plasma micro-instabilities, which can play an important role in the dynamics of relativistic pair plasmas in space, astrophysics, and laboratory environments. Here, we provide a detailed description of the cyclotron instability of parallel-propagating electromagnetic waves in relativistic pair plasmas on the basis of a relativistic anisotropic distribution function. Using plasma kinetic theory and particle-in-cell simulations, we study the influence of the relativistic temperature and the temperature anisotropy on the collective and noncollective modes of these plasmas. Growth rates and dispersion curves from the linear theory show a good agreement with simulation results.

  13. UK-5 Van Allen belt radiation exposure: A special study to determine the trapped particle intensities on the UK-5 satellite with spatial mapping of the ambient flux environment

    NASA Technical Reports Server (NTRS)

    Stassinopoulos, E. G.

    1972-01-01

    Vehicle encountered electron and proton fluxes were calculated for a set of nominal UK-5 trajectories with new computational methods and new electron environment models. Temporal variations in the electron data were considered and partially accounted for. Field strength calculations were performed with an extrapolated model on the basis of linear secular variation predictions. Tabular maps for selected electron and proton energies were constructed as functions of latitude and longitude for specified altitudes. Orbital flux integration results are presented in graphical and tabular form; they are analyzed, explained, and discussed.

  14. LAPR: An experimental aircraft pushbroom scanner

    NASA Technical Reports Server (NTRS)

    Wharton, S. W.; Irons, J. I.; Heugel, F.

    1980-01-01

    A three-band Linear Array Pushbroom Radiometer (LAPR) was built and flown on an experimental basis by NASA at the Goddard Space Flight Center. The functional characteristics of the instrument and the methods used to preprocess the data, including radiometric correction, are described. The radiometric sensitivity of the instrument was tested and compared to that of the Thematic Mapper and the Multispectral Scanner. The radiometric correction procedure was evaluated quantitatively, using laboratory testing, and qualitatively, via visual examination of the LAPR test flight imagery. Although effective radiometric correction could not yet be demonstrated via laboratory testing, radiometric distortion did not preclude the visual interpretation or parallelepiped classification of the test imagery.
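    Parallelepiped classification, the classic multispectral technique mentioned above, assigns a pixel to a class only when every band value falls inside that class's per-band min/max box. A minimal sketch with invented class boxes (not the LAPR processing code):

```python
import numpy as np

# Parallelepiped classifier: each class is a per-band [min, max] box; a
# pixel is assigned to the first box containing it in every band, else
# left unclassified (-1).  Class boxes and pixel values are invented.
def parallelepiped_classify(pixels, boxes):
    labels = np.full(len(pixels), -1)
    for label, (lo, hi) in boxes.items():
        inside = np.all((pixels >= lo) & (pixels <= hi), axis=1)
        labels[inside & (labels == -1)] = label
    return labels

boxes = {0: (np.array([10, 40, 20]), np.array([30, 60, 45])),    # e.g. water
         1: (np.array([50, 70, 60]), np.array([90, 110, 100]))}  # e.g. vegetation
pix = np.array([[20, 50, 30], [70, 90, 80], [0, 0, 0]])
labels = parallelepiped_classify(pix, boxes)
```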

  15. Mesh-free based variational level set evolution for breast region segmentation and abnormality detection using mammograms.

    PubMed

    Kashyap, Kanchan L; Bajpai, Manish K; Khanna, Pritee; Giakos, George

    2018-01-01

    Automatic segmentation of abnormal regions is a crucial task in computer-aided detection systems using mammograms. In this work, an automatic abnormality detection algorithm using mammographic images is proposed. In the preprocessing step, a partial differential equation-based variational level set method is used for breast region extraction. The evolution of the level set method is done by applying a mesh-free radial basis function (RBF) method, which removes the limitations of the mesh-based approach. For comparison, the evolution of the variational level set function is also done by a mesh-based finite difference method. Unsharp masking and median filtering are used for mammogram enhancement. Suspicious abnormal regions are segmented by applying fuzzy c-means clustering. Texture features are extracted from the segmented suspicious regions by computing the local binary pattern and dominant rotated local binary pattern (DRLBP). Finally, suspicious regions are classified as normal or abnormal by means of a support vector machine with linear, multilayer perceptron, radial basis, and polynomial kernel functions. The algorithm is validated on 322 sample mammograms of the mammographic image analysis society (MIAS) and 500 mammograms from the digital database for screening mammography (DDSM) datasets. Proficiency of the algorithm is quantified using sensitivity, specificity, and accuracy. The highest sensitivity, specificity, and accuracy of 93.96%, 95.01%, and 94.48%, respectively, are obtained on the MIAS dataset using the DRLBP feature with the RBF kernel function, whereas the highest 92.31% sensitivity, 98.45% specificity, and 96.21% accuracy are achieved on the DDSM dataset using the DRLBP feature with the RBF kernel function. Copyright © 2017 John Wiley & Sons, Ltd.
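    The local binary pattern underlying the texture features compares each 3x3 neighbourhood with its centre pixel and packs the comparison bits into one code. A generic sketch of the basic LBP (not the DRLBP variant used in the paper):

```python
import numpy as np

# Basic 3x3 local binary pattern: each of the 8 neighbours contributes a
# bit (1 if >= centre), and the bits are packed into a code in 0..255.
def lbp_code(patch):
    assert patch.shape == (3, 3)
    centre = patch[1, 1]
    # neighbours in clockwise order starting at top-left
    order = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    bits = [1 if patch[r, c] >= centre else 0 for r, c in order]
    return sum(b << i for i, b in enumerate(bits))

# A bright top edge over a dark region: only the top three neighbours set.
patch = np.array([[9, 9, 9],
                  [1, 5, 1],
                  [1, 1, 1]])
code = lbp_code(patch)
```

Histograms of such codes over a region form the texture descriptor fed to the classifier.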

  16. Scalability improvements to NRLMOL for DFT calculations of large molecules

    NASA Astrophysics Data System (ADS)

    Diaz, Carlos Manuel

    Advances in high performance computing (HPC) have provided a way to treat large, computationally demanding tasks using thousands of processors. With the development of more powerful HPC architectures, the need to create efficient and scalable code has grown more important. Electronic structure calculations are valuable in understanding experimental observations and are routinely used for new materials predictions. For electronic structure calculations, the memory and computation time grow with the number of atoms; memory requirements scale as N², where N is the number of atoms. While recent advances in HPC offer platforms with large numbers of cores, the limited amount of memory available on a given node and the poor scalability of the electronic structure code hinder the efficient usage of these platforms. This thesis will present some developments to overcome these bottlenecks in order to study large systems. These developments, which are implemented in the NRLMOL electronic structure code, involve the use of sparse matrix storage formats and of linear algebra with sparse and distributed matrices. These developments, along with other related work, now allow ground state density functional calculations using up to 25,000 basis functions and excited state calculations using up to 17,000 basis functions while utilizing all cores on a node. An example on a light-harvesting triad molecule is described. Finally, future plans to further improve the scalability will be presented.
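    Sparse storage formats such as compressed sparse row (CSR) keep only the non-zero entries, so memory scales with the number of non-zeros rather than as N². A toy illustration of the memory saving (not NRLMOL code), using a banded matrix like those arising from localized basis functions:

```python
import numpy as np
from scipy.sparse import csr_matrix

# A 1000x1000 matrix with only a diagonal and a superdiagonal non-zero,
# standing in for a sparse operator matrix over localized basis functions.
dense = np.zeros((1000, 1000))
dense[np.arange(1000), np.arange(1000)] = 2.0      # diagonal
dense[np.arange(999), np.arange(1, 1000)] = -1.0   # superdiagonal

sparse = csr_matrix(dense)

dense_bytes = dense.nbytes                          # 8 MB for the full array
sparse_bytes = (sparse.data.nbytes + sparse.indices.nbytes
                + sparse.indptr.nbytes)             # only non-zeros + indexing
```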

  17. Electroconvulsive therapy selectively enhanced feedforward connectivity from fusiform face area to amygdala in major depressive disorder.

    PubMed

    Wang, Jiaojian; Wei, Qiang; Bai, Tongjian; Zhou, Xiaoqin; Sun, Hui; Becker, Benjamin; Tian, Yanghua; Wang, Kai; Kendrick, Keith

    2017-12-01

    Electroconvulsive therapy (ECT) has been widely used to treat major depressive disorder (MDD), especially treatment-resistant depression. However, the neuroanatomical basis of ECT remains an open problem. In our study, we combined voxel-based morphometry (VBM), resting-state functional connectivity (RSFC) and Granger causality analysis (GCA) to identify the longitudinal changes of structure and function in 23 MDD patients before and after ECT. In addition, multivariate pattern analysis using a linear support vector machine (SVM) was applied to classify the 23 depressed patients from 25 gender-, age- and education-matched healthy controls. VBM analysis revealed an increased gray matter volume of the left superficial amygdala after ECT. The subsequent RSFC and GCA analyses further identified enhanced functional connectivity between the left amygdala and left fusiform face area (FFA) and enhanced effective connectivity from the FFA to the amygdala after ECT, respectively. Moreover, SVM-based classification achieved an accuracy of 83.33%, a sensitivity of 82.61% and a specificity of 84% by leave-one-out cross-validation. Our findings indicated that ECT may facilitate neurogenesis of the amygdala and selectively enhance the feedforward cortical-subcortical connectivity from the FFA to the amygdala. This study may shed new light on the pathological mechanism of MDD and may provide a neuroanatomical basis for ECT. © The Author (2017). Published by Oxford University Press.
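    Leave-one-out cross-validation, the scheme used to report the accuracy above, trains on all subjects but one and tests on the held-out subject, once per subject. A sketch of the protocol on synthetic two-class data, with a simple nearest-centroid classifier standing in for the paper's linear SVM:

```python
import numpy as np

# Leave-one-out cross-validation with a nearest-centroid classifier
# (a stand-in for the linear SVM used in the study; data are synthetic).
def loo_accuracy(X, y):
    correct = 0
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        Xtr, ytr = X[mask], y[mask]            # train without subject i
        centroids = {c: Xtr[ytr == c].mean(axis=0) for c in np.unique(ytr)}
        pred = min(centroids, key=lambda c: np.linalg.norm(X[i] - centroids[c]))
        correct += int(pred == y[i])
    return correct / len(y)

# Two well-separated synthetic groups, e.g. 25 "controls" and 23 "patients".
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0.0, 1.0, (25, 5)), rng.normal(3.0, 1.0, (23, 5))])
y = np.array([0] * 25 + [1] * 23)
acc = loo_accuracy(X, y)
```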

  18. Enhanced Vertical Perception through Head-Related Impulse Response Customization Based on Pinna Response Tuning in the Median Plane

    NASA Astrophysics Data System (ADS)

    Shin, Ki Hoon; Park, Youngjin

    Humans' ability to perceive the elevation of a sound and to distinguish whether a sound is coming from the front or rear depends strongly on the monaural spectral features of the pinnae. In order to realize an effective virtual auditory display by HRTF (head-related transfer function) customization, the pinna responses were isolated from the median-plane HRIRs (head-related impulse responses) of 45 individuals in the CIPIC HRTF database and modeled as linear combinations of 4 or 5 basic temporal shapes (basis functions) per elevation on the median plane by PCA (principal components analysis) in the time domain. By tuning the weight of each basis function computed for a specific height to replace the pinna response in the KEMAR HRIR at the same height with the resulting customized pinna response, and listening to the filtered stimuli over headphones, 4 individuals with normal hearing sensitivity were able to create a set of HRIRs that outperformed the KEMAR HRIRs in producing vertical effects with reduced front/back ambiguity in the median plane. Since the monaural spectral features of the pinnae are almost independent of azimuthal variation of the source direction, similar vertical effects could also be generated at different azimuthal directions simply by varying the ITD (interaural time difference) according to the direction as well as the size of each individual's own head.
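    Modelling a set of responses as linear combinations of a few basic temporal shapes is exactly what time-domain PCA provides. A sketch via the SVD, with synthetic damped sinusoids standing in for measured pinna responses (all data are invented):

```python
import numpy as np

# PCA of 45 synthetic "impulse responses": each is rebuilt as the mean
# plus a weighted sum of k basic temporal shapes (principal components).
rng = np.random.default_rng(3)
t = np.linspace(0.0, 1.0, 128)
responses = np.array([np.exp(-4.0 * t) * np.sin(2.0 * np.pi * f * t)
                      for f in rng.uniform(3.0, 6.0, 45)])

mean = responses.mean(axis=0)
centred = responses - mean
U, s, Vt = np.linalg.svd(centred, full_matrices=False)

k = 5                                   # keep 5 basis functions, as above
weights = U[:, :k] * s[:k]              # per-response weights
approx = mean + weights @ Vt[:k]        # reconstruction from 5 shapes

rel_err = np.linalg.norm(approx - responses) / np.linalg.norm(responses)
```

Tuning the entries of `weights` is the customization step described above: each response is controlled through a handful of interpretable coefficients rather than its full 128 samples.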

  19. Quantum Dynamics with Short-Time Trajectories and Minimal Adaptive Basis Sets.

    PubMed

    Saller, Maximilian A C; Habershon, Scott

    2017-07-11

    Methods for solving the time-dependent Schrödinger equation via basis set expansion of the wave function can generally be categorized as having either static (time-independent) or dynamic (time-dependent) basis functions. We have recently introduced an alternative simulation approach which represents a middle road between these two extremes, employing dynamic (classical-like) trajectories to create a static basis set of Gaussian wavepackets in regions of phase-space relevant to future propagation of the wave function [J. Chem. Theory Comput., 11, 8 (2015)]. Here, we propose and test a modification of our methodology which aims to reduce the size of basis sets generated in our original scheme. In particular, we employ short-time classical trajectories to continuously generate new basis functions for short-time quantum propagation of the wave function; to avoid the continued growth of the basis set describing the time-dependent wave function, we employ Matching Pursuit to periodically minimize the number of basis functions required to accurately describe the wave function. Overall, this approach generates a basis set which is adapted to evolution of the wave function while also being as small as possible. In applications to challenging benchmark problems, namely a 4-dimensional model of photoexcited pyrazine and three different double-well tunnelling problems, we find that our new scheme enables accurate wave function propagation with basis sets which are around an order-of-magnitude smaller than our original trajectory-guided basis set methodology, highlighting the benefits of adaptive strategies for wave function propagation.
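    Matching Pursuit, used above to prune the basis, greedily selects from a redundant dictionary the atoms that best explain a target, so that far fewer elements need be kept. A generic numpy sketch of the algorithm (dictionary and target are invented; a real Gaussian wavepacket basis would additionally require overlap-matrix bookkeeping):

```python
import numpy as np

# Matching Pursuit: repeatedly pick the unit-norm atom with the largest
# overlap with the current residual and subtract its projection.
def matching_pursuit(target, dictionary, n_atoms):
    residual = target.copy()
    chosen, coeffs = [], []
    for _ in range(n_atoms):
        scores = dictionary @ residual          # overlaps with unit-norm atoms
        best = int(np.argmax(np.abs(scores)))
        chosen.append(best)
        coeffs.append(scores[best])
        residual = residual - scores[best] * dictionary[best]
    return chosen, coeffs, residual

# Redundant dictionary: 50 random unit atoms in a 32-dimensional space.
rng = np.random.default_rng(4)
atoms = rng.normal(size=(50, 32))
atoms /= np.linalg.norm(atoms, axis=1, keepdims=True)
target = 2.0 * atoms[7] + 0.5 * atoms[19]       # sparse ground truth

chosen, coeffs, residual = matching_pursuit(target, atoms, 8)
```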

  20. Genomic prediction based on data from three layer lines using non-linear regression models.

    PubMed

    Huang, Heyun; Windig, Jack J; Vereijken, Addie; Calus, Mario P L

    2014-11-06

    Most studies on genomic prediction with reference populations that include multiple lines or breeds have used linear models. Data heterogeneity due to using multiple populations may conflict with model assumptions used in linear regression methods. In an attempt to alleviate potential discrepancies between assumptions of linear models and multi-population data, two types of alternative models were used: (1) a multi-trait genomic best linear unbiased prediction (GBLUP) model that modelled trait by line combinations as separate but correlated traits and (2) non-linear models based on kernel learning. These models were compared to conventional linear models for genomic prediction for two lines of brown layer hens (B1 and B2) and one line of white hens (W1). The three lines each had 1004 to 1023 training and 238 to 240 validation animals. Prediction accuracy was evaluated by estimating the correlation between observed phenotypes and predicted breeding values. When the training dataset included only data from the evaluated line, non-linear models yielded at best a similar accuracy to linear models. In some cases, when adding a distantly related line, the linear models showed a slight decrease in performance, while non-linear models generally showed no change in accuracy. When only information from a closely related line was used for training, linear models and non-linear radial basis function (RBF) kernel models performed similarly. The multi-trait GBLUP model took advantage of the estimated genetic correlations between the lines. Combining linear and non-linear models improved the accuracy of multi-line genomic prediction. Linear models and non-linear RBF models performed very similarly for genomic prediction, despite the expectation that non-linear models could deal better with the heterogeneous multi-population data. This heterogeneity of the data can be overcome by modelling trait by line combinations as separate but correlated traits, which avoids the occasional occurrence of large negative accuracies when the evaluated line was not included in the training dataset. Furthermore, when using a multi-line training dataset, non-linear models provided information on the genotype data that was complementary to the linear models, which indicates that the underlying data distributions of the three studied lines were indeed heterogeneous.
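    The contrast the record draws between linear and RBF-kernel genomic prediction can be sketched with ridge regression versus kernel ridge regression. This is a minimal illustration on made-up genotype codes, not the GBLUP machinery of the study itself; all data and parameter values are invented.

```python
import numpy as np

def linear_ridge_predict(X, y, X_new, lam=1e-3):
    """Ridge regression: effects strictly linear in the genotype codes."""
    beta = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
    return X_new @ beta

def rbf_kernel(A, B, gamma=0.5):
    """Gaussian (RBF) kernel matrix between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

def rbf_ridge_predict(X, y, X_new, lam=1e-3, gamma=0.5):
    """Kernel ridge regression: non-linear in the inputs."""
    alpha = np.linalg.solve(rbf_kernel(X, X, gamma) + lam * np.eye(len(y)), y)
    return rbf_kernel(X_new, X, gamma) @ alpha

rng = np.random.default_rng(0)
X = rng.choice([0.0, 1.0, 2.0], size=(40, 10))          # toy SNP codes
y = X[:, 0] * X[:, 1] + 0.1 * rng.standard_normal(40)   # epistatic signal
print(rbf_ridge_predict(X, y, X[:5]))
```

    Because the simulated signal is an interaction between two loci, the kernel model fits the training data much better than the purely additive linear model.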

  1. Two fast and accurate heuristic RBF learning rules for data classification.

    PubMed

    Rouhani, Modjtaba; Javan, Dawood S

    2016-03-01

    This paper presents new Radial Basis Function (RBF) learning methods for classification problems. The proposed methods use heuristics to determine the spreads, the centers and the number of hidden neurons of the network in such a way that higher efficiency is achieved with fewer neurons, while the learning algorithm remains fast and simple. To keep the network size limited, neurons are added to the network recursively until a termination condition is met. Each neuron covers some of the training data. The termination condition is to cover all training data or to reach the maximum number of neurons. In each step, the center and spread of the new neuron are selected to maximize its coverage. Maximizing the coverage of the neurons leads to a network with fewer neurons, and hence lower VC dimension and better generalization. Using the power exponential distribution function as the activation function of hidden neurons, and in light of the new learning approaches, it is proved that all data become linearly separable in the space of hidden layer outputs, which implies that there exist linear output layer weights with zero training error. The proposed methods are applied to some well-known datasets and the simulation results, compared with SVM and some other leading RBF learning methods, show their satisfactory and comparable performance. Copyright © 2015 Elsevier Ltd. All rights reserved.
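    A simplified version of the coverage idea can be sketched as follows: greedily add a neuron at the training sample whose radius-ball covers the most still-uncovered same-class points, then solve the linear output weights by least squares. This is only a sketch of the general approach, not the paper's exact rules (a Gaussian stands in for the power exponential activation, and the spread is fixed by hand):

```python
import numpy as np

def greedy_rbf_centers(X, y, radius):
    """Greedily add centers until every sample is covered: each new
    center is the sample whose radius-ball covers the most still-
    uncovered points of the same class (simplified coverage heuristic)."""
    uncovered = np.ones(len(X), dtype=bool)
    centers = []
    while uncovered.any():
        best, best_cover = None, None
        for i in np.flatnonzero(uncovered):
            d = np.linalg.norm(X - X[i], axis=1)
            cover = uncovered & (d <= radius) & (y == y[i])
            if best is None or cover.sum() > best_cover.sum():
                best, best_cover = i, cover
        centers.append(best)
        uncovered &= ~best_cover
    return np.array(centers)

def rbf_design(X, centers_X, spread):
    """Hidden-layer outputs for Gaussian neurons at the given centers."""
    d2 = ((X[:, None, :] - centers_X[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2 * spread ** 2))

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(2, 0.3, (20, 2))])
y = np.array([0] * 20 + [1] * 20)
c = greedy_rbf_centers(X, y, radius=1.0)
H = rbf_design(X, X[c], spread=0.7)
# linear output weights: least squares in the hidden-layer space
w, *_ = np.linalg.lstsq(np.c_[H, np.ones(len(X))], y, rcond=None)
pred = (np.c_[H, np.ones(len(X))] @ w > 0.5).astype(int)
print((pred == y).mean())
```

    On these two well-separated clusters a handful of neurons suffices and the linear output layer reaches essentially zero training error, illustrating the separability claim.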

  2. Determining conformational order and crystallinity in polycaprolactone via Raman spectroscopy

    PubMed Central

    Kotula, Anthony P.; Snyder, Chad R.; Migler, Kalman B.

    2017-01-01

    Raman spectroscopy is a popular method for non-invasive analysis of biomaterials containing polycaprolactone in applications such as tissue engineering and drug delivery. However, there remain fundamental challenges in interpreting such spectra in the context of existing dielectric spectroscopy and differential scanning calorimetry results in both the melt and semi-crystalline states. In this work, we develop a thermodynamically informed analysis method which utilizes basis spectra: ideal spectra of the polymer chain conformers comprising the measured Raman spectrum. In polycaprolactone we identify three basis spectra in the carbonyl region; measurement of their temperature dependence shows that one is linearly proportional to crystallinity, a second correlates with dipole-dipole interactions observed in dielectric spectroscopy, and a third correlates with amorphous chain behavior. For other spectral regions, e.g. the C-COO stretch, a comparison of the basis spectra to those from density functional theory calculations in the all-trans configuration allows us to indicate whether sharp spectral peaks can be attributed to single chain modes in the all-trans state or to crystalline order. Our analysis method is general and should provide important insights into other polymeric materials. PMID:28824207

  3. Qudit-Basis Universal Quantum Computation Using χ^{(2)} Interactions.

    PubMed

    Niu, Murphy Yuezhen; Chuang, Isaac L; Shapiro, Jeffrey H

    2018-04-20

    We prove that universal quantum computation can be realized-using only linear optics and χ^{(2)} (three-wave mixing) interactions-in any (n+1)-dimensional qudit basis of the n-pump-photon subspace. First, we exhibit a strictly universal gate set for the qubit basis in the one-pump-photon subspace. Next, we demonstrate qutrit-basis universality by proving that χ^{(2)} Hamiltonians and photon-number operators generate the full u(3) Lie algebra in the two-pump-photon subspace, and showing how the qutrit controlled-Z gate can be implemented with only linear optics and χ^{(2)} interactions. We then use proof by induction to obtain our general qudit result. Our induction proof relies on coherent photon injection or subtraction, a technique enabled by χ^{(2)} interaction between the encoding modes and ancillary modes. Finally, we show that coherent photon injection is more than a conceptual tool, in that it offers a route to preparing high-photon-number Fock states from single-photon Fock states.

  4. Qudit-Basis Universal Quantum Computation Using χ(2) Interactions

    NASA Astrophysics Data System (ADS)

    Niu, Murphy Yuezhen; Chuang, Isaac L.; Shapiro, Jeffrey H.

    2018-04-01

    We prove that universal quantum computation can be realized—using only linear optics and χ(2) (three-wave mixing) interactions—in any (n+1)-dimensional qudit basis of the n-pump-photon subspace. First, we exhibit a strictly universal gate set for the qubit basis in the one-pump-photon subspace. Next, we demonstrate qutrit-basis universality by proving that χ(2) Hamiltonians and photon-number operators generate the full u(3) Lie algebra in the two-pump-photon subspace, and showing how the qutrit controlled-Z gate can be implemented with only linear optics and χ(2) interactions. We then use proof by induction to obtain our general qudit result. Our induction proof relies on coherent photon injection or subtraction, a technique enabled by χ(2) interaction between the encoding modes and ancillary modes. Finally, we show that coherent photon injection is more than a conceptual tool, in that it offers a route to preparing high-photon-number Fock states from single-photon Fock states.

  5. Linear-scaling explicitly correlated treatment of solids: periodic local MP2-F12 method.

    PubMed

    Usvyat, Denis

    2013-11-21

    Theory and implementation of the periodic local MP2-F12 method in the 3*A fixed-amplitude ansatz is presented. The method is formulated in the direct space, employing local representation for the occupied, virtual, and auxiliary orbitals in the form of Wannier functions (WFs), projected atomic orbitals (PAOs), and atom-centered Gaussian-type orbitals, respectively. Local approximations are introduced, restricting the list of the explicitly correlated pairs, as well as the occupied, virtual, and auxiliary spaces in the strong orthogonality projector, to the pair-specific domains on the basis of spatial proximity of the respective orbitals. The 4-index two-electron integrals appearing in the formalism are approximated via the direct-space density fitting technique. In this procedure, the fitting orbital spaces are also restricted to local fit-domains surrounding the fitted densities. The formulation of the method and its implementation exploit the translational symmetry and the site-group symmetries of the WFs. Test calculations are performed on the LiH crystal. The results show that the periodic LMP2-F12 method substantially accelerates basis set convergence of the total correlation energy, and even more so of the correlation energy differences. The resulting energies are quite insensitive to the resolution-of-the-identity domain sizes and the quality of the auxiliary basis sets. The convergence with the orbital domain size is somewhat slower, but still acceptable. Moreover, inclusion of slightly more diffuse functions than those usually used in periodic calculations improves the convergence of the LMP2-F12 correlation energy with respect to both the size of the PAO-domains and the quality of the orbital basis set. At the same time, the essentially diffuse atomic orbitals from standard molecular basis sets, commonly utilized in molecular MP2-F12 calculations but problematic in the periodic context, are not necessary for LMP2-F12 treatment of crystals.

  6. Modeling exposure–lag–response associations with distributed lag non-linear models

    PubMed Central

    Gasparrini, Antonio

    2014-01-01

    In biomedical research, a health effect is frequently associated with protracted exposures of varying intensity sustained in the past. The main complexity of modeling and interpreting such phenomena lies in the additional temporal dimension needed to express the association, as the risk depends on both intensity and timing of past exposures. This type of dependency is defined here as exposure–lag–response association. In this contribution, I illustrate a general statistical framework for such associations, established through the extension of distributed lag non-linear models, originally developed in time series analysis. This modeling class is based on the definition of a cross-basis, obtained by the combination of two functions to flexibly model linear or nonlinear exposure-responses and the lag structure of the relationship, respectively. The methodology is illustrated with an example application to cohort data and validated through a simulation study. This modeling framework generalizes to various study designs and regression models, and can be applied to study the health effects of protracted exposures to environmental factors, drugs or carcinogenic agents, among others. © 2013 The Authors. Statistics in Medicine published by John Wiley & Sons, Ltd. PMID:24027094
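    The cross-basis construction can be sketched in a few lines: apply one basis to the matrix of lagged exposures, apply another to the lag dimension, and sum their tensor product over lags. Polynomial bases stand in here for the splines usually used in practice (e.g. in the R dlnm package), and the exposure series is synthetic:

```python
import numpy as np

def poly_basis(v, degree):
    """Simple polynomial basis, a stand-in for spline bases."""
    return np.stack([v ** d for d in range(1, degree + 1)], axis=-1)

def cross_basis(x, max_lag, deg_x=2, deg_lag=2):
    """Cross-basis of a distributed lag non-linear model: tensor product
    of an exposure basis and a lag basis over the lagged exposures."""
    n = len(x)
    lags = np.arange(max_lag + 1)
    # lagged exposure matrix Q[t, l] = x[t - l] (rows with full history)
    Q = np.stack([x[max_lag - l : n - l] for l in lags], axis=1)
    Bx = poly_basis(Q, deg_x)              # shape (n', L+1, p)
    Bl = poly_basis(lags + 1.0, deg_lag)   # shape (L+1, q)
    W = np.einsum('tlp,lq->tpq', Bx, Bl)   # sum over lags
    return W.reshape(W.shape[0], -1)       # design matrix (n', p*q)

x = np.sin(np.linspace(0, 6, 50))
CB = cross_basis(x, max_lag=4)
print(CB.shape)
```

    The resulting matrix enters a regression model like any other design matrix; the fitted coefficients then describe risk jointly along the exposure and lag dimensions.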

  7. Construction accident narrative classification: An evaluation of text mining techniques.

    PubMed

    Goh, Yang Miang; Ubeynarayana, C U

    2017-11-01

    Learning from past accidents is fundamental to accident prevention. Thus, accident and near miss reporting are encouraged by organizations and regulators. However, for organizations managing large safety databases, the time taken to accurately classify accident and near miss narratives will be very significant. This study aims to evaluate the utility of various text mining classification techniques in classifying 1000 publicly available construction accident narratives obtained from the US OSHA website. The study evaluated six machine learning algorithms, including support vector machine (SVM), linear regression (LR), random forest (RF), k-nearest neighbor (KNN), decision tree (DT) and Naive Bayes (NB), and found that SVM produced the best performance in classifying the test set of 251 cases. Further experimentation with tokenization of the processed text and non-linear SVM were also conducted. In addition, a grid search was conducted on the hyperparameters of the SVM models. It was found that the best performing classifiers were linear SVM with unigram tokenization and radial basis function (RBF) SVM with unigram tokenization. In view of its relative simplicity, the linear SVM is recommended. Across the 11 labels of accident causes or types, the precision of the linear SVM ranged from 0.5 to 1, recall ranged from 0.36 to 0.9 and the F1 score was between 0.45 and 0.92. The reasons for misclassification are discussed and suggestions on ways to improve performance are provided. Copyright © 2017 Elsevier Ltd. All rights reserved.
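    The unigram-plus-linear-classifier pipeline can be illustrated compactly. In practice one would use a library SVM (e.g. scikit-learn's LinearSVC); here a ridge-regularized least-squares classifier stands in for the linear SVM so the sketch is dependency-free, and the narratives and labels are invented:

```python
import numpy as np

def unigram_vectorize(docs, vocab=None):
    """Unigram (bag-of-words) count vectors over a whitespace tokenizer."""
    tokens = [d.lower().split() for d in docs]
    if vocab is None:
        vocab = sorted({w for t in tokens for w in t})
    index = {w: j for j, w in enumerate(vocab)}
    X = np.zeros((len(docs), len(vocab)))
    for i, t in enumerate(tokens):
        for w in t:
            if w in index:
                X[i, index[w]] += 1
    return X, vocab

# invented mini-corpus with two accident-type labels
train = ["worker fell from scaffold", "fall from roof edge",
         "struck by falling object", "worker struck by excavator"]
labels = np.array([0, 0, 1, 1])            # 0 = fall, 1 = struck-by
X, vocab = unigram_vectorize(train)
# ridge-regularized least squares as a stand-in for a linear SVM
w = np.linalg.solve(X.T @ X + 0.1 * np.eye(X.shape[1]), X.T @ (2 * labels - 1))
Xt, _ = unigram_vectorize(["carpenter struck by beam"], vocab)
pred = int((Xt @ w)[0] > 0)
print(pred)   # 1, i.e. classified as struck-by
```

    Out-of-vocabulary words ("carpenter", "beam") simply contribute nothing, so the decision rests on the weights of "struck" and "by", which only occur in struck-by narratives.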

  8. Influence of stress interaction on the behavior of off-axis unidirectional composites

    NASA Technical Reports Server (NTRS)

    Pindera, M. J.; Herakovich, C. T.

    1980-01-01

    The yield function for plane stress of a transversely isotropic composite lamina consisting of stiff, linearly elastic fibers and a von Mises matrix material is formulated in terms of Hill's elastic stress concentration factors and a single plastic constraint parameter. The above are subsequently evaluated on the basis of observed average lamina and constituent response for the Avco 5505 boron epoxy system. It is shown that inclusion of residual stresses in the yield function together with the incorporation of Dubey and Hillier's concept of generalized yield stress for anisotropic media in the constitutive equation correctly predicts the trends observed in experiments. The incorporation of the strong axial stress interaction necessary to predict the correct trends in the shear response is directly traced to the high residual axial stresses in the matrix induced during fabrication of the composite.

  9. Application of the exact exchange potential method for half metallic intermediate band alloy semiconductor.

    PubMed

    Fernández, J J; Tablero, C; Wahnón, P

    2004-06-08

    In this paper we present an analysis of the convergence of band structure properties, particularly how the use of the exact exchange formalism modifies the bandgap and bandwidth values in half-metallic compounds. This formalism for general solids has been implemented using a localized basis set of numerical functions to represent the exchange density. The implementation has been carried out using a code which uses a linear combination of confined numerical pseudoatomic functions to represent the Kohn-Sham orbitals. The application of this exact exchange scheme to a half-metallic semiconductor compound, in particular to Ga(4)P(3)Ti, a promising material in the field of high efficiency solar cells, confirms the existence of the isolated intermediate band in this compound. (c) 2004 American Institute of Physics.

  10. TD-CI simulation of the electronic optical response of molecules in intense fields II: comparison of DFT functionals and EOM-CCSD.

    PubMed

    Sonk, Jason A; Schlegel, H Bernhard

    2011-10-27

    Time-dependent configuration interaction (TD-CI) simulations can be used to simulate molecules in intense laser fields. TD-CI calculations use the excitation energies and transition dipoles calculated in the absence of a field. The EOM-CCSD method provides a good estimate of the field-free excited states but is rather expensive. Linear-response time-dependent density functional theory (TD-DFT) is an inexpensive alternative for computing the field-free excitation energies and transition dipoles needed for TD-CI simulations. Linear-response TD-DFT calculations were carried out with standard functionals (B3LYP, BH&HLYP, HSE2PBE (HSE03), BLYP, PBE, PW91, and TPSS) and long-range corrected functionals (LC-ωPBE, ωB97XD, CAM-B3LYP, LC-BLYP, LC-PBE, LC-PW91, and LC-TPSS). These calculations used the 6-31G(d,p) basis set augmented with three sets of diffuse sp functions on each heavy atom. Butadiene was employed as a test case, and 500 excited states were calculated with each functional. Standard functionals yield average excitation energies that are significantly lower than the EOM-CC values, while long-range corrected functionals tend to produce average excitation energies that are slightly higher. Long-range corrected functionals also yield transition dipoles that are somewhat larger than EOM-CC on average. The TD-CI simulations were carried out with a three-cycle Gaussian pulse (ω = 0.06 au, 760 nm) with intensities up to 1.26 × 10(14) W cm(-2) directed along the vector connecting the end carbons. The nonlinear response, as indicated by the residual populations of the excited states after the pulse, is far too large with standard functionals, primarily because the excitation energies are too low. The LC-ωPBE, LC-PBE, LC-PW91, and LC-TPSS long-range corrected functionals produce responses comparable to EOM-CC.

  11. Identifying Neural Patterns of Functional Dyspepsia Using Multivariate Pattern Analysis: A Resting-State fMRI Study

    PubMed Central

    Liu, Peng; Qin, Wei; Wang, Jingjing; Zeng, Fang; Zhou, Guangyu; Wen, Haixia; von Deneen, Karen M.; Liang, Fanrong; Gong, Qiyong; Tian, Jie

    2013-01-01

    Background Previous imaging studies on functional dyspepsia (FD) have focused on abnormal brain functions during special tasks, while few studies concentrated on the resting-state abnormalities of FD patients, which might be potentially valuable to provide us with direct information about the neural basis of FD. The main purpose of the current study was thereby to characterize the distinct patterns of resting-state function between FD patients and healthy controls (HCs). Methodology/Principal Findings Thirty FD patients and thirty HCs were enrolled and underwent 5-minute resting-state scanning. Based on the support vector machine (SVM), we applied multivariate pattern analysis (MVPA) to investigate the differences of resting-state function mapped by regional homogeneity (ReHo). A classifier was designed by using principal component analysis and the linear SVM. A permutation test was then employed to identify the significant contributions to the final discrimination. The results showed that the mean classifier accuracy was 86.67%, and highly discriminative brain regions mainly included the prefrontal cortex (PFC), orbitofrontal cortex (OFC), supplementary motor area (SMA), temporal pole (TP), insula, anterior/middle cingulate cortex (ACC/MCC), thalamus, hippocampus (HIPP)/parahippocampus (ParaHIPP) and cerebellum. Correlation analysis revealed significant correlations between ReHo values in certain regions of interest (ROIs) and FD symptom severity and/or duration: positive correlations between the dmPFC, pACC and symptom severity, and between the MCC, OFC, insula, TP and FD duration. Conclusions These findings indicated that significantly distinct patterns existed between FD patients and HCs during the resting-state, which could expand our understanding of the neural basis of FD. Meanwhile, our results possibly showed potential feasibility of functional magnetic resonance imaging diagnostic assay for FD. PMID:23874543

  12. A technique for measuring vertically and horizontally polarized microwave brightness temperatures using electronic polarization-basis rotation

    NASA Technical Reports Server (NTRS)

    Gasiewski, Albin J.

    1992-01-01

    This technique for electronically rotating the polarization basis of an orthogonal-linear polarization radiometer is based on the measurement of the first three feedhorn Stokes parameters, along with the subsequent transformation of this measured Stokes vector into a rotated coordinate frame. The technique requires an accurate measurement of the cross-correlation between the two orthogonal feedhorn modes, for which an innovative polarized calibration load was developed. The experimental portion of this investigation consisted of a proof-of-concept demonstration of the technique of electronic polarization basis rotation (EPBR) using a ground-based 90-GHz dual orthogonal-linear polarization radiometer. Practical calibration algorithms for ground-, aircraft-, and space-based instruments were identified and tested. The theoretical effort consisted of radiative transfer modeling using the planar-stratified numerical model described in Gasiewski and Staelin (1990).
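    The basis-rotation step itself is simple linear algebra on the first three Stokes parameters. A sketch under the common convention I = Tv + Th, Q = Tv - Th (sign conventions vary between instruments, so treat this as illustrative):

```python
import numpy as np

def rotate_pol_basis(Tv, Th, U, phi):
    """Rotate an orthogonal-linear polarization basis by angle phi
    (radians) using the first three Stokes parameters.
    Convention assumed here: I = Tv + Th, Q = Tv - Th."""
    I = Tv + Th
    Q = Tv - Th
    c, s = np.cos(2 * phi), np.sin(2 * phi)
    Qr = Q * c + U * s          # Stokes Q in the rotated frame
    return (I + Qr) / 2, (I - Qr) / 2   # Tv', Th'

Tv2, Th2 = rotate_pol_basis(Tv=250.0, Th=200.0, U=0.0, phi=np.pi / 2)
print(Tv2, Th2)   # a 90-degree rotation swaps the two channels
```

    Total power I is invariant under the rotation, which is one practical consistency check on the calibrated cross-correlation measurement.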

  13. In-depth investigation of enzymatic hydrolysis of biomass wastes based on three major components: Cellulose, hemicellulose and lignin.

    PubMed

    Lin, Lili; Yan, Rong; Liu, Yongqiang; Jiang, Wenju

    2010-11-01

    Artificial biomass samples based on three biomass components (cellulose, hemicellulose and lignin) were developed on the basis of a simplex-lattice approach. Together with a natural biomass sample, they were employed in enzymatic hydrolysis research. Different combinations of two commercial enzymes (ACCELLERASE 1500 and OPTIMASH BG) showed a potential to hydrolyze hemicellulose completely. Negligible interactions among the three components were observed, and the enzyme ACCELLERASE 1500 was shown to bind lignin only weakly. On this basis, a multiple linear-regression equation was established for predicting the reducing sugar yield based on the component proportions in a biomass. The hemicellulose and cellulose in a biomass sample were found to have different contributions in staged hydrolysis at different time periods. Furthermore, the hydrolysis of rice straw was conducted to validate the computation approach through considerations of alkaline solution pretreatment and combined enzyme function, so as to better understand the nature of biomass hydrolysis from the aspect of the three biomass components.
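    The multiple linear-regression step can be sketched as a no-intercept least-squares fit of yield against component proportions, which is appropriate when, as the study observed, interactions among the components are negligible. All numbers below are invented for illustration:

```python
import numpy as np

# proportions of cellulose, hemicellulose, lignin (each row sums to 1,
# a simplex-lattice design), with made-up reducing-sugar yields
P = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.5, 0.5, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 0.5, 0.5],
              [1/3, 1/3, 1/3]])
yield_ = np.array([0.80, 0.60, 0.02, 0.70, 0.41, 0.31, 0.47])

# no-intercept linear model: yield ~ b_c*c + b_h*h + b_l*l,
# valid when component interactions are negligible
b, *_ = np.linalg.lstsq(P, yield_, rcond=None)
pred = P @ b
print(b)   # per-component contributions to the sugar yield
```

    The fitted coefficients recover the pure-component yields almost exactly because the mixture rows were constructed to be (nearly) additive.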

  14. New method for solving inductive electric fields in the non-uniformly conducting ionosphere

    NASA Astrophysics Data System (ADS)

    Vanhamäki, H.; Amm, O.; Viljanen, A.

    2006-10-01

    We present a new calculation method for solving inductive electric fields in the ionosphere. The time series of the potential part of the ionospheric electric field, together with the Hall and Pedersen conductances serves as the input to this method. The output is the time series of the induced rotational part of the ionospheric electric field. The calculation method works in the time-domain and can be used with non-uniform, time-dependent conductances. In addition, no particular symmetry requirements are imposed on the input potential electric field. The presented method makes use of special non-local vector basis functions called the Cartesian Elementary Current Systems (CECS). This vector basis offers a convenient way of representing curl-free and divergence-free parts of 2-dimensional vector fields and makes it possible to solve the induction problem using simple linear algebra. The new calculation method is validated by comparing it with previously published results for Alfvén wave reflection from a uniformly conducting ionosphere.

  15. Comparison of Flux-Surface Aligned Curvilinear Coordinate Systems and Neoclassical Magnetic Field Predictions

    NASA Astrophysics Data System (ADS)

    Collart, T. G.; Stacey, W. M.

    2015-11-01

    Several methods are presented for extending the traditional analytic ``circular'' representation of flux-surface aligned curvilinear coordinate systems to more accurately describe equilibrium plasma geometry and magnetic fields in DIII-D. The formalism originally presented by Miller is extended to include different poloidal variations in the upper and lower hemispheres. A coordinate system based on separate Fourier expansions of major radius and vertical position greatly improves accuracy in edge plasma structure representation. Scale factors and basis vectors for a system formed by expanding the circular model minor radius can be represented using linear combinations of Fourier basis functions. A general method for coordinate system orthogonalization is presented and applied to all curvilinear models. A formalism for the magnetic field structure in these curvilinear models is presented, and the resulting magnetic field predictions are compared against calculations performed in a Cartesian system using an experimentally based EFIT prediction for the Grad-Shafranov equilibrium. Supported by: US DOE under DE-FG02-00ER54538.

  16. Meshless Local Petrov-Galerkin Euler-Bernoulli Beam Problems: A Radial Basis Function Approach

    NASA Technical Reports Server (NTRS)

    Raju, I. S.; Phillips, D. R.; Krishnamurthy, T.

    2003-01-01

    A radial basis function implementation of the meshless local Petrov-Galerkin (MLPG) method is presented to study Euler-Bernoulli beam problems. Radial basis functions, rather than generalized moving least squares (GMLS) interpolations, are used to develop the trial functions. This choice yields a computationally simpler method as fewer matrix inversions and multiplications are required than when GMLS interpolations are used. Test functions are chosen as simple weight functions as in the conventional MLPG method. Compactly and non-compactly supported radial basis functions are considered. The non-compactly supported cubic radial basis function is found to perform very well. Results obtained from the radial basis MLPG method are comparable to those obtained using the conventional MLPG method for mixed boundary value problems and problems with discontinuous loading conditions.
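    The behavior of the non-compactly supported cubic radial basis function can be illustrated with a small 1D interpolation problem. The standard linear-polynomial augmentation is included because phi(r) = r^3 is only conditionally positive definite; the test function is merely beam-like, not one of the paper's benchmark problems:

```python
import numpy as np

def cubic_rbf_interp(x_nodes, f_nodes, x_eval):
    """Interpolation with the non-compactly supported cubic RBF
    phi(r) = r**3, augmented with a linear polynomial (standard for
    conditionally positive definite kernels)."""
    n = len(x_nodes)
    A = np.abs(x_nodes[:, None] - x_nodes[None, :]) ** 3
    P = np.c_[np.ones(n), x_nodes]
    M = np.block([[A, P], [P.T, np.zeros((2, 2))]])
    sol = np.linalg.solve(M, np.r_[f_nodes, 0.0, 0.0])
    c, d = sol[:n], sol[n:]
    B = np.abs(x_eval[:, None] - x_nodes[None, :]) ** 3
    return B @ c + np.c_[np.ones(len(x_eval)), x_eval] @ d

x_nodes = np.linspace(0.0, 1.0, 9)
f = lambda x: x ** 2 * (3 - 2 * x)     # beam-deflection-like test shape
vals = cubic_rbf_interp(x_nodes, f(x_nodes), x_nodes)
print(np.max(np.abs(vals - f(x_nodes))))   # ~0: the nodes are interpolated
```

    In 1D this interpolant is closely related to a cubic spline, which is one intuition for why the cubic RBF performs well in the beam problems above.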

  17. Electrostatic turbulence in the earth's central plasma sheet produced by multiple-ring ion distributions

    NASA Technical Reports Server (NTRS)

    Huba, J. D.; Chen, J.; Anderson, R. R.

    1992-01-01

    Attention is given to a mechanism to generate a broad spectrum of electrostatic turbulence in the quiet time central plasma sheet (CPS) plasma. It is shown theoretically that multiple-ring ion distributions can generate short-wavelength (less than about 1), electrostatic turbulence with frequencies less than about kVj, where Vj is the velocity of the jth ring. On the basis of a set of parameters from measurements made in the CPS, it is found that electrostatic turbulence can be generated with wavenumbers in the range of 0.02 to 1.0, with real frequencies in the range of 0 to 10, and with linear growth rates greater than 0.01 over a broad range of angles relative to the magnetic field (5-90 deg). These theoretical results are compared with wave data from ISEE 1 using an ion distribution function exhibiting multiple-ring structures observed at the same time. The theoretical results in the linear regime are found to be consistent with the wave data.

  18. A high order accurate finite element algorithm for high Reynolds number flow prediction

    NASA Technical Reports Server (NTRS)

    Baker, A. J.

    1978-01-01

    A Galerkin-weighted residuals formulation is employed to establish an implicit finite element solution algorithm for generally nonlinear initial-boundary value problems. Solution accuracy, and convergence rate with discretization refinement, are quantified in several error norms by a systematic study of numerical solutions to several nonlinear parabolic and a hyperbolic partial differential equation characteristic of the equations governing fluid flows. Solutions are generated using linear, quadratic and cubic basis functions selectively. Richardson extrapolation is employed to generate a higher-order accurate solution to facilitate isolation of truncation error in all norms. Extension of the mathematical theory underlying accuracy and convergence concepts for linear elliptic equations is predicted for equations characteristic of laminar and turbulent fluid flows at nonmodest Reynolds number. The nondiagonal initial-value matrix structure introduced by the finite element theory is found to be intrinsic to improved solution accuracy and convergence. A factored Jacobian iteration algorithm is derived and evaluated to yield a consequential reduction in both computer storage and execution CPU requirements while retaining solution accuracy.
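    Richardson extrapolation as used here, combining solutions at two grid spacings for a scheme of known order p to cancel the leading truncation error term, can be sketched on a toy finite-difference problem (the derivative of exp, not one of the abstract's flow problems):

```python
import numpy as np

def richardson(coarse, fine, p, r=2.0):
    """Richardson extrapolation: combine solutions at spacings h and h/r
    for a scheme of order p to cancel the leading error term."""
    return fine + (fine - coarse) / (r ** p - 1)

# second-order central difference approximation of f'(x) at x = 1
f, x = np.exp, 1.0
def dcentral(h):
    return (f(x + h) - f(x - h)) / (2 * h)

coarse, fine = dcentral(0.1), dcentral(0.05)
extrap = richardson(coarse, fine, p=2)
exact = np.exp(1.0)
print(abs(fine - exact), abs(extrap - exact))  # extrapolation is far closer
```

    The extrapolated value is fourth-order accurate, so the remaining error is orders of magnitude below either raw solution, which is what makes it useful for isolating truncation error.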

  19. Fault-tolerant optimised tracking control for unknown discrete-time linear systems using a combined reinforcement learning and residual compensation methodology

    NASA Astrophysics Data System (ADS)

    Han, Ke-Zhen; Feng, Jian; Cui, Xiaohong

    2017-10-01

    This paper considers the fault-tolerant optimised tracking control (FTOTC) problem for unknown discrete-time linear systems. A research scheme is proposed on the basis of data-based parity space identification, reinforcement learning and residual compensation techniques. The main characteristic of this research scheme lies in the parity-space-identification-based simultaneous tracking control and residual compensation. The technical approach consists of four main components: apply a subspace-aided method to design an observer-based residual generator; use a reinforcement Q-learning approach to solve the optimised tracking control policy; rely on robust H∞ theory to achieve noise attenuation; adopt fault estimation triggered by the residual generator to perform fault compensation. To clarify the design and implementation procedures, an integrated algorithm is further constructed to link up these four functional units. Detailed analysis and proofs are subsequently given to explain the guaranteed FTOTC performance of the proposed scheme. Finally, a case simulation is provided to verify its effectiveness.

  20. DataView: a computational visualisation system for multidisciplinary design and analysis

    NASA Astrophysics Data System (ADS)

    Wang, Chengen

    2016-01-01

    Rapidly processing raw data and effectively extracting underlying information from huge volumes of multivariate data have become essential to all decision-making processes in sectors like finance, government, medical care, climate analysis, industry and science. Remarkably, visualisation is recognised as a fundamental technology that underpins human comprehension, cognition and utilisation of burgeoning amounts of heterogeneous data. This paper presents a computational visualisation system, named DataView, which has been developed for graphically displaying and capturing outcomes of multiphysics problem-solvers widely used in engineering fields. The DataView system is functionally composed of techniques for table/diagram representation, and graphical illustration of scalar, vector and tensor fields. The field visualisation techniques are implemented on the basis of a range of linear and non-linear meshes, which flexibly adapt to the disparate data representation schemas adopted by a variety of disciplinary problem-solvers. The visualisation system has been successfully applied to a number of engineering problems, of which some illustrations are presented to demonstrate the effectiveness of the visualisation techniques.

  1. Memory sparing, fast scattering formalism for rigorous diffraction modeling

    NASA Astrophysics Data System (ADS)

    Iff, W.; Kämpfe, T.; Jourlin, Y.; Tishchenko, A. V.

    2017-07-01

    The basics and algorithmic steps of a novel scattering formalism suited for memory sparing and fast electromagnetic calculations are presented. The formalism, called ‘S-vector algorithm’ (by analogy with the known scattering-matrix algorithm), allows the calculation of the collective scattering spectra of individual layered micro-structured scattering objects. A rigorous method of linear complexity is applied to model the scattering at individual layers; here the generalized source method (GSM) resorting to Fourier harmonics as basis functions is used as one possible method of linear complexity. The concatenation of the individual scattering events can be achieved sequentially or in parallel, both having pros and cons. The present development will largely concentrate on a consecutive approach based on the multiple reflection series. The latter will be reformulated into an implicit formalism which will be associated with an iterative solver, resulting in improved convergence. The examples will first refer to 1D grating diffraction for the sake of simplicity and intelligibility, with a final 2D application example.
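    The multiple reflection series and its implicit reformulation can be illustrated with a toy linear operator: the explicit series sums x = b + Ab + A^2 b + ..., while the implicit form x = b + Ax is handed to an iterative solver. Here a small random contraction matrix stands in for the scattering operator; none of this reproduces the GSM machinery itself:

```python
import numpy as np

# toy "interaction" operator scaled so its spectral radius is well
# below 1, guaranteeing the multiple reflection (Neumann) series converges
rng = np.random.default_rng(2)
A = 0.3 * rng.standard_normal((6, 6)) / np.sqrt(6)
b = rng.standard_normal(6)

# explicit multiple reflection series: x = b + A b + A^2 b + ...
x_series = np.zeros(6)
term = b.copy()
for _ in range(200):
    x_series += term
    term = A @ term

# the same sum written implicitly, x = b + A x, solved by
# fixed-point iteration (the reformulation given to an iterative solver)
x_iter = np.zeros(6)
for _ in range(200):
    x_iter = b + A @ x_iter

x_direct = np.linalg.solve(np.eye(6) - A, b)
print(np.max(np.abs(x_series - x_direct)), np.max(np.abs(x_iter - x_direct)))
```

    Both formulations converge to the direct solve of (I - A)x = b; the practical gain of the implicit form is that stronger iterative solvers than plain fixed-point iteration can be applied, improving convergence as the abstract notes.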

  2. 3D-MHD Simulations of the Madison Dynamo Experiment

    NASA Astrophysics Data System (ADS)

    Bayliss, R. A.; Forest, C. B.; Wright, J. C.; O'Connell, R.

    2003-10-01

    Growth, saturation and turbulent evolution of the Madison dynamo experiment are investigated numerically using a 3-D pseudo-spectral simulation of the MHD equations; results of the simulations are used to predict behavior of the experiment. The code solves the self-consistent full evolution of the magnetic and velocity fields. The code uses a spectral representation via spherical harmonic basis functions of the vector fields in longitude and latitude, and fourth order finite differences in the radial direction. The magnetic field evolution has been benchmarked against the laminar kinematic dynamo predicted by M.L. Dudley and R.W. James [Proc. R. Soc. Lond. A 425, 407-429 (1989)]. Initial results indicate that the magnetic field saturates when the backreaction of the induced field modifies the velocity field until it is no longer linearly unstable, suggesting that non-linear terms are necessary to explain the resulting state. Saturation and self-excitation depend in detail upon the magnetic Prandtl number.

  3. A new "Logicle" display method avoids deceptive effects of logarithmic scaling for low signals and compensated data.

    PubMed

    Parks, David R; Roederer, Mario; Moore, Wayne A

    2006-06-01

    In immunofluorescence measurements and most other flow cytometry applications, fluorescence signals of interest can range down to essentially zero. After fluorescence compensation, some cell populations will have low means and include events with negative data values. Logarithmic presentation has been very useful in providing informative displays of wide-ranging flow cytometry data, but it fails to adequately display cell populations with low means and high variances and, in particular, offers no way to include negative data values. This has led to a great deal of difficulty in interpreting and understanding flow cytometry data, has often resulted in incorrect delineation of cell populations, and has led many people to question the correctness of compensation computations that were, in fact, correct. We identified a set of criteria for creating data visualization methods that accommodate the scaling difficulties presented by flow cytometry data. On the basis of these, we developed a new data visualization method that provides important advantages over linear or logarithmic scaling for display of flow cytometry data, a scaling we refer to as "Logicle" scaling. Logicle functions represent a particular generalization of the hyperbolic sine function with one more adjustable parameter than linear or logarithmic functions. Finally, we developed methods for objectively and automatically selecting an appropriate value for this parameter. The Logicle display method provides more complete, appropriate, and readily interpretable representations of data that includes populations with low-to-zero means, including distributions resulting from fluorescence compensation procedures, than can be produced using either logarithmic or linear displays. The method includes a specific algorithm for evaluating actual data distributions and deriving parameters of the Logicle scaling function appropriate for optimal display of that data. 
It is critical to note that Logicle visualization does not change the data values or the descriptive statistics computed from them. Copyright 2006 International Society for Analytical Cytology.
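The published Logicle function is defined only implicitly, so as a rough illustration, here is a minimal numpy sketch of a closely related display scale from the same arcsinh (biexponential-like) family: linear near zero, logarithmic for large values, and defined for negative values. The `width` parameter plays the role of the Logicle's extra adjustable parameter; the function and parameter values are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def arcsinh_scale(x, width=100.0, decades=4.5):
    """Biexponential-like display scale: quasi-linear near zero,
    logarithmic for large |x|, and finite for negative values.
    `width` sets the size of the linear region around zero (the
    analogue of the Logicle's extra adjustable parameter)."""
    top = 10.0 ** decades
    # Normalize so that `top` maps to 1.0 on the display axis.
    return np.arcsinh(np.asarray(x) / width) / np.arcsinh(top / width)

# Negative, zero, and large values all receive finite display
# coordinates, unlike a plain log scale.
vals = np.array([-500.0, 0.0, 500.0, 10_000.0, 100_000.0])
disp = arcsinh_scale(vals)
```

Like the Logicle transform, this scaling leaves the underlying data values untouched; it only changes the display coordinate.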

  4. A radial basis function Galerkin method for inhomogeneous nonlocal diffusion

    DOE PAGES

    Lehoucq, Richard B.; Rowe, Stephen T.

    2016-02-01

    We introduce a discretization for a nonlocal diffusion problem using a localized basis of radial basis functions. The stiffness matrix entries are assembled by a special quadrature routine unique to the localized basis. Combining the quadrature method with the localized basis produces a well-conditioned, sparse, symmetric positive definite stiffness matrix. We demonstrate that both the continuum and discrete problems are well-posed and present numerical results for the convergence behavior of the radial basis function method. Finally, we explore approximating the solution to anisotropic differential equations by solving anisotropic nonlocal integral equations using the radial basis function method.
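As a generic illustration of a localized radial basis (not the paper's nonlocal-diffusion assembly), a compactly supported Wendland function yields a sparse, symmetric positive definite system matrix; the centers, shape parameter, and test function below are illustrative choices:

```python
import numpy as np

def wendland_c2(r, eps=1.0):
    """Compactly supported Wendland C2 radial function: positive
    definite in up to three dimensions and identically zero for
    eps*r >= 1, which makes the assembled matrix sparse."""
    s = np.clip(eps * r, 0.0, 1.0)
    return (1.0 - s) ** 4 * (4.0 * s + 1.0)

# Centers on a 1-D grid; pairwise distances give the system matrix.
x = np.linspace(0.0, 1.0, 40)
r = np.abs(x[:, None] - x[None, :])
A = wendland_c2(r, eps=5.0)

# Symmetric positive definite: Cholesky factorization succeeds.
np.linalg.cholesky(A)

# Interpolate a smooth test function by solving A c = f.
f = np.sin(2 * np.pi * x)
c = np.linalg.solve(A, f)
```

The compact support (radius 1/eps) is what produces the sparsity the abstract emphasizes; a globally supported basis such as Gaussians would give a dense matrix.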

  5. Fast, exact k-space sample density compensation for trajectories composed of rotationally symmetric segments, and the SNR-optimized image reconstruction from non-Cartesian samples.

    PubMed

    Mitsouras, Dimitris; Mulkern, Robert V; Rybicki, Frank J

    2008-08-01

    A recently developed method for exact density compensation of non-uniformly arranged samples relies on the analytically known cross-correlations of Fourier basis functions corresponding to the traced k-space trajectory. This method produces a linear system whose solution represents compensated samples that normalize the contribution of each independent element of information that can be expressed by the underlying trajectory. Unfortunately, linear system-based density compensation approaches quickly become computationally demanding with increasing number of samples (i.e., image resolution). Here, it is shown that when a trajectory is composed of rotationally symmetric interleaves, such as spiral and PROPELLER trajectories, this cross-correlations method leads to a highly simplified system of equations. Specifically, it is shown that the system matrix is circulant block-Toeplitz, so that the linear system is easily block-diagonalized. The method is described and demonstrated for 32-way interleaved spiral trajectories designed for 256 image matrices; samples are compensated non-iteratively in a few seconds by solving the small independent block-diagonalized linear systems in parallel. Because the method is exact and considers all the interactions between all acquired samples, up to a 10% reduction in reconstruction error and up to a 30% increase in signal-to-noise ratio are achieved compared to standard density compensation methods. (c) 2008 Wiley-Liss, Inc.
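The block-diagonalization exploited above can be illustrated generically. The toy sketch below (an illustration, not the paper's MRI reconstruction code) solves a block-circulant system by taking a DFT across the block index, which decouples the large m*p system into m small independent p-by-p systems:

```python
import numpy as np

def solve_block_circulant(blocks, b):
    """Solve C x = b, where C is block-circulant with block (i, j)
    equal to blocks[(i - j) % m] (blocks has shape (m, p, p)) and b
    has shape (m, p).  The DFT across the block index turns the big
    m*p system into m independent p-by-p systems."""
    C_hat = np.fft.fft(blocks, axis=0)            # (m, p, p) diagonal blocks
    b_hat = np.fft.fft(b, axis=0)                 # (m, p)
    x_hat = np.linalg.solve(C_hat, b_hat[..., None])[..., 0]
    return np.fft.ifft(x_hat, axis=0).real

# Random well-conditioned example.
rng = np.random.default_rng(0)
m, p = 8, 3
blocks = rng.standard_normal((m, p, p))
blocks[0] += 10 * np.eye(p)                       # dominant diagonal block
b = rng.standard_normal((m, p))
x = solve_block_circulant(blocks, b)

# Dense block-circulant matrix for cross-checking.
C = np.block([[blocks[(i - j) % m] for j in range(m)] for i in range(m)])
```

The cost drops from solving one (m*p)-by-(m*p) system to m independent p-by-p solves plus FFTs, which is exactly why the paper's 32-way symmetric interleaves make exact density compensation fast.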

  6. Reflected ray retrieval from radio occultation data using radio holographic filtering of wave fields in ray space

    NASA Astrophysics Data System (ADS)

    Gorbunov, Michael E.; Cardellach, Estel; Lauritsen, Kent B.

    2018-03-01

    Linear and non-linear representations of wave fields constitute the basis of modern algorithms for analysis of radio occultation (RO) data. Linear representations are implemented by Fourier Integral Operators, which allow for high-resolution retrieval of bending angles. Non-linear representations include Wigner Distribution Function (WDF), which equals the pseudo-density of energy in the ray space. Representations allow for filtering wave fields by suppressing some areas of the ray space and mapping the field back from the transformed space to the initial one. We apply this technique to the retrieval of reflected rays from RO observations. The use of reflected rays may increase the accuracy of the retrieval of the atmospheric refractivity. Reflected rays can be identified by the visual inspection of WDF or spectrogram plots. Numerous examples from COSMIC data indicate that reflections are mostly observed over oceans or snow, in particular over Antarctica. We introduce the reflection index that characterizes the relative intensity of the reflected ray with respect to the direct ray. The index allows for the automatic identification of events with reflections. We use the radio holographic estimate of the errors of the retrieved bending angle profiles of reflected rays. A comparison of indices evaluated for a large base of events including the visual identification of reflections indicated a good agreement with our definition of reflection index.

  7. Handy elementary algebraic properties of the geometry of entanglement

    NASA Astrophysics Data System (ADS)

    Blair, Howard A.; Alsing, Paul M.

    2013-05-01

    The space of separable states of an n-component multipartite quantum system is a hyperbolic surface, which we call the separation surface, within the exponentially high-dimensional linear space containing the system's quantum states. A vector in the linear space is representable as an n-dimensional hypermatrix with respect to bases of the component linear spaces. A vector lies on the separation surface iff every determinant of every 2-dimensional, 2-by-2 submatrix of the hypermatrix vanishes. Owing to the extreme interdependence of the 2-by-2 submatrices, this highly rigid constraint can be tested in time asymptotically proportional to d, where d is the dimension of the state space of the system. The constraint on 2-by-2 determinants entails an elementary closed-form formula for a parametric characterization of the entire separation surface with d-1 parameters. The state of a factor of a partially separable state can be calculated in time asymptotically proportional to the dimension of the state space of the component. If all components of the system have approximately the same dimension, the time complexity of calculating a component state as a function of the parameters is asymptotically proportional to the time required to sort the basis. Metric-based entanglement measures of pure states are characterized in terms of the separation hypersurface.

  8. GPU Linear Algebra Libraries and GPGPU Programming for Accelerating MOPAC Semiempirical Quantum Chemistry Calculations.

    PubMed

    Maia, Julio Daniel Carvalho; Urquiza Carvalho, Gabriel Aires; Mangueira, Carlos Peixoto; Santana, Sidney Ramos; Cabral, Lucidio Anjos Formiga; Rocha, Gerd B

    2012-09-11

    In this study, we present some modifications in the semiempirical quantum chemistry MOPAC2009 code that accelerate single-point energy calculations (1SCF) of medium-size (up to 2500 atoms) molecular systems using GPU coprocessors and multithreaded shared-memory CPUs. Our modifications consisted of using a combination of highly optimized linear algebra libraries for both CPU (LAPACK and BLAS from Intel MKL) and GPU (MAGMA and CUBLAS) to hasten time-consuming parts of MOPAC such as the pseudodiagonalization, full diagonalization, and density matrix assembling. We have shown that it is possible to obtain large speedups just by using CPU serial linear algebra libraries in the MOPAC code. As a special case, we show a speedup of up to 14 times for a methanol simulation box containing 2400 atoms and 4800 basis functions, with even greater gains in performance when using multithreaded CPUs (2.1 times in relation to the single-threaded CPU code using linear algebra libraries) and GPUs (3.8 times). This degree of acceleration opens new perspectives for modeling larger structures which appear in inorganic chemistry (such as zeolites and MOFs), biochemistry (such as polysaccharides, small proteins, and DNA fragments), and materials science (such as nanotubes and fullerenes). In addition, we believe that this parallel (GPU-GPU) MOPAC code will make it feasible to use semiempirical methods in lengthy molecular simulations using both hybrid QM/MM and QM/QM potentials.

  9. Intelligent measurement and compensation of linear motor force ripple: a projection-based learning approach in the presence of noise

    NASA Astrophysics Data System (ADS)

    Liu, Yang; Song, Fazhi; Yang, Xiaofeng; Dong, Yue; Tan, Jiubin

    2018-06-01

    Due to their structural simplicity, linear motors are increasingly receiving attention for use in high-velocity and high-precision applications. However, the force ripple, a space-periodic disturbance, degrades the achievable dynamic performance. Conventional force ripple measurement approaches are time-consuming and place stringent requirements on the experimental conditions. In this paper, a novel learning identification algorithm is proposed for intelligent measurement and compensation of the force ripple. Existing identification schemes use the entire error signal to update the force ripple parameters; however, the error induced by noise carries no information about the force ripple and can even degrade the identification process. In this paper, only the most pertinent information in the error signal is used for force ripple identification. First, the effective error signals caused by the reference trajectory and the force ripple are extracted by projecting the overall error signals onto a subspace spanned by the physical model of the linear motor and the sinusoidal model of the force ripple. The time delay in the linear motor is compensated in the basis functions. Then, a data-driven approach is proposed to design the learning gain, balancing the trade-off between convergence speed and robustness against noise. Simulation and experimental results validate the proposed method and confirm its effectiveness and superiority.
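The projection idea can be illustrated in a stripped-down form: least-squares projection of a measured force error onto a sinusoidal basis of position recovers the ripple harmonics while rejecting the noise component outside the subspace. All signal parameters below (pitch, amplitudes, noise level) are hypothetical, and this sketch omits the paper's motor model and learning-gain design:

```python
import numpy as np

rng = np.random.default_rng(1)
pitch = 0.032                      # magnet pitch [m] (hypothetical)
pos = np.linspace(0.0, 0.2, 2000)  # stroke positions [m]

# "Measured" force error: first- and second-harmonic ripple plus noise.
true = np.array([1.5, -0.8, 0.4, 0.2])   # harmonic amplitudes [N]
w = 2 * np.pi / pitch
Phi = np.column_stack([np.sin(w * pos), np.cos(w * pos),
                       np.sin(2 * w * pos), np.cos(2 * w * pos)])
err = Phi @ true + 0.05 * rng.standard_normal(pos.size)

# Projection onto the sinusoidal basis = least-squares fit; the part of
# the error outside span(Phi) (here, the noise) is rejected.
theta, *_ = np.linalg.lstsq(Phi, err, rcond=None)
```

The recovered `theta` approximates the true harmonic amplitudes; in the paper this projection is embedded in an iterative learning scheme rather than a one-shot fit.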

  10. Classical Testing in Functional Linear Models.

    PubMed

    Kong, Dehan; Staicu, Ana-Maria; Maity, Arnab

    2016-01-01

    We extend four tests common in classical regression - Wald, score, likelihood ratio and F tests - to functional linear regression, for testing the null hypothesis, that there is no association between a scalar response and a functional covariate. Using functional principal component analysis, we re-express the functional linear model as a standard linear model, where the effect of the functional covariate can be approximated by a finite linear combination of the functional principal component scores. In this setting, we consider application of the four traditional tests. The proposed testing procedures are investigated theoretically for densely observed functional covariates when the number of principal components diverges. Using the theoretical distribution of the tests under the alternative hypothesis, we develop a procedure for sample size calculation in the context of functional linear regression. The four tests are further compared numerically for both densely and sparsely observed noisy functional data in simulation experiments and using two real data applications.
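The re-expression step can be sketched with numpy alone: functional principal component scores from an SVD of the centered curves, followed by the classical F test of "no association" in the truncated score space. The simulated curves and response below are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
n, t = 200, 50                       # number of curves, grid points
grid = np.linspace(0, 1, t)

# Simulated functional covariate built from three smooth directions, and
# a scalar response loading on the first principal direction.
X = rng.standard_normal((n, 3)) @ np.array(
    [np.sin(np.pi * grid), np.cos(np.pi * grid), np.sin(2 * np.pi * grid)])
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U[:, :3] * s[:3]            # leading FPC scores
y = 0.8 * scores[:, 0] + rng.standard_normal(n)

# Standard linear model in the score space: F test of the null that all
# score coefficients are zero (no association).
Z = np.column_stack([np.ones(n), scores])
beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
rss1 = np.sum((y - Z @ beta) ** 2)
rss0 = np.sum((y - y.mean()) ** 2)
k = scores.shape[1]
F = ((rss0 - rss1) / k) / (rss1 / (n - k - 1))
```

Under the null, F follows an F(k, n-k-1) distribution for a fixed truncation; the paper's theory addresses the harder regime where the number of components diverges.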

  12. A Linear Algebraic Approach to Teaching Interpolation

    ERIC Educational Resources Information Center

    Tassa, Tamir

    2007-01-01

    A novel approach for teaching interpolation in the introductory course in numerical analysis is presented. The interpolation problem is viewed as a problem in linear algebra, whence the various forms of interpolating polynomial are seen as different choices of a basis to the subspace of polynomials of the corresponding degree. This approach…
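The viewpoint can be made concrete in a few lines: choosing the monomial basis turns interpolation into a Vandermonde solve, while in the Lagrange basis the same system matrix would be the identity. A small illustrative example:

```python
import numpy as np

nodes = np.array([0.0, 1.0, 2.0, 3.0])
f = nodes ** 3 - 2 * nodes + 1        # data sampled from x^3 - 2x + 1

# Monomial basis {1, x, x^2, x^3}: coefficients solve a Vandermonde system.
V = np.vander(nodes, increasing=True)
coeffs = np.linalg.solve(V, f)

# Evaluate the interpolant; cubic data must be reproduced exactly.
x = 1.5
p = sum(c * x ** k for k, c in enumerate(coeffs))
```

Different interpolation forms (Newton, Lagrange) correspond to different bases of the same polynomial subspace, so they all produce this same interpolant.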

  13. A Characterization of a Unified Notion of Mathematical Function: The Case of High School Function and Linear Transformation

    ERIC Educational Resources Information Center

    Zandieh, Michelle; Ellis, Jessica; Rasmussen, Chris

    2017-01-01

    As part of a larger study of student understanding of concepts in linear algebra, we interviewed 10 university linear algebra students as to their conceptions of functions from high school algebra and linear transformation from their study of linear algebra. An overarching goal of this study was to examine how linear algebra students see linear…

  14. Introducing Linear Functions: An Alternative Statistical Approach

    ERIC Educational Resources Information Center

    Nolan, Caroline; Herbert, Sandra

    2015-01-01

    The introduction of linear functions is the turning point where many students decide if mathematics is useful or not. This means the role of parameters and variables in linear functions could be considered to be "threshold concepts". There is recognition that linear functions can be taught in context through the exploration of linear…

  15. Magnetization Transfer Ratio Relates to Cognitive Impairment in Normal Elderly

    PubMed Central

    Seiler, Stephan; Pirpamer, Lukas; Hofer, Edith; Duering, Marco; Jouvent, Eric; Fazekas, Franz; Mangin, Jean-Francois; Chabriat, Hugues; Dichgans, Martin; Ropele, Stefan; Schmidt, Reinhold

    2014-01-01

    Magnetization transfer imaging (MTI) can detect microstructural brain tissue changes and may be helpful in determining age-related cerebral damage. We investigated the association between the magnetization transfer ratio (MTR) in gray and white matter (WM) and cognitive functioning in 355 participants of the Austrian stroke prevention family study (ASPS-Fam) aged 38–86 years. MTR maps were generated for the neocortex, deep gray matter structures, WM hyperintensities, and normal appearing WM (NAWM). Adjusted mixed models determined whole brain and lobar cortical MTR to be directly and significantly related to performance on tests of memory, executive function, and motor skills. There existed an almost linear dose-effect relationship. MTR of deep gray matter structures and NAWM correlated to executive functioning. All associations were independent of demographics, vascular risk factors, focal brain lesions, and cortex volume. Further research is needed to understand the basis of this association at the tissue level, and to determine the role of MTR in predicting cognitive decline and dementia. PMID:25309438

  16. Sweet Polymers: Poly(2-ethyl-2-oxazoline) Glycopolymers by Reductive Amination.

    PubMed

    Mees, Maarten A; Effenberg, Christiane; Appelhans, Dietmar; Hoogenboom, Richard

    2016-12-12

    Carbohydrates are important in signaling, energy storage, and metabolism. Depending on their function, carbohydrates can be part of larger structures, such as glycoproteins, glycolipids, or other functionalities (glycoside). To this end, polymers can act as carriers of carbohydrates in so-called glycopolymers, which mimic the multivalent carbohydrate functionalities. We chose a biocompatible poly(2-ethyl-2-oxazoline) (PEtOx) as the basis for making glycopolymers. Via the partial hydrolysis of PEtOx, a copolymer of PEtOx and polyethylenimine (PEI) was obtained; the subsequent reductive amination with the linear forms of glucose and maltose yielded the glycopolymers. The ratios of PEtOx and carbohydrates were varied systematically, and the solution behaviors of the resulting glycoconjugates are discussed. Dynamic light scattering (DLS) revealed that, depending on the carbohydrate ratio, the glycopolymers were either fully water-soluble or formed agglomerates in a temperature-dependent manner. Finally, these polymers were tested for their biological availability by studying their lectin binding ability with Concanavalin A.

  17. Analysis of nystagmus response to a pseudorandom velocity input

    NASA Technical Reports Server (NTRS)

    Lessard, C. S.

    1986-01-01

    Space motion sickness was not reported during the first Apollo missions; however, from Apollo 8 through the current Shuttle and Skylab missions, approximately 50% of the crewmembers have experienced instances of space motion sickness. Space motion sickness, renamed space adaptation syndrome, occurs primarily during the initial period of a mission until habituation takes place. One of NASA's efforts to resolve the space adaptation syndrome is to model the individual's vestibular response, both to provide baseline knowledge and as a possible predictor of an individual's susceptibility to the disorder. This report describes a method to analyze the vestibular system when subjected to a pseudorandom angular velocity input. A sum-of-sinusoids (pseudorandom) input lends itself to analysis by linear frequency methods. Resultant horizontal ocular movements were digitized, filtered, and transformed into the frequency domain. Programs were developed and evaluated to obtain (1) the auto spectra of the input stimulus and resultant ocular response, (2) the cross spectra, (3) the estimated vestibulo-ocular system transfer function gain and phase, and (4) the coherence function between stimulus and response.
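Steps (1)-(4) can be sketched for a synthetic sum-of-sinusoids stimulus. The first-order low-pass plant below is a hypothetical stand-in for the vestibulo-ocular dynamics, and the stimulus frequencies are chosen to fall exactly on FFT bins so the transfer-function estimate at those lines is exact:

```python
import numpy as np

fs = 100.0                               # sample rate [Hz]
tt = np.arange(0, 60, 1 / fs)            # 60 s record
freqs = np.array([0.1, 0.3, 0.7, 1.3])   # stimulus frequencies [Hz]

# Pseudorandom (sum-of-sinusoids) velocity stimulus.
x = sum(np.sin(2 * np.pi * f0 * tt) for f0 in freqs)

# Hypothetical plant: first-order low-pass, H(f) = 1 / (1 + j f / fc),
# applied in the frequency domain.
fc = 0.5
X = np.fft.rfft(x)
fgrid = np.fft.rfftfreq(tt.size, 1 / fs)
y = np.fft.irfft(X / (1 + 1j * fgrid / fc), n=tt.size)

# Transfer-function estimate at the stimulus lines: H_hat = Y / X there.
idx = [np.argmin(np.abs(fgrid - f0)) for f0 in freqs]
H_hat = np.fft.rfft(y)[idx] / np.fft.rfft(x)[idx]
gain, phase = np.abs(H_hat), np.angle(H_hat)
```

With real nystagmus data the ratio would be formed from averaged auto and cross spectra, and the coherence function would flag frequencies where the noise makes the estimate unreliable.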

  18. Axially deformed solution of the Skyrme-Hartree-Fock-Bogoliubov equations using the transformed harmonic oscillator basis (II) HFBTHO v2.00d: A new version of the program

    NASA Astrophysics Data System (ADS)

    Stoitsov, M. V.; Schunck, N.; Kortelainen, M.; Michel, N.; Nam, H.; Olsen, E.; Sarich, J.; Wild, S.

    2013-06-01

    We describe the new version 2.00d of the code HFBTHO that solves the nuclear Skyrme-Hartree-Fock (HF) or Skyrme-Hartree-Fock-Bogoliubov (HFB) problem by using the cylindrical transformed deformed harmonic oscillator basis. In the new version, we have implemented the following features: (i) the modified Broyden method for non-linear problems, (ii) optional breaking of reflection symmetry, (iii) calculation of axial multipole moments, (iv) the finite temperature formalism for the HFB method, (v) the linear constraint method based on the approximation of the Random Phase Approximation (RPA) matrix for multi-constraint calculations, (vi) blocking of quasi-particles in the Equal Filling Approximation (EFA), (vii) the framework for generalized energy density functionals with arbitrary density-dependences, and (viii) shared memory parallelism via OpenMP pragmas. Program summary. Program title: HFBTHO v2.00d. Catalog identifier: ADUI_v2_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADUI_v2_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: GNU General Public License version 3. No. of lines in distributed program, including test data, etc.: 167228. No. of bytes in distributed program, including test data, etc.: 2672156. Distribution format: tar.gz. Programming language: FORTRAN-95. Computer: Intel Pentium-III, Intel Xeon, AMD-Athlon, AMD-Opteron, Cray XT5, Cray XE6. Operating system: UNIX, LINUX, WindowsXP. RAM: 200 Mwords. Word size: 8 bits. Classification: 17.22. Does the new version supersede the previous version?: Yes. Catalog identifier of previous version: ADUI_v1_0. Journal reference of previous version: Comput. Phys. Comm. 167 (2005) 43. Nature of problem: The solution of self-consistent mean-field equations for weakly-bound paired nuclei requires a correct description of the asymptotic properties of nuclear quasi-particle wave functions. 
In the present implementation, this is achieved by using the single-particle wave functions of the transformed harmonic oscillator, which allows for an accurate description of deformation effects and pairing correlations in nuclei arbitrarily close to the particle drip lines. Solution method: The program uses the axial Transformed Harmonic Oscillator (THO) single-particle basis to expand quasi-particle wave functions. It iteratively diagonalizes the Hartree-Fock-Bogoliubov Hamiltonian based on generalized Skyrme-like energy densities and zero-range pairing interactions until a self-consistent solution is found. A previous version of the program was presented in: M.V. Stoitsov, J. Dobaczewski, W. Nazarewicz, P. Ring, Comput. Phys. Commun. 167 (2005) 43-63. Reasons for new version: Version 2.00d of HFBTHO provides a number of new options such as the optional breaking of reflection symmetry, the calculation of axial multipole moments, the finite temperature formalism for the HFB method, optimized multi-constraint calculations, the treatment of odd-even and odd-odd nuclei in the blocking approximation, and the framework for generalized energy density functionals with arbitrary density-dependences. It is also the first version of HFBTHO to contain threading capabilities. Summary of revisions: the modified Broyden method has been implemented; optional breaking of reflection symmetry has been implemented; the calculation of all axial multipole moments up to λ=8 has been implemented; the finite temperature formalism for the HFB method has been implemented; the linear constraint method based on the approximation of the Random Phase Approximation (RPA) matrix for multi-constraint calculations has been implemented; the blocking of quasi-particles in the Equal Filling Approximation (EFA) has been implemented; the framework for generalized energy density functionals with arbitrary density-dependence has been implemented; and shared memory parallelism via OpenMP pragmas has been implemented. 
Restrictions: Axial- and time-reversal symmetries are assumed. Unusual features: The user must have access to the LAPACK subroutines DSYEVD, DSYTRF and DSYTRI, and their dependences, which compute eigenvalues and eigenfunctions of real symmetric matrices, the LAPACK subroutines DGETRI and DGETRF, which invert arbitrary real matrices, and the BLAS routines DCOPY, DSCAL, DGEMM and DGEMV for double-precision linear algebra (or provide another set of subroutines that can perform such tasks). The BLAS and LAPACK subroutines can be obtained from the Netlib Repository at the University of Tennessee, Knoxville: http://netlib2.cs.utk.edu/. Running time: Highly variable, as it depends on the nucleus, size of the basis, requested accuracy, requested configuration, compiler and libraries, and hardware architecture. An order of magnitude would be a few seconds for ground-state configurations in small bases (N≈8-12), to a few minutes in very deformed configurations of a heavy nucleus with a large basis (N>20).

  19. Rayleigh imaging in spectral mammography

    NASA Astrophysics Data System (ADS)

    Berggren, Karl; Danielsson, Mats; Fredenberg, Erik

    2016-03-01

    Spectral imaging is the acquisition of multiple images of an object at different energy spectra. In mammography, dual-energy imaging (spectral imaging with two energy levels) has been investigated for several applications, in particular material decomposition, which allows for quantitative analysis of breast composition and quantitative contrast-enhanced imaging. Material decomposition with dual-energy imaging is based on the assumption that there are two dominant photon interaction effects that determine linear attenuation: the photoelectric effect and Compton scattering. This assumption limits the number of basis materials, i.e., the number of materials that can be differentiated, to two. However, Rayleigh scattering may account for more than 10% of the linear attenuation in the mammography energy range. In this work, we show that a modified version of a scanning multi-slit spectral photon-counting mammography system is able to acquire three images at different spectra and can be used for triple-energy imaging. We further show that triple-energy imaging in combination with the efficient scatter rejection of the system enables measurement of Rayleigh scattering, which adds an additional energy dependency to the linear attenuation and enables material decomposition with three basis materials. Three available basis materials have the potential to improve virtually all applications of spectral imaging.
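With three spectra and three basis materials, the per-pixel material decomposition reduces to a small linear solve: the log attenuations measured at the three spectra equal a 3-by-3 matrix of effective basis-material attenuation coefficients times the material thicknesses. The coefficients below are illustrative placeholders, not measured values:

```python
import numpy as np

# Hypothetical effective linear attenuation coefficients [1/cm] of three
# basis materials (columns) at three acquisition spectra (rows).
M = np.array([[0.90, 0.50, 2.50],
              [0.60, 0.40, 1.20],
              [0.40, 0.35, 0.70]])

t_true = np.array([3.0, 1.5, 0.1])     # basis-material thicknesses [cm]
logs = M @ t_true                      # -ln(I/I0) at the three spectra

# Per-pixel decomposition: a 3x3 linear solve.
t_est = np.linalg.solve(M, logs)
```

With noisy measurements or more than three spectra, a least-squares or maximum-likelihood fit would replace the exact solve; the conditioning of M limits how well similar materials can be separated.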

  20. A new implementation of the CMRH method for solving dense linear systems

    NASA Astrophysics Data System (ADS)

    Heyouni, M.; Sadok, H.

    2008-04-01

    The CMRH method [H. Sadok, Methodes de projections pour les systemes lineaires et non lineaires, Habilitation thesis, University of Lille 1, Lille, France, 1994; H. Sadok, CMRH: A new method for solving nonsymmetric linear systems based on the Hessenberg reduction algorithm, Numer. Algorithms 20 (1999) 303-321] is an algorithm for solving nonsymmetric linear systems in which the Arnoldi component of GMRES is replaced by the Hessenberg process, which generates Krylov basis vectors that are orthogonal to the standard unit basis vectors rather than mutually orthogonal. The iterate is formed from these vectors by solving a small least-squares problem involving a Hessenberg matrix. Like GMRES, this method requires one matrix-vector product per iteration. However, it can be implemented to require half as much arithmetic work and less storage. Moreover, numerical experiments show that this method performs accurately and reduces the residual about as fast as GMRES. With this new implementation, we show that CMRH is the only long-term recurrence method that does not require storing both the entire Krylov basis and the original matrix at the same time, as the GMRES algorithm does. A comparison with Gaussian elimination is provided.
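The Hessenberg process at the heart of CMRH can be sketched as follows; this is a minimal unpivoted variant assuming nonzero pivots (practical CMRH uses pivoting). Each generated vector l_k is orthogonal to the first k standard unit vectors, and the vectors satisfy A L_m = L_{m+1} H:

```python
import numpy as np

def hessenberg_process(A, b, m):
    """Hessenberg process (no pivoting; assumes nonzero pivots): builds
    vectors l_0..l_m with l_k orthogonal to the first k standard unit
    vectors, plus an (m+1)-by-m Hessenberg matrix H with A L_m = L_{m+1} H."""
    n = b.size
    L = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    L[:, 0] = b / b[0]                   # unit "pivot" in entry 0
    for k in range(m):
        u = A @ L[:, k]
        for j in range(k + 1):
            H[j, k] = u[j]               # l_j has a unit pivot in entry j,
            u = u - H[j, k] * L[:, j]    # so this zeroes entry j of u
        H[k + 1, k] = u[k + 1]
        L[:, k + 1] = u / H[k + 1, k]
    return L, H

rng = np.random.default_rng(3)
n, m = 12, 6
A = rng.standard_normal((n, n)) + n * np.eye(n)
b = rng.standard_normal(n)
L, H = hessenberg_process(A, b, m)
```

Because the elimination uses unit basis vectors instead of inner products, each step costs roughly half the arithmetic of Arnoldi's orthogonalization, which is the saving CMRH exploits.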

  1. On the completeness and the linear dependence of the Cartesian multipole series in representing the solution to the Helmholtz equation.

    PubMed

    Liu, Yangfan; Bolton, J Stuart

    2016-08-01

    The (Cartesian) multipole series, i.e., the series comprising monopole, dipoles, quadrupoles, etc., can be used, as an alternative to the spherical or cylindrical wave series, in representing sound fields in a wide range of problems, such as source radiation, sound scattering, etc. The proofs of the completeness of the spherical and cylindrical wave series in these problems are classical results, and it is also generally agreed that the Cartesian multipole series spans the same space as the spherical waves: a rigorous mathematical proof of that statement has, however, not been presented. In the present work, such a proof of the completeness of the Cartesian multipole series, both in two and three dimensions, is given, and the linear dependence relations among different orders of multipoles are discussed, which then allows one to easily extract a basis from the multipole series. In particular, it is concluded that the multipoles comprising the two highest orders in the series form a basis of the whole series, since the multipoles of all the lower source orders can be expressed as a linear combination of that basis.
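The conclusion that the two highest orders form a basis can be seen from the Helmholtz equation itself; the following is a short sketch of the argument (our paraphrase, not the paper's proof):

```latex
% Each Cartesian multipole field is a derivative \partial^{\alpha} G of
% the free-space Green's function G, and G satisfies the Helmholtz
% equation \Delta G = -k^{2} G away from the source, so
\[
  \partial^{\alpha} G
    \;=\; -\frac{1}{k^{2}}
      \bigl(\partial_{x}^{2} + \partial_{y}^{2} + \partial_{z}^{2}\bigr)
      \partial^{\alpha} G .
\]
% Hence every multipole of source order $n$ is a linear combination of
% multipoles of order $n+2$; iterating upward, all lower-order multipoles
% lie in the span of the multipoles of the two highest orders retained.
```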

  2. Dynamical basis sets for algebraic variational calculations in quantum-mechanical scattering theory

    NASA Technical Reports Server (NTRS)

    Sun, Yan; Kouri, Donald J.; Truhlar, Donald G.; Schwenke, David W.

    1990-01-01

    New basis sets are proposed for linear algebraic variational calculations of transition amplitudes in quantum-mechanical scattering problems. These basis sets are hybrids of those that yield the Kohn variational principle (KVP) and those that yield the generalized Newton variational principle (GNVP) when substituted in Schlessinger's stationary expression for the T operator. Trial calculations show that efficiencies almost as great as that of the GNVP and much greater than the KVP can be obtained, even for basis sets with the majority of the members independent of energy.

  3. Molecular structure, vibrational spectra, NBO analysis, first hyperpolarizability, and HOMO-LUMO studies of 2-amino-4-hydroxypyrimidine by density functional method

    NASA Astrophysics Data System (ADS)

    Jeyavijayan, S.

    2015-04-01

    This study is a comparative analysis of the FTIR and FT-Raman spectra of 2-amino-4-hydroxypyrimidine. The total energies of different conformations have been obtained from the DFT (B3LYP) method with the 6-31+G(d,p) and 6-311++G(d,p) basis sets. The barrier of planarity between the most stable and planar forms is also predicted. The molecular structure, vibrational wavenumbers, infrared intensities, and Raman scattering activities were calculated for the molecule using the B3LYP density functional theory (DFT) method. The computed frequencies are scaled using multiple scaling factors to yield good agreement with the observed values. Reliable vibrational assignments were made on the basis of the total energy distribution (TED) along with the scaled quantum mechanical (SQM) method. The stability of the molecule arising from hyperconjugative interactions and charge delocalization has been analyzed using natural bond orbital (NBO) analysis. Non-linear properties such as the electric dipole moment (μ), polarizability (α), and hyperpolarizability (β) of the investigated molecule have been computed using B3LYP quantum chemical calculations. The calculated HOMO and LUMO energies show that charge transfer occurs within the molecule. In addition, the molecular electrostatic potential (MEP), Mulliken charge analysis, and several thermodynamic properties were computed by the DFT method.

  4. Linear spectral response of a Fano-resonant graded-stub filter based on pillar-photonic-crystal waveguides.

    PubMed

    Tokushima, Masatoshi

    2018-02-01

    To achieve high spectral linearity, we developed a Fano-resonant graded-stub filter on the basis of a pillar-photonic-crystal (PhC) waveguide. In a numerical simulation, the availability of a linear region within a peak-to-bottom wavelength span was nearly doubled compared to that of a sinusoidal spectrum, which was experimentally demonstrated with a fabricated silicon-pillar PhC stub filter. The high linearity of this filter is suitable for optical modulators used in multilevel amplitude modulation.

  5. A Block-LU Update for Large-Scale Linear Programming

    DTIC Science & Technology

    1990-01-01

    linear programming problems. Results are given from runs on the Cray Y-MP. 1. Introduction We wish to use the simplex method [Dan63] to solve the...standard linear program, minimize cTx subject to Ax = b, l ≤ x ≤ u, where A is an m by n matrix and c, x, l, u, and b are of appropriate dimension. The simplex...the identity matrix. The basis is used to solve for the search direction y and the dual variables π in the following linear systems: Bky = aq (1.2) and
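
    The two linear systems quoted above, Bk y = aq for the search direction and Bk^T π = c_B for the dual variables, share the same basis matrix, so a single factorization serves both solves. A minimal Python/SciPy sketch with a random toy basis (illustration only, not the paper's block-LU update scheme):

    ```python
    import numpy as np
    from scipy.linalg import lu_factor, lu_solve

    # Toy illustration: each simplex iteration solves two systems with the
    # current basis B_k,
    #   B_k y  = a_q      (search direction)
    #   B_k^T pi = c_B    (dual variables)
    # One LU factorization of B_k serves both solves.
    rng = np.random.default_rng(0)
    m = 5
    B = rng.standard_normal((m, m)) + m * np.eye(m)  # well-conditioned toy basis
    a_q = rng.standard_normal(m)                     # entering column
    c_B = rng.standard_normal(m)                     # basic objective costs

    lu, piv = lu_factor(B)
    y = lu_solve((lu, piv), a_q)            # solves B y = a_q
    pi = lu_solve((lu, piv), c_B, trans=1)  # solves B^T pi = c_B

    assert np.allclose(B @ y, a_q)
    assert np.allclose(B.T @ pi, c_B)
    ```

    The block-LU update studied in the report avoids refactoring B_k from scratch at every iteration; the sketch above only shows the two solves themselves.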

  6. Dusty Pair Plasma—Wave Propagation and Diffusive Transition of Oscillations

    NASA Astrophysics Data System (ADS)

    Atamaniuk, Barbara; Turski, Andrzej J.

    2011-11-01

    The crucial point of the paper is the relation between the equilibrium distributions of plasma species and the type of propagation or diffusive transition of the plasma response to a disturbance. The paper contains a unified treatment of disturbance propagation (transport) in linearized Vlasov electron-positron and fullerene pair plasmas containing charged dust impurities, based on space-time convolution integral equations. Electron-positron-dust/ion (e-p-d/i) plasmas are rather widespread in nature. Space-time responses of multi-component linearized Vlasov plasmas are derived on the basis of multiple integral equations. An initial-value problem for the Vlasov-Poisson/Ampère equations is reduced to a single multiple integral equation, and the solution is expressed in terms of a forcing function and its space-time convolution with the resolvent kernel. The forcing function accounts for the initial disturbance, and the resolvent accounts for the equilibrium velocity distributions of the plasma species. By use of the resolvent equations, time-reversibility, space-reflexivity, and other symmetries are revealed. These symmetries carry over into physical properties of Vlasov pair plasmas, e.g., conservation laws. By properly choosing equilibrium distributions for dusty pair plasmas, we can reduce the resolvent equation to (i) undamped dispersive wave equations and (ii) diffusive transport equations of oscillations.

  7. Efficient parallel architecture for highly coupled real-time linear system applications

    NASA Technical Reports Server (NTRS)

    Carroll, Chester C.; Homaifar, Abdollah; Barua, Soumavo

    1988-01-01

    A systematic procedure is developed for exploiting the parallel constructs of computation in a highly coupled, linear system application. An overall top-down design approach is adopted. Differential equations governing the application under consideration are partitioned into subtasks on the basis of a data flow analysis. The interconnected task units constitute a task graph which has to be computed in every update interval. Multiprocessing concepts utilizing parallel integration algorithms are then applied for efficient task graph execution. A simple scheduling routine is developed to handle task allocation while in the multiprocessor mode. Results of simulation and scheduling are compared on the basis of standard performance indices. Processor timing diagrams are developed on the basis of program output accruing to an optimal set of processors. Basic architectural attributes for implementing the system are discussed together with suggestions for processing element design. Emphasis is placed on flexible architectures capable of accommodating widely varying application specifics.

  8. Trajectory tracking in quadrotor platform by using PD controller and LQR control approach

    NASA Astrophysics Data System (ADS)

    Islam, Maidul; Okasha, Mohamed; Idres, Moumen Mohammad

    2017-11-01

    The purpose of this paper is a comparative evaluation of the performance of two controllers, a Proportional-Derivative (PD) controller and a Linear Quadratic Regulator (LQR), on a Quadrotor dynamic system that is under-actuated and highly nonlinear. As only four states can be controlled at the same time in the Quadrotor, the trajectories are designed on the basis of four states: the three-dimensional position and the rotation about the vertical axis, known as yaw. In this work, both the PD controller and the LQR approach are applied to the nonlinear Quadrotor model to track the trajectories. The LQR controller is designed on the basis of a linearized model of the Quadrotor, since the behavior of the linear and nonlinear models around a nominal operating point is nearly identical. Simulink and MATLAB are used to design the controllers and to evaluate their performance.
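
    The LQR design step described above, gain synthesis on a linearized model, can be sketched for a toy system. The double-integrator "axis" below, standing in for one linearized quadrotor channel, and the weights Q and R are illustrative assumptions, not the paper's quadrotor model:

    ```python
    import numpy as np
    from scipy.linalg import solve_continuous_are

    # Hypothetical linearized single-axis model: position and velocity states,
    # thrust-like input (a double integrator, assumed for illustration).
    A = np.array([[0.0, 1.0],
                  [0.0, 0.0]])
    B = np.array([[0.0],
                  [1.0]])
    Q = np.diag([10.0, 1.0])   # state weights (assumed)
    R = np.array([[1.0]])      # input weight (assumed)

    # Solve the continuous algebraic Riccati equation and form the LQR gain.
    P = solve_continuous_are(A, B, Q, R)
    K = np.linalg.solve(R, B.T @ P)   # optimal feedback u = -K x

    # The closed-loop matrix A - B K must be stable (all eigenvalues in the
    # left half-plane), which is the point of the LQR synthesis.
    eigs = np.linalg.eigvals(A - B @ K)
    assert np.all(eigs.real < 0)
    ```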

  9. Multiple Isoforms of ANRIL in Melanoma Cells: Structural Complexity Suggests Variations in Processing.

    PubMed

    Sarkar, Debina; Oghabian, Ali; Bodiyabadu, Pasani K; Joseph, Wayne R; Leung, Euphemia Y; Finlay, Graeme J; Baguley, Bruce C; Askarian-Amiri, Marjan E

    2017-06-27

    The long non-coding RNA ANRIL, antisense to the CDKN2B locus, is transcribed from a gene that encompasses multiple disease-associated polymorphisms. Despite the identification of multiple isoforms of ANRIL, expression of certain transcripts has been found to be tissue-specific, and the characterisation of ANRIL transcripts remains incomplete. Several functions have been associated with ANRIL. In our judgement, studies on ANRIL functionality are premature pending a more complete appreciation of the profusion of isoforms. We found differential expression of ANRIL exons, which indicates that multiple isoforms exist in melanoma cells. In addition to linear isoforms, we identified circular forms of ANRIL (circANRIL). Further characterisation of circANRIL in two patient-derived metastatic melanoma cell lines (NZM7 and NZM37) revealed the existence of a rich assortment of circular isoforms. Moreover, in the two melanoma cell lines investigated, the complements of circANRIL isoforms were almost completely different. Novel exons were also discovered. We also found that the family of linear ANRIL transcripts was enriched in the nucleus, whilst the circular isoforms were enriched in the cytoplasm, and the two differed markedly in stability. With respect to the variable processing of circANRIL species, bioinformatic analysis indicated that intronic Arthrobacter luteus (Alu) restriction endonuclease inverted repeats and exon skipping were not involved in selection of back-spliced exon junctions. Based on our findings, we hypothesise that "ANRIL" has wholly distinct dual sets of functions in melanoma. This reveals the dynamic nature of the locus and constitutes a basis for investigating the functions of ANRIL in melanoma.

  10. Approaching the basis set limit for DFT calculations using an environment-adapted minimal basis with perturbation theory: Formulation, proof of concept, and a pilot implementation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mao, Yuezhi; Horn, Paul R.; Mardirossian, Narbe

    2016-07-28

    Recently developed density functionals have good accuracy for both thermochemistry (TC) and non-covalent interactions (NC) if very large atomic orbital basis sets are used. To approach the basis set limit with potentially lower computational cost, a new self-consistent field (SCF) scheme is presented that employs minimal adaptive basis (MAB) functions. The MAB functions are optimized on each atomic site by minimizing a surrogate function. High accuracy is obtained by applying a perturbative correction (PC) to the MAB calculation, similar to dual basis approaches. Compared to exact SCF results, using this MAB-SCF (PC) approach with the same large target basis set produces <0.15 kcal/mol root-mean-square deviations for most of the tested TC datasets, and <0.1 kcal/mol for most of the NC datasets. The performance of density functionals near the basis set limit can be even better reproduced. With further improvement to its implementation, MAB-SCF (PC) is a promising lower-cost substitute for conventional large-basis calculations as a method to approach the basis set limit of modern density functionals.

  11. Polarized photon scattering off 52Cr: Determining the parity of J =1 states

    NASA Astrophysics Data System (ADS)

    Krishichayan; Bhike, Megha; Tornow, W.; Rusev, G.; Tonchev, A. P.; Tsoneva, N.; Lenske, H.

    2015-04-01

    The photoresponse of 52Cr has been investigated in the energy range of 5.0-9.5 MeV using the photon scattering technique at the HIγS facility of TUNL, complementing previous work with unpolarized bremsstrahlung photon beams at the Darmstadt linear electron accelerator. The unambiguous parity determinations of the observed J = 1 states provide the basis needed to better understand the structure of E1 and M1 excitations. Theoretical calculations using the quasiparticle phonon model, incorporating self-consistent energy-density functional theory, were performed to investigate the fragmentation pattern of the dipole strength below and around the neutron-emission threshold. These results compare very well with the experimental values.

  12. Time-dependent mean-field theory for x-ray near-edge spectroscopy

    NASA Astrophysics Data System (ADS)

    Bertsch, G. F.; Lee, A. J.

    2014-02-01

    We derive equations of motion for calculating the near-edge x-ray absorption spectrum in molecules and condensed matter, based on a two-determinant approximation and Dirac's variational principle. The theory provides an exact solution for the linear response when the Hamiltonian or energy functional has only diagonal interactions in some basis. We numerically solve the equations to compare with the Mahan-Nozières-De Dominicis theory of the edge singularity in metallic conductors. Our extracted power-law exponents are similar to those of the analytic theory, but are not in quantitative agreement. The calculational method can be readily generalized to treat Kohn-Sham Hamiltonians with electron-electron interactions derived from correlation-exchange potentials.

  13. New algorithms for solving high even-order differential equations using third and fourth Chebyshev-Galerkin methods

    NASA Astrophysics Data System (ADS)

    Doha, E. H.; Abd-Elhameed, W. M.; Bassuony, M. A.

    2013-03-01

    This paper is concerned with spectral Galerkin algorithms for solving high even-order two point boundary value problems in one dimension subject to homogeneous and nonhomogeneous boundary conditions. The proposed algorithms are extended to solve two-dimensional high even-order differential equations. The key to the efficiency of these algorithms is to construct compact combinations of Chebyshev polynomials of the third and fourth kinds as basis functions. The algorithms lead to linear systems with specially structured matrices that can be efficiently inverted. Numerical examples are included to demonstrate the validity and applicability of the proposed algorithms, and some comparisons with some other methods are made.
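
    As background on the basis functions named above: Chebyshev polynomials of the third (Vn) and fourth (Wn) kinds obey the same three-term recurrence as the first two kinds, differing only in the starting values. A small sketch using the standard textbook formulas (not the paper's compact combinations or the structured Galerkin matrices):

    ```python
    import numpy as np

    # Chebyshev polynomials of the third (V_n) and fourth (W_n) kinds via the
    # shared recurrence p_{n+1}(x) = 2 x p_n(x) - p_{n-1}(x), with starting
    # values V_0 = W_0 = 1, V_1 = 2x - 1, W_1 = 2x + 1.
    def cheb_third_fourth(x, n):
        V = [np.ones_like(x), 2 * x - 1.0]
        W = [np.ones_like(x), 2 * x + 1.0]
        for k in range(1, n):
            V.append(2 * x * V[k] - V[k - 1])
            W.append(2 * x * W[k] - W[k - 1])
        return V[n], W[n]

    # Check against the closed trigonometric forms on x = cos(t):
    #   V_n(cos t) = cos((n + 1/2) t) / cos(t / 2)
    #   W_n(cos t) = sin((n + 1/2) t) / sin(t / 2)
    t = np.linspace(0.1, 3.0, 50)
    x = np.cos(t)
    V5, W5 = cheb_third_fourth(x, 5)
    assert np.allclose(V5, np.cos(5.5 * t) / np.cos(t / 2))
    assert np.allclose(W5, np.sin(5.5 * t) / np.sin(t / 2))
    ```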

  14. Genetic Adaptation to Climate in White Spruce Involves Small to Moderate Allele Frequency Shifts in Functionally Diverse Genes.

    PubMed

    Hornoy, Benjamin; Pavy, Nathalie; Gérardi, Sébastien; Beaulieu, Jean; Bousquet, Jean

    2015-11-11

    Understanding the genetic basis of adaptation to climate is of paramount importance for preserving and managing genetic diversity in plants in a context of climate change. Yet, this objective has been addressed mainly in short-lived model species. Thus, expanding knowledge to nonmodel species with contrasting life histories, such as forest trees, appears necessary. To uncover the genetic basis of adaptation to climate in the widely distributed boreal conifer white spruce (Picea glauca), an environmental association study was conducted using 11,085 single nucleotide polymorphisms representing 7,819 genes, that is, approximately a quarter of the transcriptome. Linear and quadratic regressions controlling for isolation-by-distance, and the Random Forest algorithm, identified several dozen genes putatively under selection, among which 43 showed strongest signals along temperature and precipitation gradients. Most of them were related to temperature. Small to moderate shifts in allele frequencies were observed. Genes involved encompassed a wide variety of functions and processes, some of them being likely important for plant survival under biotic and abiotic environmental stresses according to expression data. Literature mining and sequence comparison also highlighted conserved sequences and functions with angiosperm homologs. Our results are consistent with theoretical predictions that local adaptation involves genes with small frequency shifts when selection is recent and gene flow among populations is high. Accordingly, genetic adaptation to climate in P. glauca appears to be complex, involving many independent and interacting gene functions, biochemical pathways, and processes. From an applied perspective, these results shall lead to specific functional/association studies in conifers and to the development of markers useful for the conservation of genetic resources. © The Author(s) 2015. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.

  15. Genetic Adaptation to Climate in White Spruce Involves Small to Moderate Allele Frequency Shifts in Functionally Diverse Genes

    PubMed Central

    Hornoy, Benjamin; Pavy, Nathalie; Gérardi, Sébastien; Beaulieu, Jean; Bousquet, Jean

    2015-01-01

    Understanding the genetic basis of adaptation to climate is of paramount importance for preserving and managing genetic diversity in plants in a context of climate change. Yet, this objective has been addressed mainly in short-lived model species. Thus, expanding knowledge to nonmodel species with contrasting life histories, such as forest trees, appears necessary. To uncover the genetic basis of adaptation to climate in the widely distributed boreal conifer white spruce (Picea glauca), an environmental association study was conducted using 11,085 single nucleotide polymorphisms representing 7,819 genes, that is, approximately a quarter of the transcriptome. Linear and quadratic regressions controlling for isolation-by-distance, and the Random Forest algorithm, identified several dozen genes putatively under selection, among which 43 showed strongest signals along temperature and precipitation gradients. Most of them were related to temperature. Small to moderate shifts in allele frequencies were observed. Genes involved encompassed a wide variety of functions and processes, some of them being likely important for plant survival under biotic and abiotic environmental stresses according to expression data. Literature mining and sequence comparison also highlighted conserved sequences and functions with angiosperm homologs. Our results are consistent with theoretical predictions that local adaptation involves genes with small frequency shifts when selection is recent and gene flow among populations is high. Accordingly, genetic adaptation to climate in P. glauca appears to be complex, involving many independent and interacting gene functions, biochemical pathways, and processes. From an applied perspective, these results shall lead to specific functional/association studies in conifers and to the development of markers useful for the conservation of genetic resources. PMID:26560341

  16. Dynamic least-squares kernel density modeling of Fokker-Planck equations with application to neural population.

    PubMed

    Shotorban, Babak

    2010-04-01

    The dynamic least-squares kernel density (LSQKD) model [C. Pantano and B. Shotorban, Phys. Rev. E 76, 066705 (2007)] is used to solve the Fokker-Planck equations. In this model the probability density function (PDF) is approximated by a linear combination of basis functions with unknown parameters whose governing equations are determined by a global least-squares approximation of the PDF in the phase space. In this work basis functions are set to be Gaussian for which the mean, variance, and covariances are governed by a set of partial differential equations (PDEs) or ordinary differential equations (ODEs) depending on what phase-space variables are approximated by Gaussian functions. Three sample problems of univariate double-well potential, bivariate bistable neurodynamical system [G. Deco and D. Martí, Phys. Rev. E 75, 031913 (2007)], and bivariate Brownian particles in a nonuniform gas are studied. The LSQKD is verified for these problems as its results are compared against the results of the method of characteristics in nondiffusive cases and the stochastic particle method in diffusive cases. For the double-well potential problem it is observed that for low to moderate diffusivity the dynamic LSQKD well predicts the stationary PDF for which there is an exact solution. A similar observation is made for the bistable neurodynamical system. In both these problems least-squares approximation is made on all phase-space variables resulting in a set of ODEs with time as the independent variable for the Gaussian function parameters. In the problem of Brownian particles in a nonuniform gas, this approximation is made only for the particle velocity variable leading to a set of PDEs with time and particle position as independent variables. Solving these PDEs, a very good performance by LSQKD is observed for a wide range of diffusivities.
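
    The core fitting idea, a PDF approximated by a linear combination of Gaussian basis functions with coefficients chosen by a global least-squares fit, can be illustrated in a static setting. The double-well density, basis centers, and widths below are assumptions for illustration and are not the dynamic LSQKD model itself:

    ```python
    import numpy as np

    # Static analogue of the LSQKD fitting step: approximate a double-well
    # density by a linear combination of fixed Gaussian basis functions,
    # with weights from a global least-squares fit on a phase-space grid.
    def gaussian(x, mu, sigma):
        return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

    x = np.linspace(-3, 3, 400)
    dx = x[1] - x[0]
    target = np.exp(-(x**2 - 1.0) ** 2)   # unnormalized double-well density
    target /= np.sum(target) * dx         # normalize to unit integral

    mus = np.linspace(-2.5, 2.5, 9)       # assumed fixed basis centers
    Phi = np.column_stack([gaussian(x, m, 0.5) for m in mus])
    w, *_ = np.linalg.lstsq(Phi, target, rcond=None)

    approx = Phi @ w
    rms = np.sqrt(np.mean((approx - target) ** 2))
    assert rms < 0.05   # the Gaussian combination reproduces the double well
    ```

    In the dynamic model the Gaussian means and (co)variances themselves evolve according to ODEs or PDEs; the sketch only shows the least-squares projection onto the basis.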

  17. A modified variational method for nonlinear vibration analysis of rotating beams including Coriolis effects

    NASA Astrophysics Data System (ADS)

    Tian, Jiajin; Su, Jinpeng; Zhou, Kai; Hua, Hongxing

    2018-07-01

    This paper presents a general formulation for nonlinear vibration analysis of rotating beams. A modified variational method combined with a multi-segment partitioning technique is employed to derive the free and transient vibration behaviors of the rotating beams. The strain energy and kinetic energy functional are formulated based on the order truncation principle of the fully geometrically nonlinear beam theory. The Coriolis effects as well as nonlinear effects due to the coupling of bending-stretching, bending-twist and twist-stretching are taken into account. The present method relaxes the need to explicitly meet the requirements of the boundary conditions for the admissible functions, and allows the use of any linearly independent, complete basis functions as admissible functions for rotating beams. Moreover, the method is readily used to deal with the nonlinear transient vibration problems for rotating beams subjected to dynamic loads. The accuracy, convergence and efficiency of the proposed method are examined by numerical examples. The influences of Coriolis and centrifugal forces on the vibration behaviors of the beams with various hub radiuses and slenderness ratios and rotating at different angular velocities are also investigated.

  18. The role of axis embedding on rigid rotor decomposition analysis of variational rovibrational wave functions.

    PubMed

    Szidarovszky, Tamás; Fábri, Csaba; Császár, Attila G

    2012-05-07

    Approximate rotational characterization of variational rovibrational wave functions via the rigid rotor decomposition (RRD) protocol is developed for Hamiltonians based on arbitrary sets of internal coordinates and axis embeddings. An efficient and general procedure is given that allows employing the Eckart embedding with arbitrary polyatomic Hamiltonians through a fully numerical approach. RRD tables formed by projecting rotational-vibrational wave functions into products of rigid-rotor basis functions and previously determined vibrational eigenstates yield rigid-rotor labels for rovibrational eigenstates by selecting the largest overlap. Embedding-dependent RRD analyses are performed, up to high energies and rotational excitations, for the H(2) (16)O isotopologue of the water molecule. Irrespective of the embedding chosen, the RRD procedure proves effective in providing unambiguous rotational assignments at low energies and J values. Rotational labeling of rovibrational states of H(2) (16)O proves to be increasingly difficult beyond about 10,000 cm(-1), close to the barrier to linearity of the water molecule. For medium energies and excitations the Eckart embedding yields the largest RRD coefficients, thus providing the largest number of unambiguous rotational labels.

  19. Collision detection for spacecraft proximity operations. Ph.D. Thesis - MIT

    NASA Technical Reports Server (NTRS)

    Vaughan, Robin M.

    1987-01-01

    The development of a new collision detection algorithm to be used when two spacecraft are operating in the same vicinity is described. The two spacecraft are modeled as unions of convex polyhedra, where the polyhedron resulting from the union may be either convex or nonconvex. The relative motion of the two spacecraft is assumed to be such that one vehicle is moving with constant linear and angular velocity with respect to the other. The algorithm determines if a collision is possible and, if so, predicts the time when the collision will take place. The theoretical basis for the new collision detection algorithm is the C-function formulation of the configuration space approach recently introduced by researchers in robotics. Three different types of C-functions are defined that model the contacts between the vertices, edges, and faces of the polyhedra representing the two spacecraft. The C-functions are shown to be transcendental functions of time for the assumed trajectory of the moving spacecraft. The capabilities of the new algorithm are demonstrated for several example cases.

  20. SIEST-A-RT: a study of vacancy diffusion in crystalline silicon using a local-basis first-principle (SIESTA) activation technique (ART).

    NASA Astrophysics Data System (ADS)

    El Mellouhi, Fedwa; Mousseau, Normand; Ordejón, Pablo

    2003-03-01

    We report on a first-principles study of vacancy-induced self-diffusion in crystalline silicon. Our simulations are performed on supercells containing 63 and 215 atoms. We generate the diffusion paths using the activation-relaxation technique (ART) [1], which can sample efficiently the energy landscape of complex systems. The forces and energy are evaluated using SIESTA [2], a self-consistent density functional method using standard norm-conserving pseudopotentials and a flexible numerical linear-combination-of-atomic-orbitals basis set. Combining these two methods allows us to identify diffusion paths that would not be reachable with this degree of accuracy using other methods. After a full relaxation of the neutral vacancy, we proceed to search for local diffusion paths. We identify various mechanisms, such as the formation of the fourfold-coordinated defect and the recombination of dangling bonds by the WWW process. The diffusion of the vacancy proceeds by hops to the first nearest neighbor with an energy barrier of 0.69 eV. This work is funded in part by NSERC and NATEQ. NM is a Cottrell Scholar of the Research Corporation. [1] G. T. Barkema and N. Mousseau, Event-based relaxation of continuous disordered systems, Phys. Rev. Lett. 77, 4358 (1996); N. Mousseau and G. T. Barkema, Traveling through potential energy landscapes of disordered materials: ART, Phys. Rev. E 57, 2419 (1998). [2] D. Sánchez-Portal, P. Ordejón, E. Artacho and J. M. Soler, Density functional method for very large systems with LCAO basis sets, Int. J. Quant. Chem. 65, 453 (1997).

  1. Manifold Learning by Preserving Distance Orders.

    PubMed

    Ataer-Cansizoglu, Esra; Akcakaya, Murat; Orhan, Umut; Erdogmus, Deniz

    2014-03-01

    Nonlinear dimensionality reduction is essential for the analysis and interpretation of high-dimensional data sets. In this manuscript, we propose a distance-order-preserving manifold learning algorithm that extends the basic mean-squared-error cost function used mainly in multidimensional scaling (MDS)-based methods. We develop a constrained optimization problem by imposing explicit constraints on the order of distances in the low-dimensional space. In this optimization problem, as a generalization of MDS, instead of forcing a linear relationship between the distances in the high-dimensional original space and the low-dimensional projection space, we learn a non-decreasing relation approximated by radial basis functions. We compare the proposed method with existing manifold learning algorithms on synthetic datasets, using the commonly used residual variance metric and the proposed percentage-of-violated-distance-orders metric. We also perform experiments on a retinal image dataset used in Retinopathy of Prematurity (ROP) diagnosis.

  2. Lattice dynamics of Ru2FeX (X = Si, Ge) Full Heusler alloys

    NASA Astrophysics Data System (ADS)

    Rizwan, M.; Afaq, A.; Aneeza, A.

    2018-05-01

    In the present work, the lattice dynamics of Ru2FeX (X = Si, Ge) full Heusler alloys are investigated using density functional theory (DFT) within the generalized gradient approximation (GGA) in a plane-wave basis with norm-conserving pseudopotentials. Phonon dispersion curves and phonon densities of states are obtained using the first-principles linear response approach of density functional perturbation theory (DFPT) as implemented in the Quantum ESPRESSO code. The phonon dispersion curves indicate that, for both Heusler alloys, there is no imaginary phonon mode in the whole Brillouin zone, confirming the dynamical stability of these alloys in the L21-type structure. There is considerable overlap between the acoustic and optical phonon modes, indicating that no phonon band gap exists in the dispersion curves of the alloys. The same result is shown by the phonon density of states curves for both Heusler alloys. The reststrahlen band of Ru2FeSi is found to be smaller than that of Ru2FeGe.

  3. Variational Dirac-Hartree-Fock calculation of the Breit interaction

    NASA Astrophysics Data System (ADS)

    Goldman, S. P.

    1988-04-01

    The calculation of the retarded version of the Breit interaction in the context of the VDHF method is discussed. With the use of Slater-type basis functions, all the terms involved can be calculated in closed form. The results are expressed as an expansion in powers of one-electron energy differences and linear combinations of hypergeometric functions. Convergence is fast and high accuracy is obtained with a small number of terms in the expansion, even for high values of the nuclear charge. An added advantage is that the lowest-order cancellations occurring in the retardation terms are accounted for exactly a priori. A comparison of the number of terms in the total expansion needed for an accuracy of 12 significant digits in the total energy, as well as a comparison of the results with and without retardation and in the local potential approximation, are presented for the carbon isoelectronic sequence.

  4. Bessel beam CARS of axially structured samples

    NASA Astrophysics Data System (ADS)

    Heuke, Sandro; Zheng, Juanjuan; Akimov, Denis; Heintzmann, Rainer; Schmitt, Michael; Popp, Jürgen

    2015-06-01

    We report on a Bessel beam CARS approach for axial profiling of multi-layer structures. This study presents an experimental implementation for the generation of CARS by Bessel beam excitation using only passive optical elements. Furthermore, an analytical expression is provided describing the anti-Stokes field generated by a homogeneous sample. Based on the concept of coherent transfer functions, the underlying resolving power for axially structured geometries is investigated. It is found that, through the non-linearity of the CARS process in combination with the folded illumination geometry, continuous phase-matching is achieved, from homogeneous samples up to spatial sample frequencies at twice that of the pumping electric field wave. The experimental and analytical findings are modeled by implementation of the Debye integral and a scalar Green function approach. Finally, the goal of reconstructing an axially layered sample is demonstrated on the basis of the numerically simulated modulus and phase of the anti-Stokes far-field radiation pattern.

  5. Bessel beam CARS of axially structured samples.

    PubMed

    Heuke, Sandro; Zheng, Juanjuan; Akimov, Denis; Heintzmann, Rainer; Schmitt, Michael; Popp, Jürgen

    2015-06-05

    We report on a Bessel beam CARS approach for axial profiling of multi-layer structures. This study presents an experimental implementation for the generation of CARS by Bessel beam excitation using only passive optical elements. Furthermore, an analytical expression is provided describing the anti-Stokes field generated by a homogeneous sample. Based on the concept of coherent transfer functions, the underlying resolving power for axially structured geometries is investigated. It is found that, through the non-linearity of the CARS process in combination with the folded illumination geometry, continuous phase-matching is achieved, from homogeneous samples up to spatial sample frequencies at twice that of the pumping electric field wave. The experimental and analytical findings are modeled by implementation of the Debye integral and a scalar Green function approach. Finally, the goal of reconstructing an axially layered sample is demonstrated on the basis of the numerically simulated modulus and phase of the anti-Stokes far-field radiation pattern.

  6. On the volume-dependence of the index of refraction from the viewpoint of the complex dielectric function and the Kramers-Kronig relation.

    PubMed

    Rocquefelte, Xavier; Jobic, Stéphane; Whangbo, Myung-Hwan

    2006-02-16

    How indices of refraction n(omega) of insulating solids are affected by the volume dilution of an optical entity and the mixing of different, noninteracting simple solid components was examined on the basis of the dielectric function epsilon(1)(omega) + iepsilon(2)(omega). For closely related insulating solids with an identical composition and the formula unit volume V, the relation [epsilon(1)(omega) - 1]V = constant was found by combining the relation epsilon(2)(omega)V = constant with the Kramers-Kronig relation. This relation becomes [n(2)(omega) - 1]V = constant for the index of refraction n(omega) determined for the incident light with energy less than the band gap (i.e., h omega < E(g)). For a narrow range of change in the formula unit volume, the latter relation is well approximated by a linear relation between n and 1/V.
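
    The relation [n(2)(omega) - 1]V = constant gives a direct back-of-the-envelope prediction for how volume dilation lowers the refractive index; the numbers below are hypothetical, purely to illustrate the scaling:

    ```python
    import math

    # If a solid with index n1 at formula-unit volume V1 is dilated to V2,
    # the invariance [n^2 - 1] V = constant predicts
    #   n2 = sqrt(1 + (n1^2 - 1) * V1 / V2).
    n1, V1 = 2.0, 100.0   # assumed index and volume (arbitrary units)
    V2 = 110.0            # 10% volume dilation (assumed)
    n2 = math.sqrt(1.0 + (n1**2 - 1.0) * V1 / V2)

    # The invariant is preserved by construction:
    assert abs((n2**2 - 1.0) * V2 - (n1**2 - 1.0) * V1) < 1e-9
    # Dilution of the optical entity lowers the index, as expected:
    assert n2 < n1
    ```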

  7. Study of vibrational modes in CuxAg1-xIn5S8 mixed crystals by infrared reflection measurements

    NASA Astrophysics Data System (ADS)

    Gasanly, N. M.

    2018-04-01

    Infrared reflection spectra of CuxAg1-xIn5S8 mixed crystals, grown by Bridgman method, were studied in the wide frequency range of 50-2000 cm-1. All four infrared-active modes were detected, which are in full agreement with the prediction of group-theoretical analysis. Real and imaginary parts of the dielectric function, refractive index and the energy losses function were evaluated from reflectivity measurements. The frequencies of TO and LO modes and oscillator strengths were also determined. The bands detected in IR spectra of studied crystals were assigned to various vibration types (valence and valence-deformation) on the basis of the symmetrized displacements of atoms obtained employing the Melvin projection operators. The linear dependencies of optical mode frequencies on the composition of CuxAg1-xIn5S8 mixed crystals were obtained. These dependencies display one-mode behavior.

  8. Natural bond orbital analysis, electronic structure, non-linear properties and vibrational spectral analysis of L-histidinium bromide monohydrate: a density functional theory.

    PubMed

    Sajan, D; Joseph, Lynnette; Vijayan, N; Karabacak, M

    2011-10-15

    The spectroscopic properties of the crystallized nonlinear optical molecule L-histidinium bromide monohydrate (abbreviated as L-HBr-mh) have been recorded and analyzed by FT-IR, FT-Raman and UV techniques. The equilibrium geometry, vibrational wavenumbers and the first-order hyperpolarizability of the crystal were calculated with the help of density functional theory computations. The optimized geometric bond lengths and bond angles obtained by using DFT (B3LYP/6-311++G(d,p)) show good agreement with the experimental data. The complete assignments of fundamental vibrations were performed on the basis of the total energy distribution (TED) of the vibrational modes, calculated with the scaled quantum mechanics (SQM) method. The natural bond orbital (NBO) analysis confirms the occurrence of strong intra- and intermolecular N-H⋯O hydrogen bonding. Copyright © 2011 Elsevier B.V. All rights reserved.

  9. Low-memory iterative density fitting.

    PubMed

    Grajciar, Lukáš

    2015-07-30

    A new low-memory modification of the density fitting approximation, based on a combination of a continuous fast multipole method (CFMM) and a preconditioned conjugate gradient solver, is presented. The iterative conjugate gradient solver uses preconditioners formed from blocks of the Coulomb metric matrix, which decrease the number of iterations needed for convergence by up to an order of magnitude. The matrix-vector products needed within the iterative algorithm are calculated using CFMM, which evaluates them with only linear-scaling memory requirements. Compared with the standard density fitting implementation, up to a 15-fold reduction in memory requirements is achieved for the most efficient preconditioner, at the cost of only a 25% increase in computational time. The potential of the method is demonstrated by performing density functional theory calculations for a zeolite fragment with 2592 atoms and 121,248 auxiliary basis functions on a single 12-core CPU workstation. © 2015 Wiley Periodicals, Inc.
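    The preconditioning idea is easy to illustrate: invert the diagonal blocks of the metric matrix once, then reuse them inside conjugate gradients. The following is a generic block-Jacobi PCG sketch on a small dense SPD matrix standing in for the Coulomb metric; the paper's implementation never stores the matrix and instead forms each matrix-vector product with CFMM.

    ```python
    import numpy as np

    def block_jacobi_pcg(A, b, block_size, tol=1e-10, max_iter=500):
        """Preconditioned conjugate gradients with a block-Jacobi
        preconditioner built from the diagonal blocks of A (in the
        spirit of the Coulomb-metric-block preconditioner above)."""
        n = b.size
        # Invert each diagonal block once; together they act as M^-1.
        inv_blocks = [np.linalg.inv(A[i:i + block_size, i:i + block_size])
                      for i in range(0, n, block_size)]

        def apply_Minv(r):
            z = np.empty_like(r)
            for k, i in enumerate(range(0, n, block_size)):
                j = min(i + block_size, n)
                z[i:j] = inv_blocks[k] @ r[i:j]
            return z

        x = np.zeros_like(b)
        r = b - A @ x
        z = apply_Minv(r)
        p = z.copy()
        rz = r @ z
        for _ in range(max_iter):
            Ap = A @ p
            alpha = rz / (p @ Ap)
            x = x + alpha * p
            r = r - alpha * Ap
            if np.linalg.norm(r) < tol:
                break
            z = apply_Minv(r)
            rz_new = r @ z
            p = z + (rz_new / rz) * p
            rz = rz_new
        return x

    # Demo on a small random SPD system (stand-in for the metric matrix).
    rng = np.random.default_rng(0)
    M = rng.normal(size=(9, 9))
    A = M @ M.T + 9.0 * np.eye(9)
    b = rng.normal(size=9)
    x = block_jacobi_pcg(A, b, block_size=3)
    ```

    A good preconditioner makes M⁻¹A better conditioned than A itself, which is what cuts the iteration count reported in the abstract.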

  10. Experimental and computational study of electronic, electrochemical and thermal properties of quinoline phosphate

    NASA Astrophysics Data System (ADS)

    Ben Issa, Takoua; Ben Ali Hassine, Chedia; Ghalla, Houcine; Barhoumi, Houcine; Benhamada, Latifa

    2018-06-01

    In this work, the electronic behavior, charge transfer, nonlinear optical (NLO) properties, and thermal stability of quinoline phosphate (QP) have been investigated. The experimental UV-Vis spectrum was recorded in the range of 200-800 nm. Additionally, the absorption spectrum was reproduced by the time-dependent density functional theory (TD-DFT) method with the B3LYP functional and empirical dispersion corrections (D3BJ), in combination with the 6-311+G(d,p) basis set. Electronic properties such as the HOMO-LUMO energy gap and chemical reactivity descriptors were calculated. The electrochemical characterization of the title compound was carried out using cyclic voltammetry and impedance spectroscopy. Finally, the thermal stability of QP is discussed in terms of differential scanning calorimetry (DSC) measurements, which showed that the QP compound is thermally stable up to 150 °C.

  11. Network Reconstruction Using Nonparametric Additive ODE Models

    PubMed Central

    Henderson, James; Michailidis, George

    2014-01-01

    Network representations of biological systems are widespread, and reconstructing unknown networks from data is a focal problem for computational biologists. For example, the series of biochemical reactions in a metabolic pathway can be represented as a network, with nodes corresponding to metabolites and edges linking reactants to products. In a different context, regulatory relationships among genes are commonly represented as directed networks with edges pointing from influential genes to their targets. Reconstructing such networks from data is a challenging problem that has received much attention in the literature. There is a particular need for approaches tailored to time-series data and not reliant on direct intervention experiments, as the former are often more readily available. In this paper, we introduce an approach to reconstructing directed networks based on dynamic systems models. Our approach generalizes commonly used ODE models based on linear or nonlinear dynamics by extending the functional class for the functions involved from parametric to nonparametric models. Concomitantly, we limit the complexity by imposing an additive structure on the estimated slope functions, so that the submodel associated with each node is a sum of univariate functions. These univariate component functions form the basis for a novel coupling metric that we define in order to quantify the strength of proposed relationships and hence rank potential edges. We show the utility of the method by reconstructing networks using simulated data from computational models for the glycolytic pathway of Lactococcus lactis and a gene network regulating the pluripotency of mouse embryonic stem cells. For purposes of comparison, we also assess reconstruction performance using gene networks from the DREAM challenges. We compare our method to those that similarly rely on dynamic systems models and use the results to attempt to disentangle the distinct roles of linearity, sparsity, and derivative estimation. PMID:24732037
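    The additive-ODE idea above (each node's slope function is a sum of univariate component functions, whose norms rank candidate edges) can be sketched with a fixed polynomial basis in place of the paper's nonparametric smoothers. Everything here, including the scoring rule, is a loose illustration rather than the authors' estimator.

    ```python
    import numpy as np

    def additive_edge_scores(X, t, degree=3):
        """Score candidate edges i -> j in an additive ODE model.
        The finite-difference derivative of each node j is regressed on
        univariate polynomial expansions of every node i, and edge
        i -> j is scored by the norm of the fitted component function.
        (Hypothetical basis choice: fixed polynomials, not the
        nonparametric smoothers used in the paper.)"""
        n_t, p = X.shape
        dXdt = np.gradient(X, t, axis=0)        # crude derivative estimate
        # One block of basis columns (powers 1..degree) per node.
        blocks = [np.column_stack([X[:, i] ** d
                                   for d in range(1, degree + 1)])
                  for i in range(p)]
        D = np.column_stack(blocks)
        scores = np.zeros((p, p))
        for j in range(p):
            coef, *_ = np.linalg.lstsq(D, dXdt[:, j], rcond=None)
            for i in range(p):
                comp = blocks[i] @ coef[i * degree:(i + 1) * degree]
                scores[i, j] = np.linalg.norm(comp)
        return scores

    # Two-node toy system whose exact trajectories solve
    # x1' = -x1 and x2' = x1 - x2 (so edge 1 -> 2 exists).
    t = np.linspace(0.0, 5.0, 200)
    X = np.column_stack([np.exp(-t), t * np.exp(-t)])
    scores = additive_edge_scores(X, t)
    ```

    In practice the component functions would be penalized smoothers and the scores compared against a null distribution; the snippet only shows the shape of the computation.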

  12. Sparsest representations and approximations of an underdetermined linear system

    NASA Astrophysics Data System (ADS)

    Tardivel, Patrick J. C.; Servien, Rémi; Concordet, Didier

    2018-05-01

    In an underdetermined linear system of equations, constrained ℓ1-minimization methods such as basis pursuit or the lasso are often used to recover one of the sparsest representations or approximations of the system. The null space property is a sufficient and ‘almost’ necessary condition for recovering a sparsest representation with basis pursuit. Unfortunately, this property cannot be easily checked. On the other hand, the mutual coherence is an easily checkable sufficient condition ensuring that basis pursuit recovers one of the sparsest representations. Because the mutual coherence condition is too strong, it is hardly ever met in practice. Even if one of these conditions holds, to our knowledge there is no theoretical result ensuring that the lasso solution is one of the sparsest approximations. In this article, we study a novel constrained problem that gives, without any condition, one of the sparsest representations or approximations. To solve this problem, we provide a numerical method and prove its convergence. Numerical experiments show that this approach gives better results than both the basis pursuit problem and the reweighted ℓ1-minimization problem.
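    For context, the basis pursuit baseline that the abstract compares against can be solved as an ordinary linear program. The sketch below uses SciPy's `linprog` with the standard positive/negative split x = u - v; it illustrates only the baseline, not the paper's new constrained method.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    def basis_pursuit(A: np.ndarray, b: np.ndarray) -> np.ndarray:
        """Basis pursuit: min ||x||_1 subject to Ax = b, via the LP
        reformulation x = u - v with u, v >= 0."""
        m, n = A.shape
        c = np.ones(2 * n)                 # objective: sum(u) + sum(v)
        A_eq = np.hstack([A, -A])          # enforce A(u - v) = b
        res = linprog(c, A_eq=A_eq, b_eq=b,
                      bounds=[(0.0, None)] * (2 * n), method="highs")
        u, v = res.x[:n], res.x[n:]
        return u - v

    # Underdetermined system (6 equations, 10 unknowns) with a sparse
    # ground truth; Gaussian A typically satisfies the recovery
    # conditions the abstract discusses.
    rng = np.random.default_rng(0)
    A = rng.normal(size=(6, 10))
    x_true = np.zeros(10)
    x_true[[2, 7]] = [1.5, -2.0]
    x_hat = basis_pursuit(A, A @ x_true)
    ```

    By LP optimality the solution is feasible and its ℓ1 norm never exceeds that of the ground truth; whether it coincides with the sparsest representation is exactly what the null space property and mutual coherence conditions govern.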

  13. Linearized self-consistent quasiparticle GW method: Application to semiconductors and simple metals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kutepov, A. L.; Oudovenko, V. S.; Kotliar, G.

    We present a code implementing the linearized self-consistent quasiparticle GW method (QSGW) in the LAPW basis. Our approach is based on a linearization of the self-energy around zero frequency, which distinguishes it from existing implementations of the QSGW method. The linearization allows us to use Matsubara frequencies instead of working on the real axis. This results in efficiency gains from switching to the imaginary-time representation, in the same way as in the space-time method. The all-electron LAPW basis set eliminates the need for pseudopotentials. We discuss the advantages of our approach, such as its N³ scaling with the system size N, as well as its shortcomings. We apply our approach to study the electronic properties of selected semiconductors, insulators, and simple metals and show that our code produces results very close to previously published QSGW data. Our implementation is a good platform for further many-body diagrammatic resummations such as the vertex-corrected GW approach and the GW+DMFT method.

  14. Linearized self-consistent quasiparticle GW method: Application to semiconductors and simple metals

    DOE PAGES

    Kutepov, A. L.; Oudovenko, V. S.; Kotliar, G.

    2017-06-23

    We present a code implementing the linearized self-consistent quasiparticle GW method (QSGW) in the LAPW basis. Our approach is based on a linearization of the self-energy around zero frequency, which distinguishes it from existing implementations of the QSGW method. The linearization allows us to use Matsubara frequencies instead of working on the real axis. This results in efficiency gains from switching to the imaginary-time representation, in the same way as in the space-time method. The all-electron LAPW basis set eliminates the need for pseudopotentials. We discuss the advantages of our approach, such as its N³ scaling with the system size N, as well as its shortcomings. We apply our approach to study the electronic properties of selected semiconductors, insulators, and simple metals and show that our code produces results very close to previously published QSGW data. Our implementation is a good platform for further many-body diagrammatic resummations such as the vertex-corrected GW approach and the GW+DMFT method.

  15. Eulerian Lagrangian Adaptive Fup Collocation Method for solving the conservative solute transport in heterogeneous porous media

    NASA Astrophysics Data System (ADS)

    Gotovac, Hrvoje; Srzic, Veljko

    2014-05-01

    Contaminant transport in natural aquifers is a complex, multiscale process that is frequently studied using different Eulerian, Lagrangian and hybrid numerical methods. Conservative solute transport is typically modeled using the advection-dispersion equation (ADE). Despite the large number of numerical methods developed to solve it, the accurate numerical solution of the ADE still presents formidable challenges. In particular, current numerical solutions of multidimensional advection-dominated transport in non-uniform velocity fields are affected by one or more of the following problems: numerical dispersion that introduces artificial mixing and dilution, grid orientation effects, unresolved spatial and temporal scales, and unphysical numerical oscillations (e.g., Herrera et al., 2009; Bosso et al., 2012). In this work we present the Eulerian Lagrangian Adaptive Fup Collocation Method (ELAFCM), based on Fup basis functions and a collocation approach for spatial approximation, together with explicit stabilized Runge-Kutta-Chebyshev temporal integration (the public domain routine SERK2), which is especially well suited for stiff parabolic problems. The spatial adaptive strategy is based on Fup basis functions, which are closely related to wavelets and splines: they are compactly supported, exactly reproduce algebraic polynomials, and enable multiresolution adaptive analysis (MRA). MRA is performed here via the Fup Collocation Transform (FCT), so that at each time step the concentration solution is decomposed using only a few significant Fup basis functions on an adaptive collocation grid, with appropriate scales (frequencies) and locations, a desired level of accuracy, and near-minimum computational cost. FCT adds more collocation points and higher resolution levels only in sensitive zones with sharp concentration gradients, fronts and/or narrow transition zones. 
Our recent results show that there is no need to solve a large linear system on the adaptive grid, because each Fup coefficient is obtained from predefined formulas that equate the Fup expansion around the corresponding collocation point with a particular collocation operator based on a few surrounding solution values. Furthermore, each Fup coefficient can be obtained independently, which is perfectly suited for parallel processing. The adaptive grid at each time step is obtained from the solution of the previous time step (or the initial conditions) and the advective Lagrangian step in the current time step, according to the velocity field and continuous streamlines. On the other hand, we use the explicit stabilized routine SERK2 for the dispersive Eulerian part of the solution in the current time step on the obtained spatial adaptive grid. The overall adaptive concept does not require solving large linear systems for the spatial or temporal approximation of conservative transport. This new Eulerian-Lagrangian collocation scheme also resolves all of the aforementioned numerical problems thanks to its adaptive nature and its ability to control numerical errors in space and time. The proposed method solves advection in a Lagrangian way, eliminating the problems of Eulerian methods, while the optimal collocation grid efficiently describes the solution and boundary conditions, eliminating the need for large numbers of particles and other problems of Lagrangian methods. Finally, numerical tests show that this approach yields not only an accurate velocity field, but also conservative transport, even in highly heterogeneous porous media, resolving all spatial and temporal scales of the concentration field.

  16. Feasibility of an ultra-low power digital signal processor platform as a basis for a fully implantable brain-computer interface system.

    PubMed

    Wang, Po T; Gandasetiawan, Keulanna; McCrimmon, Colin M; Karimi-Bidhendi, Alireza; Liu, Charles Y; Heydari, Payam; Nenadic, Zoran; Do, An H

    2016-08-01

    A fully implantable brain-computer interface (BCI) can be a practical tool to restore independence to those affected by spinal cord injury. We envision that such a BCI system will invasively acquire brain signals (e.g. electrocorticogram) and translate them into control commands for external prostheses. The feasibility of such a system was tested by implementing its benchtop analogue, centered around a commercial, ultra-low power (ULP) digital signal processor (DSP, TMS320C5517, Texas Instruments). A suite of signal processing and BCI algorithms, including (de)multiplexing, fast Fourier transform, power spectral density estimation, principal component analysis, linear discriminant analysis, Bayes rule, and a finite state machine, was implemented and tested on the DSP. The system's signal acquisition fidelity was tested and characterized by acquiring harmonic signals from a function generator. In addition, the BCI decoding performance was tested, first with signals from a function generator, and subsequently using human electroencephalogram (EEG) recorded during an eyes-open/eyes-closed task. On average, the system spent 322 ms to process and analyze 2 s of data. Crosstalk (< -65 dB) and harmonic distortion (~1%) were minimal. Timing jitter averaged 49 μs per 1000 ms. The online BCI decoding accuracies were 100% for both function generator and EEG data. These results show that a complex BCI algorithm can be executed on an ULP DSP without compromising performance. This suggests that the proposed hardware platform may be used as a basis for future, fully implantable BCI systems.
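    The decoding chain described (FFT-based power spectral density, dimensionality reduction, linear discriminant analysis) can be illustrated on synthetic data. The sketch below builds a toy eyes-open/eyes-closed decoder from alpha- and beta-band power with a hand-rolled two-class LDA; all signals, amplitudes and band edges are invented for the illustration and are unrelated to the paper's hardware.

    ```python
    import numpy as np

    def band_power(x, fs, lo, hi):
        """Mean FFT-based spectral power of x in the [lo, hi] Hz band."""
        freqs = np.fft.rfftfreq(x.size, 1.0 / fs)
        psd = np.abs(np.fft.rfft(x)) ** 2 / x.size
        band = (freqs >= lo) & (freqs <= hi)
        return psd[band].mean()

    rng = np.random.default_rng(0)
    fs = 256
    t = np.arange(2 * fs) / fs          # 2 s epochs, as in the abstract

    def epoch(alpha_amp):
        """Synthetic EEG epoch: a 10 Hz alpha rhythm plus white noise."""
        return (alpha_amp * np.sin(2 * np.pi * 10.0 * t)
                + rng.normal(0.0, 1.0, t.size))

    def feats(x):
        # Alpha (8-12 Hz) and beta (13-30 Hz) band power per epoch.
        return [band_power(x, fs, 8.0, 12.0), band_power(x, fs, 13.0, 30.0)]

    # 20 low-alpha ("eyes open") and 20 high-alpha ("eyes closed") epochs.
    X = np.array([feats(epoch(a)) for a in [0.2] * 20 + [2.0] * 20])
    y = np.array([0] * 20 + [1] * 20)

    # Two-class LDA with a pooled covariance estimate.
    mu0, mu1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
    pooled = 0.5 * (np.cov(X[y == 0].T) + np.cov(X[y == 1].T))
    w = np.linalg.solve(pooled, mu1 - mu0)
    threshold = 0.5 * (mu0 + mu1) @ w
    pred = (X @ w > threshold).astype(int)
    accuracy = (pred == y).mean()
    ```

    Because eyes-closed EEG concentrates power in the alpha band, even this minimal feature-plus-LDA chain separates the two classes cleanly, which is consistent with the 100% decoding accuracy the paper reports for its far more capable DSP pipeline.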

  17. COMPARISONS OF THE FINITE-ELEMENT-WITH-DISCONTIGUOUS-SUPPORT METHOD TO CONTINUOUS-ENERGY MONTE CARLO FOR PIN-CELL PROBLEMS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    A. T. Till; M. Hanuš; J. Lou

    The standard multigroup (MG) method for energy discretization of the transport equation can be sensitive to approximations in the weighting spectrum chosen for cross-section averaging. As a result, MG often treats important phenomena such as self-shielding variations across a material inaccurately. From a finite-element viewpoint, MG uses a single fixed basis function (the pre-selected spectrum) within each group, with no mechanism to adapt to local solution behavior. In this work, we introduce the Finite-Element-with-Discontiguous-Support (FEDS) method, whose only approximation with respect to energy is that the angular flux is a linear combination of unknowns multiplied by basis functions. A basis function is non-zero only in the discontiguous set of energy intervals associated with its energy element. Discontiguous energy elements are generalizations of bands and are determined by minimizing a norm of the difference between snapshot spectra and their averages over the energy elements. We begin by presenting the theory of the FEDS method. We then compare to continuous-energy Monte Carlo for one-dimensional slab and two-dimensional pin-cell problems. We find FEDS to be accurate and efficient at producing quantities of interest such as reaction rates and eigenvalues. Results show that FEDS converges at a rate that is approximately first-order in the number of energy elements and that FEDS is less sensitive to the weighting spectrum than standard MG.

  18. Dynamic fMRI of a decision-making task

    NASA Astrophysics Data System (ADS)

    Singh, Manbir; Sungkarat, Witaya

    2008-03-01

    A novel fMRI technique has been developed to capture the dynamics of evolving brain activity during complex tasks, such as those designed to evaluate the neural basis of decision-making under different situations. The Iowa Gambling Task was used as an example. Six normal human volunteers were studied. The task was presented inside a 3T MRI scanner, and a dynamic fMRI study of the approximately 2 s period between the beginning and end of the decision-making period was conducted by employing a series of reference functions, separated by 200 ms, designed to capture activation at different time points within this period. As decision-making culminates with a button press, the timing of the button press was chosen as the reference (t = 0), and the corresponding reference functions were shifted backward in steps of 200 ms from this point up to the time when motor activity from the previous button press became predominant. SPM was used for realignment, high-pass filtering (cutoff 200 s), normalization to the Montreal Neurological Institute (MNI) template using a 12-parameter affine/nonlinear transformation, 8 mm Gaussian smoothing, and event-related general linear model analysis for each of the shifted reference functions. The t-score of each activated voxel was then examined to find its peaking time. A random-effects analysis (p < 0.05) showed prefrontal, parietal and bilateral hippocampal activation peaking at different times during the decision-making period in the n = 6 group study.

  19. New Basis Functions for the Electromagnetic Solution of Arbitrarily-shaped, Three Dimensional Conducting Bodies Using Method of Moments

    NASA Technical Reports Server (NTRS)

    Mackenzie, Anne I.; Baginski, Michael E.; Rao, Sadasiva M.

    2007-01-01

    In this work, we present a new set of basis functions, defined over a pair of planar triangular patches, for the solution of electromagnetic scattering and radiation problems associated with arbitrarily-shaped surfaces using the method of moments solution procedure. The basis functions are constant over the function subdomain and resemble pulse functions for one- and two-dimensional problems. Further, another set of basis functions, point-wise orthogonal to the first set, is also defined over the same function space. The primary objective of developing these basis functions is to utilize them for the electromagnetic solution involving conducting, dielectric, and composite bodies. However, in the present work, only the conducting body solution is presented and compared with other data.

  20. New Basis Functions for the Electromagnetic Solution of Arbitrarily-shaped, Three Dimensional Conducting Bodies using Method of Moments

    NASA Technical Reports Server (NTRS)

    Mackenzie, Anne I.; Baginski, Michael E.; Rao, Sadasiva M.

    2008-01-01

    In this work, we present a new set of basis functions, defined over a pair of planar triangular patches, for the solution of electromagnetic scattering and radiation problems associated with arbitrarily-shaped surfaces using the method of moments solution procedure. The basis functions are constant over the function subdomain and resemble pulse functions for one- and two-dimensional problems. Further, another set of basis functions, point-wise orthogonal to the first set, is also defined over the same function space. The primary objective of developing these basis functions is to utilize them for the electromagnetic solution involving conducting, dielectric, and composite bodies. However, in the present work, only the conducting body solution is presented and compared with other data.
