Sample records for minimal basis set

  1. Approaching the basis set limit for DFT calculations using an environment-adapted minimal basis with perturbation theory: Formulation, proof of concept, and a pilot implementation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mao, Yuezhi; Horn, Paul R.; Mardirossian, Narbe

    2016-07-28

    Recently developed density functionals have good accuracy for both thermochemistry (TC) and non-covalent interactions (NC) if very large atomic orbital basis sets are used. To approach the basis set limit with potentially lower computational cost, a new self-consistent field (SCF) scheme is presented that employs minimal adaptive basis (MAB) functions. The MAB functions are optimized on each atomic site by minimizing a surrogate function. High accuracy is obtained by applying a perturbative correction (PC) to the MAB calculation, similar to dual basis approaches. Compared to exact SCF results, using this MAB-SCF (PC) approach with the same large target basis set produces <0.15 kcal/mol root-mean-square deviations for most of the tested TC datasets, and <0.1 kcal/mol for most of the NC datasets. The performance of density functionals near the basis set limit is reproduced even more closely. With further improvement to its implementation, MAB-SCF (PC) is a promising lower-cost substitute for conventional large-basis calculations as a method to approach the basis set limit of modern density functionals.

  2. Simple and efficient LCAO basis sets for the diffuse states in carbon nanostructures.

    PubMed

    Papior, Nick R; Calogero, Gaetano; Brandbyge, Mads

    2018-06-27

    We present a simple way to describe the lowest unoccupied diffuse states in carbon nanostructures in density functional theory calculations using a minimal LCAO (linear combination of atomic orbitals) basis set. By comparing with plane wave basis calculations, we show how these states can be captured by adding long-range orbitals to the standard LCAO basis sets for the extreme cases of planar sp2 (graphene) and curved carbon (C60). In particular, using long-range Bessel functions as additional basis functions retains a minimal basis size. This provides a smaller and simpler atom-centered basis set compared to the standard pseudo-atomic orbitals (PAOs) with multiple polarization orbitals or to adding non-atom-centered states to the basis.

  3. Simple and efficient LCAO basis sets for the diffuse states in carbon nanostructures

    NASA Astrophysics Data System (ADS)

    Papior, Nick R.; Calogero, Gaetano; Brandbyge, Mads

    2018-06-01

    We present a simple way to describe the lowest unoccupied diffuse states in carbon nanostructures in density functional theory calculations using a minimal LCAO (linear combination of atomic orbitals) basis set. By comparing with plane wave basis calculations, we show how these states can be captured by adding long-range orbitals to the standard LCAO basis sets for the extreme cases of planar sp2 (graphene) and curved carbon (C60). In particular, using long-range Bessel functions as additional basis functions retains a minimal basis size. This provides a smaller and simpler atom-centered basis set compared to the standard pseudo-atomic orbitals (PAOs) with multiple polarization orbitals or to adding non-atom-centered states to the basis.

  4. Quantum Dynamics with Short-Time Trajectories and Minimal Adaptive Basis Sets.

    PubMed

    Saller, Maximilian A C; Habershon, Scott

    2017-07-11

    Methods for solving the time-dependent Schrödinger equation via basis set expansion of the wave function can generally be categorized as having either static (time-independent) or dynamic (time-dependent) basis functions. We have recently introduced an alternative simulation approach which represents a middle road between these two extremes, employing dynamic (classical-like) trajectories to create a static basis set of Gaussian wavepackets in regions of phase-space relevant to future propagation of the wave function [J. Chem. Theory Comput., 11, 8 (2015)]. Here, we propose and test a modification of our methodology which aims to reduce the size of basis sets generated in our original scheme. In particular, we employ short-time classical trajectories to continuously generate new basis functions for short-time quantum propagation of the wave function; to avoid the continued growth of the basis set describing the time-dependent wave function, we employ Matching Pursuit to periodically minimize the number of basis functions required to accurately describe the wave function. Overall, this approach generates a basis set which is adapted to evolution of the wave function while also being as small as possible. In applications to challenging benchmark problems, namely a 4-dimensional model of photoexcited pyrazine and three different double-well tunnelling problems, we find that our new scheme enables accurate wave function propagation with basis sets which are around an order-of-magnitude smaller than our original trajectory-guided basis set methodology, highlighting the benefits of adaptive strategies for wave function propagation.
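    The basis-reduction step described above relies on Matching Pursuit, a generic greedy approximation algorithm. As an illustration of the idea only (a toy with an orthonormal dictionary, not the authors' Gaussian-wavepacket implementation), a minimal sketch:

```python
import numpy as np

def matching_pursuit(D, y, tol=1e-8, max_atoms=None):
    """Greedy Matching Pursuit: approximate y as a sparse combination
    of the columns ("atoms") of the dictionary D.  Returns the chosen
    column indices and their coefficients."""
    residual = y.astype(float).copy()
    indices, coeffs = [], []
    max_atoms = max_atoms or D.shape[1]
    for _ in range(max_atoms):
        if np.linalg.norm(residual) < tol:
            break                      # target reproduced to tolerance
        # Pick the atom most correlated with the current residual.
        projections = D.T @ residual
        k = int(np.argmax(np.abs(projections)))
        c = projections[k] / np.dot(D[:, k], D[:, k])
        indices.append(k)
        coeffs.append(c)
        residual = residual - c * D[:, k]
    return indices, coeffs

# Toy example: y is exactly 2 * column 0 of an orthonormal dictionary,
# so a single atom suffices.
D = np.eye(4)
idx, cs = matching_pursuit(D, np.array([2.0, 0.0, 0.0, 0.0]))
```

    In the wavepacket context, the "dictionary" columns would play the role of candidate basis functions and y the wave function to be compressed.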

  5. Polarized atomic orbitals for self-consistent field electronic structure calculations

    NASA Astrophysics Data System (ADS)

    Lee, Michael S.; Head-Gordon, Martin

    1997-12-01

    We present a new self-consistent field approach which, given a large "secondary" basis set of atomic orbitals, variationally optimizes molecular orbitals in terms of a small "primary" basis set of distorted atomic orbitals, which are simultaneously optimized. If the primary basis is taken as a minimal basis, the resulting functions are termed polarized atomic orbitals (PAO's) because they are valence (or core) atomic orbitals which have distorted or polarized in an optimal way for their molecular environment. The PAO's derive their flexibility from the fact that they are formed from atom-centered linear-combinations of the larger set of secondary atomic orbitals. The variational conditions satisfied by PAO's are defined, and an iterative method for performing a PAO-SCF calculation is introduced. We compare the PAO-SCF approach against full SCF calculations for the energies, dipoles, and molecular geometries of various molecules. The PAO's are potentially useful for studying large systems that are currently intractable with larger than minimal basis sets, as well as offering potential interpretative benefits relative to calculations in extended basis sets.

  6. Optimization of selected molecular orbitals in group basis sets.

    PubMed

    Ferenczy, György G; Adams, William H

    2009-04-07

    We derive a local basis equation which may be used to determine the orbitals of a group of electrons in a system when the orbitals of that group are represented by a group basis set, i.e., not the basis set one would normally use but a subset suited to a specific electronic group. The group orbitals determined by the local basis equation minimize the energy of a system when a group basis set is used and the orbitals of other groups are frozen. In contrast, under the constraint of a group basis set, the group orbitals satisfying the Huzinaga equation do not minimize the energy. In a test of the local basis equation on HCl, the group basis set included only 12 of the 21 functions in a basis set one might ordinarily use, but the calculated active orbital energies were within 0.001 hartree of the values obtained by solving the Hartree-Fock-Roothaan (HFR) equation using all 21 basis functions. The total energy found was just 0.003 hartree higher than the HFR value. The errors with the group basis set approximation to the Huzinaga equation were larger by over two orders of magnitude. Similar results were obtained for PCl3 with the group basis approximation. Retaining more basis functions allows an even higher accuracy, as shown by the perfect reproduction of the HFR energy of HCl with 16 out of 21 basis functions in the valence basis set. When the core basis set was also truncated, no additional error was introduced in the calculations performed for HCl with various basis sets. The same calculations with fixed core orbitals taken from isolated heavy atoms added a small error of about 10^(-4) hartree. This offers a practical way to calculate wave functions with predetermined fixed core and reduced valence basis orbitals at reduced computational costs. The local basis equation can also be used to combine the above approximations with the assignment of local basis sets to groups of localized valence molecular orbitals and to derive a priori localized orbitals.
    An appropriately chosen localization and basis set assignment allowed a reproduction of the energy of n-hexane with an error of 10^(-5) hartree, while the energy difference between its two conformers was reproduced with similar accuracy for several combinations of localizations and basis set assignments. These calculations include localized orbitals extending over 4-5 heavy atoms and thus require the solution of reduced-dimension secular equations. The dimensions are not expected to grow with increasing system size, and thus the local basis equation may find use in linear scaling electronic structure calculations.

  7. SH^c realization of minimal model CFT: triality, poset and Burge condition

    NASA Astrophysics Data System (ADS)

    Fukuda, M.; Nakamura, S.; Matsuo, Y.; Zhu, R.-D.

    2015-11-01

    Recently an orthogonal basis of the W_N-algebra (the AFLT basis) labeled by N-tuple Young diagrams was found in the context of the 4D/2D duality. Recursion relations among the basis elements are summarized in the form of an algebra SH^c which is universal for any N. We show that it has an S_3 automorphism which is referred to as triality. We study the level-rank duality between minimal models, which is a special example of the automorphism. It is shown that the nonvanishing states in both systems are described by N or M Young diagrams with the rows of boxes appropriately shuffled. The reshuffling of rows implies that there exists a partial ordering of the set which labels them. For the simplest example, one can compute the partition functions for the partially ordered set (poset) explicitly, which reproduces the Rogers-Ramanujan identities. We also study the description of minimal models by SH^c. Simple analysis reproduces some known properties of minimal models: the structure of singular vectors and the N-Burge condition in the Hilbert space.
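    The Rogers-Ramanujan identities mentioned in the abstract can be checked numerically by comparing truncated q-series. The sketch below (an illustration, not taken from the paper) verifies the first identity, sum_n q^(n^2)/((1-q)...(1-q^n)) = prod_n 1/((1-q^(5n+1))(1-q^(5n+4))), through order q^29:

```python
N = 30  # work with power series truncated at q^N

def poly_mul(a, b):
    """Multiply two truncated power series (coefficient lists mod q^N)."""
    out = [0] * N
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                if i + j < N:
                    out[i + j] += ai * bj
    return out

def inv_one_minus_qk(k):
    """Series 1/(1 - q^k) = 1 + q^k + q^(2k) + ..., truncated at q^N."""
    s = [0] * N
    for m in range(0, N, k):
        s[m] = 1
    return s

# Left side: sum over n of q^(n^2) / ((1-q)(1-q^2)...(1-q^n))
lhs = [0] * N
n = 0
while n * n < N:
    term = [0] * N
    term[n * n] = 1
    for k in range(1, n + 1):
        term = poly_mul(term, inv_one_minus_qk(k))
    lhs = [x + y for x, y in zip(lhs, term)]
    n += 1

# Right side: product over parts congruent to 1 or 4 mod 5
rhs = [1] + [0] * (N - 1)
for k in range(1, N):
    if k % 5 in (1, 4):
        rhs = poly_mul(rhs, inv_one_minus_qk(k))

print(lhs == rhs)
```

    The left side counts partitions whose parts differ by at least 2; the right side counts partitions into parts congruent to 1 or 4 mod 5.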

  8. Spectral properties of minimal-basis-set orbitals: Implications for molecular electronic continuum states

    NASA Astrophysics Data System (ADS)

    Langhoff, P. W.; Winstead, C. L.

    Early studies of the electronically excited states of molecules by John A. Pople and coworkers employing ab initio single-excitation configuration interaction (SECI) calculations helped to stimulate related applications of these methods to the partial-channel photoionization cross sections of polyatomic molecules. The Gaussian representations of molecular orbitals adopted by Pople and coworkers can describe SECI continuum states when sufficiently large basis sets are employed. Minimal-basis virtual Fock orbitals stabilized in the continuous portions of such SECI spectra are generally associated with strong photoionization resonances. The spectral attributes of these resonance orbitals are illustrated here by revisiting previously reported experimental and theoretical studies of molecular formaldehyde (H2CO) in combination with recently calculated continuum orbital amplitudes.

  9. On the effects of basis set truncation and electron correlation in conformers of 2-hydroxy-acetamide

    NASA Astrophysics Data System (ADS)

    Szarecka, A.; Day, G.; Grout, P. J.; Wilson, S.

    Ab initio quantum chemical calculations have been used to study the differences in energy between two gas phase conformers of the 2-hydroxy-acetamide molecule that possess intramolecular hydrogen bonding. In particular, rotation around the central C-C bond has been considered as a factor determining the structure of the hydrogen bond and stabilization of the conformer. Energy calculations include full geometry optimization using both the restricted matrix Hartree-Fock model and second-order many-body perturbation theory with a number of commonly used basis sets. The basis sets employed ranged from the minimal STO-3G set to 'split-valence' sets up to 6-31G. The effects of polarization functions were also studied. The results display a strong basis set dependence.

  10. Quantum-chemical study of model chemisorption structures on copper-containing catalysts. Communication 1. Ab-initio calculations of CuCO and CuCO^+

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kuzminskii, M.B.; Bagator'yants, A.A.; Kazanskii, V.B.

    1986-08-01

    The authors perform ab-initio calculations, by the SCF MO LCAO method, of the electronic and geometric structure of the systems CuCO^(n+) (n = 0, 1) and potential curves of CO, depending on the charge state of the copper, with variation of all geometric parameters. The calculations of open-shell electronic states were performed by the unrestricted SCF method in a minimal basis set (I, STO-3G for the C and O, and MINI-1' for the Cu) and in a valence two-exponential basis set (II, MIDI-1 for the C and O, and MIDI'2' for the Cu). The principal results from the calculation in the more flexible basis II are presented and the agreement between the results obtained in the minimal basis I and these data is then analyzed qualitatively.

  11. Fragment approach to constrained density functional theory calculations using Daubechies wavelets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ratcliff, Laura E.; Genovese, Luigi; Mohr, Stephan

    2015-06-21

    In a recent paper, we presented a linear scaling Kohn-Sham density functional theory (DFT) code based on Daubechies wavelets, where a minimal set of localized support functions are optimized in situ and therefore adapted to the chemical properties of the molecular system. Thanks to the systematically controllable accuracy of the underlying basis set, this approach is able to provide an optimal contracted basis for a given system: accuracies for ground state energies and atomic forces are of the same quality as an uncontracted, cubic scaling approach. This basis set offers, by construction, a natural subset where the density matrix of the system can be projected. In this paper, we demonstrate the flexibility of this minimal basis formalism in providing a basis set that can be reused as-is, i.e., without reoptimization, for charge-constrained DFT calculations within a fragment approach. Support functions, represented in the underlying wavelet grid, of the template fragments are roto-translated with high numerical precision to the required positions and used as projectors for the charge weight function. We demonstrate the interest of this approach to express highly precise and efficient calculations for preparing diabatic states and for the computational setup of systems in complex environments.

  12. A minimization method on the basis of embedding the feasible set and the epigraph

    NASA Astrophysics Data System (ADS)

    Zabotin, I. Ya; Shulgina, O. N.; Yarullin, R. S.

    2016-11-01

    We propose a conditional minimization method for convex nonsmooth functions which belongs to the class of cutting-plane methods. During the construction of iteration points, the feasible set and the epigraph of the objective function are approximated by polyhedral sets; the auxiliary problems that yield the iteration points are therefore linear programming problems. In the course of optimization, the sets approximating the epigraph can be updated by periodically dropping the cutting planes that form the embedding sets. Convergence of the proposed method is proved, and some realizations of the method are discussed.
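    Cutting-plane schemes of this family can be sketched on a toy problem. The code below is a minimal, hypothetical illustration of a Kelley-style method for a one-dimensional convex function: the epigraph is approximated by cuts t >= f(x_k) + g_k (x - x_k), and each auxiliary linear program is solved here by enumerating vertices of the piecewise-linear model (a simplification, not the authors' realization):

```python
import numpy as np

def kelley(f, subgrad, a, b, x0, iters=20):
    """Kelley-style cutting-plane minimization of a 1-D convex f on [a, b].
    Each iteration adds the cut t >= f(x_k) + g_k * (x - x_k) and
    minimizes the resulting piecewise-linear lower model."""
    cuts = []          # each cut stored as (slope, intercept)
    x = x0
    for _ in range(iters):
        g = float(subgrad(x))
        cuts.append((g, f(x) - g * x))
        model = lambda z: max(s * z + c for s, c in cuts)
        # Candidate minimizers: interval endpoints and cut intersections.
        cands = [a, b]
        for i in range(len(cuts)):
            for j in range(i + 1, len(cuts)):
                (g1, c1), (g2, c2) = cuts[i], cuts[j]
                if g1 != g2:
                    z = (c2 - c1) / (g1 - g2)
                    if a <= z <= b:
                        cands.append(z)
        x = min(cands, key=model)   # next iterate minimizes the model
    return x

# Nonsmooth test function f(x) = |x| on [-2, 3]; the kink at 0 is found.
x_star = kelley(abs, np.sign, a=-2.0, b=3.0, x0=2.0)
```

    In higher dimensions the model minimization becomes a genuine linear program, which is where the polyhedral approximations of the abstract come in.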

  13. A geometrical correction for the inter- and intra-molecular basis set superposition error in Hartree-Fock and density functional theory calculations for large systems

    NASA Astrophysics Data System (ADS)

    Kruse, Holger; Grimme, Stefan

    2012-04-01

    A semi-empirical counterpoise-type correction for basis set superposition error (BSSE) in molecular systems is presented. An atom pair-wise potential corrects for the inter- and intra-molecular BSSE in supermolecular Hartree-Fock (HF) or density functional theory (DFT) calculations. This scheme, denoted geometrical counterpoise (gCP), depends only on the molecular geometry, i.e., no input from the electronic wave function is required, and it is hence applicable to molecules with tens of thousands of atoms. The four necessary parameters have been determined by a fit to standard Boys and Bernardi counterpoise corrections for Hobza's S66×8 set of non-covalently bound complexes (528 data points). The method targets small basis sets (e.g., minimal, split-valence, 6-31G*), but reliable results are also obtained for larger triple-ζ sets. The intermolecular BSSE is calculated by gCP within a typical error of 10%-30%, which proves sufficient in many practical applications. The approach is suggested as a quantitative correction in production work and can also be routinely applied to estimate the magnitude of the BSSE beforehand. Its applicability to biomolecules as the primary target is tested for the crambin protein, where gCP removes intramolecular BSSE effectively and yields conformational energies comparable to def2-TZVP basis results. Good mutual agreement is also found with Jensen's ACP(4) scheme in estimating the intramolecular BSSE of the phenylalanine-glycine-phenylalanine tripeptide, for which a relaxed rotational energy profile is also presented. A variety of minimal and double-ζ basis sets combined with gCP and the dispersion corrections DFT-D3 and DFT-NL are successfully benchmarked on the S22 and S66 sets of non-covalent interactions. Outstanding performance with a mean absolute deviation (MAD) of 0.51 kcal/mol (0.38 kcal/mol after D3-refit) is obtained at the gCP-corrected HF-D3/(minimal basis) level for the S66 benchmark.
    The gCP-corrected B3LYP-D3/6-31G* model chemistry yields MAD = 0.68 kcal/mol, a large improvement over plain B3LYP/6-31G* (MAD = 2.3 kcal/mol). Application of gCP-corrected B97-D3 and HF-D3 to a set of large protein-ligand complexes proves the robustness of the method. Analytical gCP gradients make optimizations of large systems feasible with small basis sets, as demonstrated for the inter-ring distances of 9-helicene and most of the complexes in Hobza's S22 test set. The method is implemented in a freely available FORTRAN program obtainable from the authors' website.
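    The atom-pairwise, geometry-only character of such a correction can be sketched schematically. All parameters, per-element "missing basis" energies, and the decay function below are invented for illustration; the published gCP scheme uses a specific fitted functional form not reproduced here:

```python
import math

# Hypothetical parameters for illustration only: a global scaling, a
# decay exponent, and per-element basis-incompleteness energies.
SIGMA, ALPHA = 0.2, 0.8
E_MISS = {"H": 0.01, "O": 0.05}

def e_gcp(atoms):
    """Toy atom-pairwise BSSE correction of the gCP type: each atom's
    basis-incompleteness energy is weighted by a function that decays
    with its distance to every partner atom, so the correction vanishes
    for well-separated fragments."""
    e = 0.0
    for i, (el_i, ri) in enumerate(atoms):
        for j, (el_j, rj) in enumerate(atoms):
            if i == j:
                continue
            r = math.dist(ri, rj)
            e += E_MISS[el_i] * math.exp(-ALPHA * r ** 2)
    return SIGMA * e

# Schematic water-dimer-like geometry (angstrom), not a real structure.
atoms = [("O", (0.0, 0.0, 0.0)), ("H", (0.96, 0.0, 0.0)),
         ("O", (2.9, 0.0, 0.0)), ("H", (3.4, 0.8, 0.0))]
correction = e_gcp(atoms)
```

    The key design point visible even in this toy is that only the geometry and element types enter, never the wave function.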

  14. Algebraic Approaches for Scalable End-to-End Monitoring and Diagnosis

    NASA Astrophysics Data System (ADS)

    Zhao, Yao; Chen, Yan

    The rigidity of the Internet architecture has led to a flourishing of research on end-to-end systems. In this chapter, we describe a linear-algebra-based end-to-end monitoring and diagnosis system. We first propose a tomography-based overlay monitoring system (TOM). Given n end hosts, TOM selectively monitors a basis set of O(n log n) paths out of all n(n - 1) end-to-end paths. Any end-to-end path can be written as a unique linear combination of paths in the basis set. Consequently, by monitoring loss rates for the paths in the basis set, TOM infers loss rates for all end-to-end paths. Furthermore, leveraging the scalable measurements from the TOM system, we propose the Least-biased End-to-End Network Diagnosis (in short, LEND) system. We define a minimal identifiable link sequence (MILS) as a link sequence of minimal length whose properties can be uniquely identified from end-to-end measurements. LEND applies an algebraic approach to find the MILSes and infers their properties efficiently. The LEND system thus achieves the finest diagnosis granularity under the least biased statistical assumptions.
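    The linear-algebraic idea behind TOM can be illustrated in a few lines: in log space, a path's loss rate is linear in the per-link log transmission rates, so measuring a spanning basis of paths determines every other path. A toy sketch (two links, three paths; not the TOM implementation):

```python
import numpy as np

# Toy overlay: two links, three end-to-end paths.  Row p of the routing
# matrix G marks which links path p traverses.
G = np.array([[1.0, 0.0],    # path A->B uses link 0
              [0.0, 1.0],    # path B->C uses link 1
              [1.0, 1.0]])   # path A->C through B uses both links

link_loss = np.array([0.02, 0.05])   # ground-truth per-link loss rates
x = np.log(1.0 - link_loss)          # log transmission rate per link
# In log space path loss is additive: b = G @ x.

basis = [0, 1]                       # rows spanning G's row space
b_meas = G[basis] @ x                # loss "measured" only on basis paths

# Express the unmonitored path as a linear combination of basis paths,
# then infer its loss rate without ever measuring it directly.
target = G[2]
c, *_ = np.linalg.lstsq(G[basis].T, target, rcond=None)
b_inferred = c @ b_meas
loss_inferred = 1.0 - np.exp(b_inferred)
```

    With n hosts the same construction applies to the full n(n - 1) x (number of links) routing matrix, whose row rank is what bounds the number of paths that must actually be monitored.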

  15. Basis set limit and systematic errors in local-orbital based all-electron DFT

    NASA Astrophysics Data System (ADS)

    Blum, Volker; Behler, Jörg; Gehrke, Ralf; Reuter, Karsten; Scheffler, Matthias

    2006-03-01

    With the advent of efficient integration schemes,^1,2 numeric atom-centered orbitals (NAO's) are an attractive basis choice in practical density functional theory (DFT) calculations of nanostructured systems (surfaces, clusters, molecules). Though all-electron, practical implementations promise efficiency on par with the best plane-wave pseudopotential codes, while offering noticeably higher accuracy when required: minimal-sized effective tight-binding like calculations and chemically accurate all-electron calculations are both possible within the same framework; non-periodic and periodic systems can be treated on equal footing; and the localized nature of the basis allows in principle for O(N)-like scaling. However, converging an observable with respect to the basis set is less straightforward than with competing systematic basis choices (e.g., plane waves). We here investigate the basis set limit of optimized NAO basis sets in all-electron calculations, using as examples small molecules and clusters (N2, Cu2, Cu4, Cu10). meV-level total energy convergence is possible using ≤50 basis functions per atom in all cases. We also find a clear correlation between the errors arising from underconverged basis sets and the system geometry (interatomic distance). ^1 B. Delley, J. Chem. Phys. 92, 508 (1990), ^2 J.M. Soler et al., J. Phys.: Condens. Matter 14, 2745 (2002).

  16. Inelastic scattering with Chebyshev polynomials and preconditioned conjugate gradient minimization.

    PubMed

    Temel, Burcin; Mills, Greg; Metiu, Horia

    2008-03-27

    We describe and test an implementation, using a basis set of Chebyshev polynomials, of a variational method for solving scattering problems in quantum mechanics. This minimum error method (MEM) determines the wave function Psi by minimizing the least-squares error in the function (H Psi - E Psi), where E is the desired scattering energy. We compare the MEM to an alternative, the Kohn variational principle (KVP), by solving the Secrest-Johnson model of two-dimensional inelastic scattering, which has been studied previously using the KVP and for which other numerical solutions are available. We use a conjugate gradient (CG) method to minimize the error, and by preconditioning the CG search, we are able to greatly reduce the number of iterations necessary; the method is thus faster and more stable than a matrix inversion, as is required in the KVP. Also, we avoid errors due to scattering off of the boundaries, which presents substantial problems for other methods, by matching the wave function in the interaction region to the correct asymptotic states at the specified energy; the use of Chebyshev polynomials allows this boundary condition to be implemented accurately. The use of Chebyshev polynomials allows for a rapid and accurate evaluation of the kinetic energy. This basis set is as efficient as plane waves but does not impose an artificial periodicity on the system. There are problems in surface science and molecular electronics which cannot be solved if periodicity is imposed, and the Chebyshev basis set is a good alternative in such situations.
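    The claimed advantage of a Chebyshev basis, spectrally accurate derivatives on a non-periodic interval, can be illustrated with NumPy's Chebyshev utilities (a toy accuracy check, unrelated to the MEM code itself):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Spectral accuracy of a Chebyshev basis for the kinetic-energy operator
# (-1/2 d^2/dx^2, atomic units) on the non-periodic interval [-1, 1].
x = np.cos(np.pi * np.arange(33) / 32)   # Chebyshev-Gauss-Lobatto nodes
psi = np.exp(-4 * x ** 2)                # a localized trial function

cheb = C.Chebyshev.fit(x, psi, deg=32)   # expand psi in T_n(x)
kin = -0.5 * cheb.deriv(2)               # apply -1/2 d^2/dx^2 term-wise

# Analytic result: psi'' = (64 x^2 - 8) exp(-4 x^2)
exact = -0.5 * (64 * x ** 2 - 8) * np.exp(-4 * x ** 2)
err = np.max(np.abs(kin(x) - exact))
```

    Because the expansion coefficients of a smooth function decay geometrically, the differentiated series remains accurate to many digits with a modest number of basis functions, and no periodicity is assumed anywhere.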

  17. Simplified DFT methods for consistent structures and energies of large systems

    NASA Astrophysics Data System (ADS)

    Caldeweyher, Eike; Gerit Brandenburg, Jan

    2018-05-01

    Kohn–Sham density functional theory (DFT) is routinely used for the fast electronic structure computation of large systems and will most likely continue to be the method of choice for the generation of reliable geometries in the foreseeable future. Here, we present a hierarchy of simplified DFT methods designed for consistent structures and non-covalent interactions of large systems, with particular focus on molecular crystals. The covered methods are a minimal basis set Hartree–Fock method (HF-3c), a small basis set screened exchange hybrid functional (HSE-3c), and a generalized gradient approximated functional evaluated in a medium-sized basis set (B97-3c), all augmented with semi-classical correction potentials. We give an overview of the methods' design, a comprehensive evaluation on established benchmark sets for geometries and lattice energies of molecular crystals, and highlight some realistic applications on large organic crystals with several hundreds of atoms in the primitive unit cell.

  18. On the Usage of Locally Dense Basis Sets in the Calculation of NMR Indirect Nuclear Spin-Spin Coupling Constants

    NASA Astrophysics Data System (ADS)

    Sanchez, Marina; Provasi, Patricio F.; Aucar, Gustavo A.; Sauer, Stephan P. A.

    Locally dense basis sets (

  19. Construction of a minimal genome as a chassis for synthetic biology.

    PubMed

    Sung, Bong Hyun; Choe, Donghui; Kim, Sun Chang; Cho, Byung-Kwan

    2016-11-30

    Microbial diversity and complexity pose challenges in understanding the voluminous genetic information produced from whole-genome sequences, bioinformatics and high-throughput '-omics' research. These challenges can be overcome by a core blueprint of a genome drawn with a minimal gene set, which is essential for life. Systems biology and large-scale gene inactivation studies have estimated the number of essential genes to be ∼300-500 in many microbial genomes. On the basis of the essential gene set information, minimal-genome strains have been generated using sophisticated genome engineering techniques, such as genome reduction and chemical genome synthesis. Current size-reduced genomes are not perfect minimal genomes, but chemically synthesized genomes have just been constructed. Some minimal genomes provide various desirable functions for bioindustry, such as improved genome stability, increased transformation efficacy and improved production of biomaterials. The minimal genome as a chassis genome for synthetic biology can be used to construct custom-designed genomes for various practical and industrial applications. © 2016 The Author(s). Published by Portland Press Limited on behalf of the Biochemical Society.

  20. Dispersion corrected Hartree-Fock and density functional theory for organic crystal structure prediction.

    PubMed

    Brandenburg, Jan Gerit; Grimme, Stefan

    2014-01-01

    We present and evaluate dispersion corrected Hartree-Fock (HF) and density functional theory (DFT) based quantum chemical methods for organic crystal structure prediction. The necessity of correcting for missing long-range electron correlation, also known as van der Waals (vdW) interaction, is pointed out, and some methodological issues such as the inclusion of three-body dispersion terms are discussed. One of the most efficient and widely used methods is the semi-classical dispersion correction D3. Its applicability for the calculation of sublimation energies is investigated for the benchmark set X23, consisting of 23 small organic crystals. For PBE-D3 the mean absolute deviation (MAD) is below the estimated experimental uncertainty of 1.3 kcal/mol. For two larger π-systems, the equilibrium crystal geometry is investigated and very good agreement with experimental data is found. Since these calculations are carried out with huge plane-wave basis sets, they are rather time-consuming and routinely applicable only to systems with less than about 200 atoms in the unit cell. Aiming at crystal structure prediction, which involves the screening of many structures, a pre-sorting with faster methods is mandatory. Small, atom-centered basis sets can speed up the computation significantly, but they suffer greatly from basis set errors. We present the recently developed geometrical counterpoise correction gCP. It is a fast semi-empirical method which corrects for most of the inter- and intramolecular basis set superposition error. For HF calculations with nearly minimal basis sets, we additionally correct for short-range basis incompleteness. We combine all three terms in a scheme denoted HF-3c, which performs very well for the X23 sublimation energies with an MAD of only 1.5 kcal/mol, close to the huge-basis-set DFT-D3 result.
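    For reference, the standard Boys-Bernardi counterpoise correction that gCP approximates evaluates every fragment in the full dimer basis, so that the artificial stabilization from "borrowing" the partner's basis functions cancels. A worked toy example with invented energies (hartree):

```python
def counterpoise_interaction(e_dimer_ab, e_monoA_ab, e_monoB_ab):
    """Boys-Bernardi counterpoise-corrected interaction energy: all
    three energies are evaluated in the same dimer (AB) basis, the
    monomers being computed with the partner's ghost functions."""
    return e_dimer_ab - e_monoA_ab - e_monoB_ab

# Illustrative numbers only, not from any actual calculation:
e_dimer = -152.0612   # dimer in the dimer basis
e_monoA = -76.0271    # monomer A with ghost functions of B
e_monoB = -76.0263    # monomer B with ghost functions of A

e_int = counterpoise_interaction(e_dimer, e_monoA, e_monoB)
print(f"{e_int * 627.509:.2f} kcal/mol")   # hartree -> kcal/mol
```

    The BSSE itself is the difference between the monomer energies in their own bases and in the dimer basis; gCP estimates exactly this quantity from geometry alone.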

  1. Conformational analysis of cellobiose by electronic structure theories

    USDA-ARS?s Scientific Manuscript database

    Adiabatic phi/psi maps for cellobiose were prepared with B3LYP density functional theory. A mixed basis set was used for minimization, followed by 6-31+G(d) single-point calculations, with and without SMD continuum solvation. Different arrangements of the exocyclic groups (3 starting geometries) we...

  2. Daubechies wavelets for linear scaling density functional theory.

    PubMed

    Mohr, Stephan; Ratcliff, Laura E; Boulanger, Paul; Genovese, Luigi; Caliste, Damien; Deutsch, Thierry; Goedecker, Stefan

    2014-05-28

    We demonstrate that Daubechies wavelets can be used to construct a minimal set of optimized localized adaptively contracted basis functions in which the Kohn-Sham orbitals can be represented with an arbitrarily high, controllable precision. Ground state energies and the forces acting on the ions can be calculated in this basis with the same accuracy as if they were calculated directly in a Daubechies wavelets basis, provided that the amplitude of these adaptively contracted basis functions is sufficiently small on the surface of the localization region, which is guaranteed by the optimization procedure described in this work. This approach reduces the computational costs of density functional theory calculations, and can be combined with sparse matrix algebra to obtain linear scaling with respect to the number of electrons in the system. Calculations on systems of 10,000 atoms or more thus become feasible in a systematic basis set with moderate computational resources. Further computational savings can be achieved by exploiting the similarity of the adaptively contracted basis functions for closely related environments, e.g., in geometry optimizations or combined calculations of neutral and charged systems.

  3. Useful lower limits to polarization contributions to intermolecular interactions using a minimal basis of localized orthogonal orbitals: theory and analysis of the water dimer.

    PubMed

    Azar, R Julian; Horn, Paul Richard; Sundstrom, Eric Jon; Head-Gordon, Martin

    2013-02-28

    The problem of describing the energy-lowering associated with polarization of interacting molecules is considered in the overlapping regime for self-consistent field wavefunctions. The existing approach of solving for absolutely localized molecular orbital (ALMO) coefficients that are block-diagonal in the fragments is shown based on formal grounds and practical calculations to often overestimate the strength of polarization effects. A new approach using a minimal basis of polarized orthogonal local MOs (polMOs) is developed as an alternative. The polMO basis is minimal in the sense that one polarization function is provided for each unpolarized orbital that is occupied; such an approach is exact in second-order perturbation theory. Based on formal grounds and practical calculations, the polMO approach is shown to underestimate the strength of polarization effects. In contrast to the ALMO method, however, the polMO approach yields results that are very stable to improvements in the underlying AO basis expansion. Combining the ALMO and polMO approaches allows an estimate of the range of energy-lowering due to polarization. Extensive numerical calculations on the water dimer using a large range of basis sets with Hartree-Fock theory and a variety of different density functionals illustrate the key considerations. Results are also presented for the polarization-dominated Na(+)CH4 complex. Implications for energy decomposition analysis of intermolecular interactions are discussed.

  4. An unbiased Hessian representation for Monte Carlo PDFs.

    PubMed

    Carrazza, Stefano; Forte, Stefano; Kassabov, Zahari; Latorre, José Ignacio; Rojo, Juan

    We develop a methodology for the construction of a Hessian representation of Monte Carlo sets of parton distributions, based on the use of a subset of the Monte Carlo PDF replicas as an unbiased linear basis, and of a genetic algorithm for the determination of the optimal basis. We validate the methodology by first showing that it faithfully reproduces a native Monte Carlo PDF set (NNPDF3.0), and then that, when applied to a Hessian PDF set (MMHT14) that has been transformed into a Monte Carlo set, it gives back the starting PDFs with minimal information loss. We then show that, when applied to a large Monte Carlo PDF set obtained as a combination of several underlying sets, the methodology leads to a Hessian representation in terms of a rather smaller set of parameters (MC-H PDFs), thereby providing an alternative implementation of the recently suggested Meta-PDF idea and a Hessian version of the recently suggested PDF compression algorithm (CMC-PDFs). The mc2hessian conversion code is made publicly available together with (through LHAPDF6) Hessian representations of the NNPDF3.0 set and the MC-H PDF set.
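
    The core linear-algebra step behind such a conversion can be sketched as follows (a hypothetical eigenvector construction for illustration; the paper instead selects a replica subset with a genetic algorithm): the eigenvectors of the replica covariance define symmetric Hessian error members that reproduce the Monte Carlo uncertainties exactly:

```python
import numpy as np

# Hypothetical eigenvector construction (not the paper's genetic-algorithm
# basis selection): diagonalize the Monte Carlo covariance and build Hessian
# error members that reproduce it exactly.
rng = np.random.default_rng(0)
n_rep, n_pts = 200, 10
replicas = rng.normal(size=(n_rep, n_pts)) @ rng.normal(size=(n_pts, n_pts))

central = replicas.mean(axis=0)
cov = np.cov(replicas, rowvar=False)       # MC covariance over replicas

lam, vec = np.linalg.eigh(cov)             # eigen-decomposition of the covariance
scale = np.sqrt(np.clip(lam, 0.0, None))
members = central[:, None] + vec * scale   # column k: central + sqrt(lam_k) * v_k

diffs = members - central[:, None]
hessian_cov = diffs @ diffs.T              # reconstructs cov = V diag(lam) V^T
```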

  5. Geminal embedding scheme for optimal atomic basis set construction in correlated calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sorella, S., E-mail: sorella@sissa.it; Devaux, N.; Dagrada, M., E-mail: mario.dagrada@impmc.upmc.fr

    2015-12-28

    We introduce an efficient method to construct optimal and system adaptive basis sets for use in electronic structure and quantum Monte Carlo calculations. The method is based on an embedding scheme in which a reference atom is singled out from its environment, while the entire system (atom and environment) is described by a Slater determinant or its antisymmetrized geminal power (AGP) extension. The embedding procedure described here allows for the systematic and consistent contraction of the primitive basis set into geminal embedded orbitals (GEOs), with a dramatic reduction of the number of variational parameters necessary to represent the many-body wave function, for a chosen target accuracy. Within the variational Monte Carlo method, the Slater or AGP part is determined by a variational minimization of the energy of the whole system in the presence of a flexible and accurate Jastrow factor, representing most of the dynamical electronic correlation. The resulting GEO basis set opens the way for a fully controlled optimization of many-body wave functions in electronic structure calculations of bulk materials, namely, systems containing a large number of electrons and atoms. We present applications on the water molecule, the volume collapse transition in cerium, and the high-pressure liquid hydrogen.

  6. Cut set-based risk and reliability analysis for arbitrarily interconnected networks

    DOEpatents

    Wyss, Gregory D.

    2000-01-01

    Method for computing all-terminal reliability for arbitrarily interconnected networks such as the United States public switched telephone network. The method includes an efficient search algorithm to generate minimal cut sets for nonhierarchical networks directly from the network connectivity diagram. Efficiency of the search algorithm stems in part from its basis on only link failures. The method also includes a novel quantification scheme that likewise reduces computational effort associated with assessing network reliability based on traditional risk importance measures. Vast reductions in computational effort are realized since combinatorial expansion and subsequent Boolean reduction steps are eliminated through analysis of network segmentations using a technique of assuming node failures to occur on only one side of a break in the network, and repeating the technique for all minimal cut sets generated with the search algorithm. The method functions equally well for planar and non-planar networks.
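
    A brute-force sketch of minimal link cut sets for all-terminal connectivity (illustrative only; the patented method uses an efficient search algorithm rather than the exhaustive enumeration below):

```python
from itertools import combinations

# Toy enumeration of minimal link cut sets for all-terminal connectivity.
# Only link (edge) failures are considered, as in the abstract.
def connected(nodes, edges):
    if not nodes:
        return True
    seen, stack = set(), [next(iter(nodes))]
    while stack:
        n = stack.pop()
        if n in seen:
            continue
        seen.add(n)
        stack.extend(v for u, v in edges if u == n)
        stack.extend(u for u, v in edges if v == n)
    return seen == set(nodes)

def minimal_cut_sets(nodes, edges):
    cuts = []
    for k in range(1, len(edges) + 1):          # smallest cuts found first
        for combo in combinations(edges, k):
            remaining = [e for e in edges if e not in combo]
            if not connected(nodes, remaining):
                # keep only minimal cuts: skip supersets of known cuts
                if not any(set(c) <= set(combo) for c in cuts):
                    cuts.append(combo)
    return cuts

# Ring network of four nodes: any pair of link failures disconnects it,
# so there are C(4,2) = 6 minimal cut sets, each of size 2.
nodes = {1, 2, 3, 4}
edges = [(1, 2), (2, 3), (3, 4), (4, 1)]
cuts = minimal_cut_sets(nodes, edges)
```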

  7. Tables Of Gaussian-Type Orbital Basis Functions

    NASA Technical Reports Server (NTRS)

    Partridge, Harry

    1992-01-01

    This NASA technical memorandum contains tables of estimated Hartree-Fock wave functions for the atoms lithium through neon and potassium through krypton. The sets contain optimized Gaussian-type orbital exponents and coefficients and are of near-Hartree-Fock quality. The orbital exponents were optimized by minimizing the restricted Hartree-Fock energy via a scaled Newton-Raphson scheme in which the Hessian is evaluated numerically by use of analytically determined gradients.

  8. Extended polarization in third-order SCC-DFTB from chemical-potential equalization

    PubMed Central

    Kaminski, Steve; Giese, Timothy J.; Gaus, Michael; York, Darrin M.; Elstner, Marcus

    2012-01-01

    In this work we augment the approximate density functional method SCC-DFTB (DFTB3) with the chemical-potential equalization (CPE) approach in order to improve its performance for molecular electronic polarizabilities. The CPE method, originally implemented for NDDO-type methods by Giese and York, has been shown to significantly improve the response properties of minimal-basis methods, and has recently been applied to SCC-DFTB. CPE overcomes this inherent limitation of minimal-basis methods by supplying an additional response density. The systematic underestimation is thereby corrected quantitatively without the need to extend the atomic orbital basis, i.e., without significantly increasing the overall computational cost. In particular, the dependence of the polarizability on the molecular charge state is significantly improved by the CPE extension of DFTB3. The empirical parameters introduced by the CPE approach were optimized for 172 organic molecules in order to match the results from density functional theory (DFT) methods using large basis sets. However, the first-order derivatives of molecular polarizabilities, as required, e.g., to compute Raman activities, are not improved by the current CPE implementation, i.e., Raman spectra are not improved. PMID:22894819
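
    The chemical-potential-equalization idea can be sketched as a constrained quadratic minimization over atomic charges (all parameters below are invented for illustration and are not DFTB3/CPE values):

```python
import numpy as np

# Toy charge-equilibration sketch in the spirit of CPE-type response models.
# chi and J are hypothetical electronegativity/hardness parameters, not
# fitted DFTB3/CPE values. Minimize E = chi.q + 0.5 q.J.q subject to sum(q)=Q.
chi = np.array([0.3, 0.5, 0.2])
J = np.diag([1.2, 1.0, 1.4])
n = len(chi)

# Stationarity of the Lagrangian chi.q + 0.5 q.J.q + lam*(sum(q) - Q)
# gives a linear system in (q, lam).
A = np.block([[J, np.ones((n, 1))],
              [np.ones((1, n)), np.zeros((1, 1))]])
b = np.concatenate([-chi, [0.0]])   # total charge Q = 0 (neutral molecule)
sol = np.linalg.solve(A, b)
q, lam = sol[:n], sol[n]            # equilibrated charges, chemical potential
```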

  9. RAMP: A fault tolerant distributed microcomputer structure for aircraft navigation and control

    NASA Technical Reports Server (NTRS)

    Dunn, W. R.

    1980-01-01

    RAMP consists of distributed sets of parallel computers partitioned on the basis of software and packaging constraints. To minimize hardware and software complexity, the processors operate asynchronously. It was shown that through the design of asymptotically stable control laws, data errors due to the asynchronism were minimized. It was further shown that by designing control laws with this property and making minor hardware modifications to the RAMP modules, the system became inherently tolerant to intermittent faults. A laboratory version of RAMP was constructed and is described in the paper along with the experimental results.

  10. On the nullspace of TLS multi-station adjustment

    NASA Astrophysics Data System (ADS)

    Sterle, Oskar; Kogoj, Dušan; Stopar, Bojan; Kregar, Klemen

    2018-07-01

    In this article we present an analytic treatment of TLS multi-station least-squares adjustment, with the main focus on the datum problem. In contrast to previously published research, the datum problem is theoretically analyzed and solved, with the solution based on the nullspace derivation of the mathematical model. The importance of the datum problem solution lies in a complete description of the TLS multi-station adjustment solutions as the set of all minimally constrained least-squares solutions. On the basis of the known nullspace, the estimable parameters are described and the geometric interpretation of all minimally constrained least-squares solutions is presented. Finally, a simulated example is used to analyze the results of TLS multi-station minimally constrained and inner-constrained least-squares adjustment solutions.
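
    The nullspace analysis can be illustrated with a generic rank-deficient least-squares problem (a numerical sketch, not the TLS-specific model): the SVD exposes the datum defect, and the pseudoinverse selects the minimum-norm, inner-constrained member of the family of minimally constrained solutions:

```python
import numpy as np

# Generic sketch: a rank-deficient design matrix A has a nullspace N, so the
# least-squares solution is only defined up to x_min + N t. The pseudoinverse
# picks the minimum-norm ("inner constrained") member; all members share the
# same residuals.
rng = np.random.default_rng(1)
A = rng.normal(size=(8, 4))
A[:, 3] = A[:, 0] + A[:, 1]          # force a one-dimensional nullspace
y = rng.normal(size=8)

U, s, Vt = np.linalg.svd(A)
tol = s.max() * max(A.shape) * np.finfo(float).eps
rank = int((s > tol).sum())
N = Vt[rank:].T                      # orthonormal nullspace basis

x_min = np.linalg.pinv(A) @ y        # minimum-norm least-squares solution
x_alt = x_min + N @ np.array([2.5])  # another member: identical fitted values
```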

  11. Complexity Reduction in Large Quantum Systems: Fragment Identification and Population Analysis via a Local Optimized Minimal Basis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mohr, Stephan; Masella, Michel; Ratcliff, Laura E.

    We present, within Kohn-Sham Density Functional Theory calculations, a quantitative method to identify and assess the partitioning of a large quantum mechanical system into fragments. We then introduce a simple and efficient formalism (which can be written as a generalization of other well-known population analyses) to extract, from first principles, electrostatic multipoles for these fragments. The corresponding fragment multipoles can in this way be seen as reliable (pseudo-) observables. By applying our formalism within the code BigDFT, we show that the use of a minimal set of in-situ optimized basis functions is of utmost importance for having at the same time a proper fragment definition and an accurate description of the electronic structure. With this approach it becomes possible to simplify the modeling of environmental fragments by a set of multipoles, without notable loss of precision in the description of the active quantum mechanical region. Furthermore, this leads to a considerable reduction of the degrees of freedom by an effective coarse-graining approach, eventually also paving the way towards efficient QM/QM and QM/MM methods coupling together different levels of accuracy.

  12. Complexity Reduction in Large Quantum Systems: Fragment Identification and Population Analysis via a Local Optimized Minimal Basis

    DOE PAGES

    Mohr, Stephan; Masella, Michel; Ratcliff, Laura E.; ...

    2017-07-21

    We present, within Kohn-Sham Density Functional Theory calculations, a quantitative method to identify and assess the partitioning of a large quantum mechanical system into fragments. We then introduce a simple and efficient formalism (which can be written as a generalization of other well-known population analyses) to extract, from first principles, electrostatic multipoles for these fragments. The corresponding fragment multipoles can in this way be seen as reliable (pseudo-) observables. By applying our formalism within the code BigDFT, we show that the use of a minimal set of in-situ optimized basis functions is of utmost importance for having at the same time a proper fragment definition and an accurate description of the electronic structure. With this approach it becomes possible to simplify the modeling of environmental fragments by a set of multipoles, without notable loss of precision in the description of the active quantum mechanical region. Furthermore, this leads to a considerable reduction of the degrees of freedom by an effective coarse-graining approach, eventually also paving the way towards efficient QM/QM and QM/MM methods coupling together different levels of accuracy.

  13. Improving intermolecular interactions in DFTB3 using extended polarization from chemical-potential equalization

    PubMed Central

    Christensen, Anders S.; Elstner, Marcus; Cui, Qiang

    2015-01-01

    Semi-empirical quantum mechanical methods traditionally expand the electron density in a minimal, valence-only electron basis set. The minimal-basis approximation causes molecular polarization to be underestimated, and hence intermolecular interaction energies are also underestimated, especially for intermolecular interactions involving charged species. In this work, the third-order self-consistent charge density functional tight-binding method (DFTB3) is augmented with an auxiliary response density using the chemical-potential equalization (CPE) method and an empirical dispersion correction (D3). The parameters in the CPE and D3 models are fitted to high-level CCSD(T) reference interaction energies for a broad range of chemical species, as well as dipole moments calculated at the DFT level; the impact of including polarizabilities of molecules in the parameterization is also considered. Parameters for the elements H, C, N, O, and S are presented. The Root Mean Square Deviation (RMSD) interaction energy is improved from 6.07 kcal/mol to 1.49 kcal/mol for interactions with one charged species, whereas the RMSD is improved from 5.60 kcal/mol to 1.73 kcal/mol for a set of 9 salt bridges, compared to uncorrected DFTB3. For large water clusters and complexes that are dominated by dispersion interactions, the already satisfactory performance of the DFTB3-D3 model is retained; polarizabilities of neutral molecules are also notably improved. Overall, the CPE extension of DFTB3-D3 provides a more balanced description of different types of non-covalent interactions than Neglect of Diatomic Differential Overlap type of semi-empirical methods (e.g., PM6-D3H4) and PBE-D3 with modest basis sets. PMID:26328834

  14. Improving intermolecular interactions in DFTB3 using extended polarization from chemical-potential equalization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Christensen, Anders S., E-mail: andersx@chem.wisc.edu, E-mail: cui@chem.wisc.edu; Cui, Qiang, E-mail: andersx@chem.wisc.edu, E-mail: cui@chem.wisc.edu; Elstner, Marcus

    Semi-empirical quantum mechanical methods traditionally expand the electron density in a minimal, valence-only electron basis set. The minimal-basis approximation causes molecular polarization to be underestimated, and hence intermolecular interaction energies are also underestimated, especially for intermolecular interactions involving charged species. In this work, the third-order self-consistent charge density functional tight-binding method (DFTB3) is augmented with an auxiliary response density using the chemical-potential equalization (CPE) method and an empirical dispersion correction (D3). The parameters in the CPE and D3 models are fitted to high-level CCSD(T) reference interaction energies for a broad range of chemical species, as well as dipole moments calculated at the DFT level; the impact of including polarizabilities of molecules in the parameterization is also considered. Parameters for the elements H, C, N, O, and S are presented. The Root Mean Square Deviation (RMSD) interaction energy is improved from 6.07 kcal/mol to 1.49 kcal/mol for interactions with one charged species, whereas the RMSD is improved from 5.60 kcal/mol to 1.73 kcal/mol for a set of 9 salt bridges, compared to uncorrected DFTB3. For large water clusters and complexes that are dominated by dispersion interactions, the already satisfactory performance of the DFTB3-D3 model is retained; polarizabilities of neutral molecules are also notably improved. Overall, the CPE extension of DFTB3-D3 provides a more balanced description of different types of non-covalent interactions than Neglect of Diatomic Differential Overlap type of semi-empirical methods (e.g., PM6-D3H4) and PBE-D3 with modest basis sets.

  15. Dynamical properties of liquid water from ab initio molecular dynamics performed in the complete basis set limit

    NASA Astrophysics Data System (ADS)

    Lee, Hee-Seung; Tuckerman, Mark E.

    2007-04-01

    Dynamical properties of liquid water were studied using Car-Parrinello [Phys. Rev. Lett. 55, 2471 (1985)] ab initio molecular dynamics (AIMD) simulations within the Kohn-Sham (KS) density functional theory employing the Becke-Lee-Yang-Parr exchange-correlation functional for the electronic structure. The KS orbitals were expanded in a discrete variable representation basis set, wherein the complete basis set limit can be easily reached and which, therefore, provides complete convergence of ionic forces. In order to minimize possible nonergodic behavior of the simulated water system in a constant energy (NVE) ensemble, a long equilibration run (30 ps) preceded a 60 ps long production run. The temperature drift during the entire 60 ps trajectory was found to be minimal. The diffusion coefficient [0.055 Å²/ps] obtained from the present work for 32 D2O molecules is a factor of 4 smaller than the most up to date experimental value, but significantly larger than those of other recent AIMD studies. Adjusting the experimental result so as to match the finite-sized system used in the present study brings the comparison between theory and experiment to within a factor of 3. More importantly, the system is not observed to become "glassy" as has been reported in previous AIMD studies. The computed infrared spectrum is in good agreement with experimental data, especially in the low frequency regime where the translational and librational motions of water are manifested. The long simulation length also made it possible to perform detailed studies of hydrogen bond dynamics. The relaxation dynamics of hydrogen bonds observed in the present AIMD simulation is slower than those of popular force fields, such as the TIP4P potential, but comparable to that of the TIP5P potential.

  16. Dynamical properties of liquid water from ab initio molecular dynamics performed in the complete basis set limit.

    PubMed

    Lee, Hee-Seung; Tuckerman, Mark E

    2007-04-28

    Dynamical properties of liquid water were studied using Car-Parrinello [Phys. Rev. Lett. 55, 2471 (1985)] ab initio molecular dynamics (AIMD) simulations within the Kohn-Sham (KS) density functional theory employing the Becke-Lee-Yang-Parr exchange-correlation functional for the electronic structure. The KS orbitals were expanded in a discrete variable representation basis set, wherein the complete basis set limit can be easily reached and which, therefore, provides complete convergence of ionic forces. In order to minimize possible nonergodic behavior of the simulated water system in a constant energy (NVE) ensemble, a long equilibration run (30 ps) preceded a 60 ps long production run. The temperature drift during the entire 60 ps trajectory was found to be minimal. The diffusion coefficient [0.055 Å²/ps] obtained from the present work for 32 D2O molecules is a factor of 4 smaller than the most up to date experimental value, but significantly larger than those of other recent AIMD studies. Adjusting the experimental result so as to match the finite-sized system used in the present study brings the comparison between theory and experiment to within a factor of 3. More importantly, the system is not observed to become "glassy" as has been reported in previous AIMD studies. The computed infrared spectrum is in good agreement with experimental data, especially in the low frequency regime where the translational and librational motions of water are manifested. The long simulation length also made it possible to perform detailed studies of hydrogen bond dynamics. The relaxation dynamics of hydrogen bonds observed in the present AIMD simulation is slower than those of popular force fields, such as the TIP4P potential, but comparable to that of the TIP5P potential.
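
    The way a diffusion coefficient is typically extracted from such a trajectory, via the Einstein relation MSD(t) ≈ 6Dt, can be sketched with a synthetic random walk (the trajectory below is generated, not the paper's AIMD data; only the 0.055 Å²/ps target value is taken from the abstract):

```python
import numpy as np

# Synthetic random walk, not AIMD data: per-axis step variance 2*D*dt encodes
# the target diffusion coefficient, which is then recovered from the slope of
# the mean-squared displacement via MSD(t) ~ 6 D t.
rng = np.random.default_rng(2)
D_true = 0.055                        # Å^2/ps, the value quoted in the abstract
dt = 0.01                             # ps per recorded step (hypothetical)
n_steps, n_atoms = 5000, 128

steps = rng.normal(scale=np.sqrt(2.0 * D_true * dt), size=(n_steps, n_atoms, 3))
disp = np.cumsum(steps, axis=0)       # displacement from the time origin

t = np.arange(1, n_steps + 1) * dt
msd = (disp ** 2).sum(axis=2).mean(axis=1)   # mean-squared displacement

slope = np.polyfit(t, msd, 1)[0]      # linear fit of MSD(t)
D_est = slope / 6.0
```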

  17. Responsible gambling: general principles and minimal requirements.

    PubMed

    Blaszczynski, Alex; Collins, Peter; Fong, Davis; Ladouceur, Robert; Nower, Lia; Shaffer, Howard J; Tavares, Hermano; Venisse, Jean-Luc

    2011-12-01

    Many international jurisdictions have introduced responsible gambling programs. These programs intend to minimize negative consequences of excessive gambling, but vary considerably in their aims, focus, and content. Many responsible gambling programs lack a conceptual framework and, in the absence of empirical data, their components are based only on general considerations and impressions. This paper outlines the consensus viewpoint of an international group of researchers suggesting fundamental responsible gambling principles, roles of key stakeholders, and minimal requirements that stakeholders can use to frame and inform responsible gambling programs across jurisdictions. Such a framework does not purport to offer value statements regarding the legal status of gambling or its expansion. Rather, it proposes gambling-related initiatives aimed at government, industry, and individuals to promote responsible gambling and consumer protection. This paper argues that there is a set of basic principles and minimal requirements that should form the basis for every responsible gambling program.

  18. Evaluations of Some Scheduling Algorithms for Hard Real-Time Systems

    DTIC Science & Technology

    1990-06-01

    construct because the mechanism is a dispatching procedure. Since all nonpreemptive schedules are contained in the set of all preemptive schedules, the optimal value of Tmax in the preemptive case is at least a lower bound on the optimal Tmax for the nonpreemptive schedules. This principle is the basis...

  19. Sensor Drift Compensation Algorithm based on PDF Distance Minimization

    NASA Astrophysics Data System (ADS)

    Kim, Namyong; Byun, Hyung-Gi; Persaud, Krishna C.; Huh, Jeung-Soo

    2009-05-01

    In this paper, a new unsupervised classification algorithm is introduced for the compensation of sensor drift effects in an odor sensing system using a conducting polymer sensor array. The proposed method continues updating the adaptive Radial Basis Function Network (RBFN) weights in the testing phase by minimizing the Euclidean distance between two probability density functions (PDFs): one of a set of training-phase output data and another of a set of testing-phase output data. The outputs in the testing phase using the fixed weights of the RBFN are significantly dispersed and shifted from each target value, due mostly to the sensor drift effect. In the experimental results, the output data produced by the proposed method are observed to be concentrated significantly closer to their own target values. This indicates that the proposed method can be effectively applied to an improved odor sensing system equipped with the capability of sensor drift compensation.

  20. Novel approach for tomographic reconstruction of gas concentration distributions in air: Use of smooth basis functions and simulated annealing

    NASA Astrophysics Data System (ADS)

    Drescher, A. C.; Gadgil, A. J.; Price, P. N.; Nazaroff, W. W.

    Optical remote sensing and iterative computed tomography (CT) can be applied to measure the spatial distribution of gaseous pollutant concentrations. We conducted chamber experiments to test this combination of techniques using an open path Fourier transform infrared spectrometer (OP-FTIR) and a standard algebraic reconstruction technique (ART). Although ART converged to solutions that showed excellent agreement with the measured ray-integral concentrations, the solutions were inconsistent with simultaneously gathered point-sample concentration measurements. A new CT method was developed that combines (1) the superposition of bivariate Gaussians to represent the concentration distribution and (2) a simulated annealing minimization routine to find the parameters of the Gaussian basis functions that result in the best fit to the ray-integral concentration data. This method, named smooth basis function minimization (SBFM), generated reconstructions that agreed well, both qualitatively and quantitatively, with the concentration profiles generated from point sampling. We present an analysis of two sets of experimental data that compares the performance of ART and SBFM. We conclude that SBFM is a superior CT reconstruction method for practical indoor and outdoor air monitoring applications.
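
    A stripped-down version of the SBFM idea can be sketched as follows, with a single bivariate Gaussian, a fixed width, row/column sums standing in for beam paths, and a crude annealer (all simplifications relative to the paper's method and geometry):

```python
import numpy as np

# Toy SBFM sketch: fit the parameters of one bivariate Gaussian so that its
# simulated ray integrals match the "measured" ones, using a crude simulated
# annealing loop. The OP-FTIR beam paths are replaced by row/column sums.
rng = np.random.default_rng(3)
grid = np.linspace(0.0, 1.0, 50)
X, Y = np.meshgrid(grid, grid)
SIGMA = 0.1                                  # fixed width (a simplification)

def field(params):
    x0, y0, amp = params
    return amp * np.exp(-((X - x0) ** 2 + (Y - y0) ** 2) / (2 * SIGMA ** 2))

def ray_integrals(f):
    return np.concatenate([f.sum(axis=0), f.sum(axis=1)])

true_params = np.array([0.4, 0.6, 2.0])      # hidden source location, strength
data = ray_integrals(field(true_params))     # "measured" ray integrals

def cost(p):
    return ((ray_integrals(field(p)) - data) ** 2).sum()

start = np.array([0.5, 0.5, 1.0])
current, cur_cost = start.copy(), cost(start)
best, best_cost, T = start.copy(), cur_cost, 1.0
for _ in range(4000):
    trial = current + rng.normal(scale=0.05, size=3)
    c = cost(trial)
    # accept downhill moves always, uphill moves with Boltzmann probability
    if c < cur_cost or rng.random() < np.exp((cur_cost - c) / T):
        current, cur_cost = trial, c
        if c < best_cost:
            best, best_cost = trial.copy(), c
    T *= 0.999                               # geometric cooling schedule
```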

  1. A promising tool to achieve chemical accuracy for density functional theory calculations on Y-NO homolysis bond dissociation energies.

    PubMed

    Li, Hong Zhi; Hu, Li Hong; Tao, Wei; Gao, Ting; Li, Hui; Lu, Ying Hua; Su, Zhong Min

    2012-01-01

    A DFT-SOFM-RBFNN method is proposed to improve the accuracy of DFT calculations on Y-NO (Y = C, N, O, S) homolysis bond dissociation energies (BDE) by combining density functional theory (DFT) and artificial intelligence/machine learning methods, which consist of self-organizing feature mapping neural networks (SOFMNN) and radial basis function neural networks (RBFNN). A descriptor refinement step including SOFMNN clustering analysis and correlation analysis is implemented. The SOFMNN clustering analysis is applied to classify descriptors, and the representative descriptors in the groups are selected as neural network inputs according to their closeness to the experimental values through correlation analysis. Redundant descriptors and intuitively biased choices of descriptors can be avoided by this newly introduced step. Using RBFNN calculation with the selected descriptors, chemical accuracy (≤1 kcal·mol(-1)) is achieved for all 92 organic Y-NO homolysis BDEs calculated by DFT-B3LYP, and the mean absolute deviations (MADs) of the B3LYP/6-31G(d) and B3LYP/STO-3G methods are reduced from 4.45 and 10.53 kcal·mol(-1) to 0.15 and 0.18 kcal·mol(-1), respectively. The improved results for the minimal basis set STO-3G reach the same accuracy as those of 6-31G(d), and thus B3LYP calculation with the minimal basis set is recommended for minimizing the computational cost and for expanding the applications to large molecular systems. Further extrapolation tests are performed with six molecules (two containing Si-NO bonds and two containing fluorine), and the accuracy of the tests was within 1 kcal·mol(-1). This study shows that DFT-SOFM-RBFNN is an efficient and highly accurate method for Y-NO homolysis BDE. The method may be used as a tool to design new NO carrier molecules.

  2. A Promising Tool to Achieve Chemical Accuracy for Density Functional Theory Calculations on Y-NO Homolysis Bond Dissociation Energies

    PubMed Central

    Li, Hong Zhi; Hu, Li Hong; Tao, Wei; Gao, Ting; Li, Hui; Lu, Ying Hua; Su, Zhong Min

    2012-01-01

    A DFT-SOFM-RBFNN method is proposed to improve the accuracy of DFT calculations on Y-NO (Y = C, N, O, S) homolysis bond dissociation energies (BDE) by combining density functional theory (DFT) and artificial intelligence/machine learning methods, which consist of self-organizing feature mapping neural networks (SOFMNN) and radial basis function neural networks (RBFNN). A descriptor refinement step including SOFMNN clustering analysis and correlation analysis is implemented. The SOFMNN clustering analysis is applied to classify descriptors, and the representative descriptors in the groups are selected as neural network inputs according to their closeness to the experimental values through correlation analysis. Redundant descriptors and intuitively biased choices of descriptors can be avoided by this newly introduced step. Using RBFNN calculation with the selected descriptors, chemical accuracy (≤1 kcal·mol−1) is achieved for all 92 organic Y-NO homolysis BDEs calculated by DFT-B3LYP, and the mean absolute deviations (MADs) of the B3LYP/6-31G(d) and B3LYP/STO-3G methods are reduced from 4.45 and 10.53 kcal·mol−1 to 0.15 and 0.18 kcal·mol−1, respectively. The improved results for the minimal basis set STO-3G reach the same accuracy as those of 6-31G(d), and thus B3LYP calculation with the minimal basis set is recommended for minimizing the computational cost and for expanding the applications to large molecular systems. Further extrapolation tests are performed with six molecules (two containing Si-NO bonds and two containing fluorine), and the accuracy of the tests was within 1 kcal·mol−1. This study shows that DFT-SOFM-RBFNN is an efficient and highly accurate method for Y-NO homolysis BDE. The method may be used as a tool to design new NO carrier molecules. PMID:22942689
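
    The RBFNN regression step can be sketched generically (a toy one-dimensional fit with invented data; the paper's pipeline additionally performs SOFM-based descriptor clustering and selection):

```python
import numpy as np

# Generic radial-basis-function network regression: Gaussian basis functions
# on fixed centers, with output weights fitted by linear least squares.
# The descriptor and target below are invented stand-ins.
rng = np.random.default_rng(4)
x = np.linspace(-3, 3, 40)[:, None]   # a 1-D stand-in "descriptor"
y = np.sin(x).ravel()                 # stand-in target (e.g., a BDE correction)

centers = np.linspace(-3, 3, 10)[:, None]
width = 0.8

def design(xs):
    d2 = ((xs[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2 * width ** 2))

Phi = design(x)                                # 40 x 10 RBF design matrix
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)    # linear output weights
rmse = np.sqrt(((Phi @ w - y) ** 2).mean())
```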

  3. Motion cues that make an impression: Predicting perceived personality by minimal motion information.

    PubMed

    Koppensteiner, Markus

    2013-11-01

    The current study presents a methodology to analyze first impressions on the basis of minimal motion information. In order to test the applicability of the approach, brief silent video clips of 40 speakers were presented to independent observers (i.e., observers who did not know the speakers), who rated them on measures of the Big Five personality traits. The body movements of the speakers were then captured by placing landmarks on the speakers' forehead, one shoulder, and the hands. Analysis revealed that observers ascribe extraversion to variations in the speakers' overall activity, emotional stability to the movements' relative velocity, and openness to variation in motion direction. Although ratings of openness and conscientiousness were related to biographical data of the speakers (i.e., measures of career progress), measures of body motion failed to provide similar results. In conclusion, analysis of motion behavior might be done on the basis of a small set of landmarks that seem to capture important parts of relevant nonverbal information.

  4. On the accuracy of density-functional theory exchange-correlation functionals for H bonds in small water clusters: Benchmarks approaching the complete basis set limit

    NASA Astrophysics Data System (ADS)

    Santra, Biswajit; Michaelides, Angelos; Scheffler, Matthias

    2007-11-01

    The ability of several density-functional theory (DFT) exchange-correlation functionals to describe hydrogen bonds in small water clusters (dimer to pentamer) in their global minimum energy structures is evaluated with reference to second order Møller-Plesset perturbation theory (MP2). Errors from basis set incompleteness have been minimized in both the MP2 reference data and the DFT calculations, thus enabling a consistent systematic evaluation of the true performance of the tested functionals. Among all the functionals considered, the hybrid X3LYP and PBE0 functionals offer the best performance and among the nonhybrid generalized gradient approximation functionals, mPWLYP and PBE1W perform best. The popular BLYP and B3LYP functionals consistently underbind and PBE and PW91 display rather variable performance with cluster size.

  5. On the accuracy of density-functional theory exchange-correlation functionals for H bonds in small water clusters: benchmarks approaching the complete basis set limit.

    PubMed

    Santra, Biswajit; Michaelides, Angelos; Scheffler, Matthias

    2007-11-14

    The ability of several density-functional theory (DFT) exchange-correlation functionals to describe hydrogen bonds in small water clusters (dimer to pentamer) in their global minimum energy structures is evaluated with reference to second order Moller-Plesset perturbation theory (MP2). Errors from basis set incompleteness have been minimized in both the MP2 reference data and the DFT calculations, thus enabling a consistent systematic evaluation of the true performance of the tested functionals. Among all the functionals considered, the hybrid X3LYP and PBE0 functionals offer the best performance and among the nonhybrid generalized gradient approximation functionals, mPWLYP and PBE1W perform best. The popular BLYP and B3LYP functionals consistently underbind and PBE and PW91 display rather variable performance with cluster size.

  6. Evaluation of the vibration-rotation-tunneling dynamics at the basis set superposition error corrected global minimum geometry of the ammonia dimer

    NASA Astrophysics Data System (ADS)

    Muguet, Francis F.; Robinson, G. Wilse; Bassez-Muguet, M. Palmyre

    1995-03-01

    With the help of a new scheme to correct for the basis set superposition error (BSSE), we find that an eclipsed nonlinear geometry becomes energetically favored over the eclipsed linear hydrogen-bonded geometry. From a normal mode analysis of the potential energy surface (PES) in the vicinity of the nonlinear geometry, we suggest that several dynamical interchange pathways must be taken into account. The minimal molecular symmetry group to be considered should be the double group of G36, but still larger multiple groups may be required. An interpretation of experimental vibration-rotation-tunneling (VRT) data in terms of the G144 group, which implies monomer inversions, may not be the only alternative. It appears that group theoretical considerations alone are insufficient for understanding the complex VRT dynamics of the ammonia dimer.
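
    For context, the standard counterpoise (Boys-Bernardi) estimate, the baseline that alternative BSSE-correction schemes such as the one above refine or replace, evaluates each monomer in the full dimer basis:

```latex
E_{\text{int}}^{\text{CP}} \;=\; E_{AB}^{\,ab} \;-\; E_{A}^{\,ab} \;-\; E_{B}^{\,ab}
```

    Here the superscript $ab$ indicates that all three energies are computed in the complete dimer basis, so each monomer calculation includes the partner's "ghost" basis functions.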

  7. Renormalization, conformal ward identities and the origin of a conformal anomaly pole

    NASA Astrophysics Data System (ADS)

    Corianò, Claudio; Maglio, Matteo Maria

    2018-06-01

    We investigate the emergence of a conformal anomaly pole in conformal field theories in the case of the TJJ correlator. We show how it comes to be generated in dimensional renormalization, using a basis of 13 form factors (the F-basis), where only one of them requires renormalization (F13), extending previous studies. We then combine recent results on the structure of the non-perturbative solutions of the conformal Ward identities (CWI's) for the TJJ in momentum space, expressed in terms of a minimal set of 4 form factors (A-basis), with the properties of the F-basis, and show how the singular behaviour of the corresponding form factors in both bases can be related. The result proves the centrality of such massless effective interactions induced by the anomaly, which have recently found realization in solid-state physics, in the theory of topological insulators and of Weyl semimetals. This pattern is confirmed in massless abelian and nonabelian theories (QED and QCD) investigated at one loop.

  8. Approximate Dynamic Programming Algorithms for United States Air Force Officer Sustainment

    DTIC Science & Technology

    2015-03-26

    level of correction needed. While paying bonuses has an easily calculable cost, RIFs have more subtle costs. Mone (1994) discovered that in a steady...a regression is performed utilizing instrumental variables to minimize Bellman error. This algorithm uses a set of basis functions to approximate the...transitioned to an all-volunteer force. Charnes et al. (1972) utilize a goal programming model for General Schedule civilian manpower management in the

  9. A model-based reasoning approach to sensor placement for monitorability

    NASA Technical Reports Server (NTRS)

    Chien, Steve; Doyle, Richard; Homemdemello, Luiz

    1992-01-01

    An approach is presented to evaluating sensor placements to maximize monitorability of the target system while minimizing the number of sensors. The approach uses a model of the monitored system to score potential sensor placements on the basis of four monitorability criteria. The scores can then be analyzed to produce a recommended sensor set. An example from our NASA application domain is used to illustrate our model-based approach to sensor placement.
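    The selection step described above resembles a set-cover problem. As an illustrative sketch (not the paper's model-based scoring, which uses four monitorability criteria; the sensors and fault modes below are invented), a greedy heuristic that repeatedly picks the sensor adding the most uncovered fault modes might look like:

```python
# Illustrative greedy sketch of sensor-set selection: maximize fault
# coverage with as few sensors as possible. The candidate sensors, fault
# modes, and single coverage score are invented stand-ins for the paper's
# four monitorability criteria.
CANDIDATES = {
    "S1": {"f1", "f2"},
    "S2": {"f2", "f3", "f4"},
    "S3": {"f4", "f5"},
    "S4": {"f1", "f5"},
}

def choose_sensors(faults):
    """Greedily pick sensors until every fault in `faults` is detectable."""
    chosen, covered = [], set()
    while not faults <= covered:
        # sensor whose detectable faults add the most uncovered ones
        best = max(CANDIDATES, key=lambda s: len((CANDIDATES[s] & faults) - covered))
        gain = (CANDIDATES[best] & faults) - covered
        if not gain:
            break  # remaining faults are undetectable by any candidate
        chosen.append(best)
        covered |= gain
    return chosen

print(choose_sensors({"f1", "f2", "f3", "f4", "f5"}))
```

    The greedy choice does not guarantee a global minimum, which is why the paper scores and analyzes placements rather than relying on a single heuristic.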

  10. Pharmacodynamics of nicotine: implications for rational treatment of nicotine addiction.

    PubMed

    Benowitz, N L

    1991-05-01

    Rational treatment of the pharmacologic aspects of tobacco addiction includes nicotine substitution therapy. Understanding the pharmacodynamics of nicotine and its role in the addiction process provides a basis for rational therapeutic intervention. Pharmacodynamic considerations are discussed in relation to the elements of smoking cessation therapy: setting objectives, selecting appropriate medication and dosing form, selecting the optimal doses and dosage regimens, assessing therapeutic outcome, and adjusting therapy to optimize benefits and minimize risks.

  11. Flap-lag-torsional dynamics of extensional and inextensional rotor blades in hover and in forward flight

    NASA Technical Reports Server (NTRS)

    Dasilva, C.

    1982-01-01

    The reduction of the O(cu epsilon) integro-differential equations to ordinary differential equations using a set of orthogonal functions is described. Attention was focused on the hover flight condition. The set of Galerkin integrals that appear in the reduced equations was evaluated by making use of nonrotating beam modes. Although a large amount of computer time was needed to accomplish this task, the Galerkin integrals so evaluated were stored on tape on a permanent basis. Several of the coefficients were also obtained in closed form in order to check the accuracy of the numerical computations. The equilibrium solution to the set of 3n equations obtained was determined as the solution to a minimization problem.

  12. Complete N-point superstring disk amplitude II. Amplitude and hypergeometric function structure

    NASA Astrophysics Data System (ADS)

    Mafra, Carlos R.; Schlotterer, Oliver; Stieberger, Stephan

    2013-08-01

    Using the pure spinor formalism in part I (Mafra et al., preprint [1]) we compute the complete tree-level amplitude of N massless open strings and find a strikingly simple and compact form in terms of minimal building blocks: the full N-point amplitude is expressed by a sum over (N-3)! Yang-Mills partial subamplitudes, each multiplying a multiple Gaussian hypergeometric function. While the former capture the space-time kinematics of the amplitude, the latter encode the string effects. This result disguises a lot of structure linking aspects of gauge amplitudes such as color and kinematics with properties of generalized Euler integrals. In this part II the structure of the multiple hypergeometric functions is analyzed in detail: their relations to monodromy equations, their minimal basis structure, and methods to determine their poles and transcendentality properties are proposed. Finally, a Gröbner basis analysis provides independent sets of rational functions in the Euler integrals. In contrast to [1], here we use momenta redefined by a factor of i. As a consequence, the signs of the kinematic invariants are flipped.

  13. A Grobner Basis Solution for Lightning Ground Flash Fraction Retrieval

    NASA Technical Reports Server (NTRS)

    Solakiewicz, Richard; Attele, Rohan; Koshak, William

    2011-01-01

    A Bayesian inversion method was previously introduced for retrieving the fraction of ground flashes in a set of flashes observed from a (low earth orbiting or geostationary) satellite lightning imager. The method employed a constrained mixed exponential distribution model to describe the lightning optical measurements. To obtain the optimum model parameters, a scalar function was minimized by a numerical method. In order to improve this optimization, we introduce a Gröbner basis solution to obtain analytic representations of the model parameters that serve as a refined initialization scheme to the numerical optimization. Using the Gröbner basis, we show that there are exactly 2 solutions involving the first 3 moments of the (exponentially distributed) data. When the mean of the ground flash optical characteristic (e.g., the Maximum Group Area, MGA) is larger than that for cloud flashes, then a unique solution can be obtained.
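    The two-solution structure can be illustrated symbolically (a SymPy sketch on an invented parameter set, not the paper's retrieval code). For a two-component exponential mixture with weights a, 1-a and means u, v, the raw moments are m_k = k!*p_k with weighted power sums p_k = a*u**k + (1-a)*v**k; below p1, p2, p3 are generated from (a, u, v) = (2/5, 1, 3):

```python
from sympy import symbols, groebner, solve, Rational

a, u, v = symbols('a u v')

# Weighted power sums p_k = a*u**k + (1-a)*v**k of a two-component
# exponential mixture (values generated from a=2/5, u=1, v=3; the raw
# moments of the mixture are m_k = k! * p_k). Illustrative numbers only.
p1, p2, p3 = Rational(11, 5), Rational(29, 5), Rational(83, 5)

eqs = [a*u + (1 - a)*v - p1,
       a*u**2 + (1 - a)*v**2 - p2,
       a*u**3 + (1 - a)*v**3 - p3]

# A lex-order Groebner basis triangularizes the system; its last element
# is a polynomial in v alone, whose roots are the candidate component
# means. The two roots correspond to the two label-swapped solutions.
G = groebner(eqs, a, u, v, order='lex')
print(solve(G.exprs[-1], v))
```

    The two roots {1, 3} recover the component means; picking either one and back-substituting fixes the other mean and the weight, which matches the "exactly 2 solutions from the first 3 moments" statement in the abstract.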

  14. The metallic thread in a patchwork thesis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hull, Emily A.

    This thesis contains research that is being prepared for publication. Chapter 2 presents research on water- and THF-solvated macrocyclic Rh and Co compounds and the effects of different axial ligands (NO2, NO, Cl, CH3) on their optical activity. Chapter 3 involves the study of gas-phase Nb mono- and dications with CO and CO2. Chapter 4 is a study of reactions of CO and CO2 with Ta mono- and dications. Chapter 5 is a study of virtual orbitals, their usefulness, the use of basis sets in modeling them, and the inclusion of transition metals into the QUasi-Atomic Minimal Basis (QUAMBO) method. Chapter 6 presents the conclusions drawn from the work presented in this dissertation.

  15. The Pariser-Parr-Pople model for trans-polyenes. I. Ab initio and semiempirical study of the bond alternation in trans-butadiene

    NASA Astrophysics Data System (ADS)

    Förner, Wolfgang

    1992-03-01

    Ab initio investigations of the bond alternation in butadiene are presented. The atomic basis sets applied range from minimal to split-valence-plus-polarization quality. With the latter, the Hartree-Fock limit for the bond alternation is reached. Correlation is considered at the second-order Møller-Plesset many-body perturbation theory (MP2), linear coupled cluster doubles (L-CCD), and coupled cluster doubles (CCD) levels. For the smaller basis sets it is shown that π-π correlations are essential for the bond alternation, while the effects of σ-σ and σ-π correlations are, though large, nearly independent of bond alternation. At the MP2 level the variation of σ-π correlation with bond alternation is surprisingly large. This is discussed as an artefact of MP2. Comparative Su-Schrieffer-Heeger (SSH) and Pariser-Parr-Pople (PPP) calculations show that these models in their usual parametrizations cannot reproduce the ab initio results.

  16. An efficient graph theory based method to identify every minimal reaction set in a metabolic network

    PubMed Central

    2014-01-01

    Background Development of cells with minimal metabolic functionality is gaining importance due to their efficiency in producing chemicals and fuels. Existing computational methods to identify minimal reaction sets in metabolic networks are computationally expensive. Further, they identify only one of the several possible minimal reaction sets. Results In this paper, we propose an efficient graph theory based recursive optimization approach to identify all minimal reaction sets. Graph theoretical insights offer systematic methods to not only reduce the number of variables in math programming and increase its computational efficiency, but also provide efficient ways to find multiple optimal solutions. The efficacy of the proposed approach is demonstrated using case studies from Escherichia coli and Saccharomyces cerevisiae. In case study 1, the proposed method identified three minimal reaction sets, each containing 38 reactions, in the Escherichia coli central metabolic network with 77 reactions. Analysis of these three minimal reaction sets revealed that one of them is more suitable for developing a minimal-metabolism cell than the other two, due to a practically achievable internal flux distribution. In case study 2, the proposed method identified 256 minimal reaction sets from the Saccharomyces cerevisiae genome-scale metabolic network with 620 reactions. The proposed method required only 4.5 hours to identify all 256 minimal reaction sets and showed a significant reduction (approximately 80%) in solution time compared to existing methods for finding minimal reaction sets. Conclusions Identification of all minimal reaction sets in metabolic networks is essential since different minimal reaction sets have different properties that affect bioprocess development. The proposed method correctly identified all minimal reaction sets in both case studies. The proposed method is computationally efficient compared to other methods for finding minimal reaction sets and is useful to employ with genome-scale metabolic networks. PMID:24594118
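    On a toy network the notion of enumerating *all* minimal reaction sets can be sketched by brute force (the reactions and metabolites below are invented; real implementations use math programming and graph-theoretic reductions as the abstract describes):

```python
from itertools import combinations

# Toy metabolic network (invented for illustration): each reaction turns a
# set of substrate metabolites into a set of products.
REACTIONS = {
    "R1": ({"glc"}, {"g6p"}),
    "R2": ({"g6p"}, {"pyr"}),
    "R3": ({"glc"}, {"pyr"}),       # alternative route, bypassing g6p
    "R4": ({"pyr"}, {"biomass"}),
}

def producible(active, seed):
    """Metabolites reachable from `seed` using only the `active` reactions."""
    mets, changed = set(seed), True
    while changed:
        changed = False
        for name in active:
            subs, prods = REACTIONS[name]
            if subs <= mets and not prods <= mets:
                mets |= prods
                changed = True
    return mets

def minimal_reaction_sets(seed, target):
    """Enumerate every minimal reaction subset producing `target`."""
    found = []
    for k in range(1, len(REACTIONS) + 1):
        for combo in combinations(REACTIONS, k):
            if target in producible(combo, seed):
                # minimal iff no previously found set is contained in it
                if not any(set(f) <= set(combo) for f in found):
                    found.append(combo)
    return found

print(minimal_reaction_sets({"glc"}, "biomass"))
```

    The subset-by-size enumeration makes the exponential cost that the paper avoids explicit: two distinct minimal sets exist here because R3 bypasses the R1-R2 route.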

  17. Density Functional O(N) Calculations

    NASA Astrophysics Data System (ADS)

    Ordejón, Pablo

    1998-03-01

    We have developed a scheme for performing Density Functional Theory calculations with O(N) scaling [P. Ordejón, E. Artacho and J. M. Soler, Phys. Rev. B 53, 10441 (1996)]. The method uses arbitrarily flexible and complete Atomic Orbital (AO) basis sets. This gives a wide range of choice, from extremely fast calculations with minimal basis sets to highly accurate calculations with complete sets. The size-efficiency of AO bases, together with the O(N) scaling of the algorithm, allows the application of the method to systems with many hundreds of atoms on single-processor workstations. I will present the SIESTA code [D. Sanchez-Portal, P. Ordejón, E. Artacho and J. M. Soler, Int. J. Quantum Chem. 65, 453 (1997)], in which the method is implemented, with several LDA, LSD and GGA functionals available, and using norm-conserving, non-local pseudopotentials (in the Kleinman-Bylander form) to eliminate the core electrons. The calculation of static properties such as energies, forces, pressure, stress and magnetic moments, as well as molecular dynamics (MD) simulation capabilities (including variable cell shape, constant temperature and constant pressure MD), are fully implemented. I will also show examples of the accuracy of the method, and applications to large-scale materials and biomolecular systems.

  18. Towards quantum chemistry on a quantum computer.

    PubMed

    Lanyon, B P; Whitfield, J D; Gillett, G G; Goggin, M E; Almeida, M P; Kassal, I; Biamonte, J D; Mohseni, M; Powell, B J; Barbieri, M; Aspuru-Guzik, A; White, A G

    2010-02-01

    Exact first-principles calculations of molecular properties are currently intractable because their computational cost grows exponentially with both the number of atoms and basis set size. A solution is to move to a radically different model of computing by building a quantum computer, which is a device that uses quantum systems themselves to store and process data. Here we report the application of the latest photonic quantum computer technology to calculate properties of the smallest molecular system: the hydrogen molecule in a minimal basis. We calculate the complete energy spectrum to 20 bits of precision and discuss how the technique can be expanded to solve large-scale chemical problems that lie beyond the reach of modern supercomputers. These results represent an early practical step toward a powerful tool with a broad range of quantum-chemical applications.

  19. Elementary Mode Analysis: A Useful Metabolic Pathway Analysis Tool for Characterizing Cellular Metabolism

    PubMed Central

    Trinh, Cong T.; Wlaschin, Aaron; Srienc, Friedrich

    2010-01-01

    Elementary Mode Analysis is a useful Metabolic Pathway Analysis tool to identify the structure of a metabolic network that links the cellular phenotype to the corresponding genotype. The analysis can decompose the intricate metabolic network comprised of highly interconnected reactions into uniquely organized pathways. These pathways, each consisting of a minimal set of enzymes that can support steady-state operation of cellular metabolism, represent independent cellular physiological states. Such pathway definition provides a rigorous basis to systematically characterize cellular phenotypes, metabolic network regulation, robustness, and fragility, facilitating the understanding of cell physiology and the implementation of metabolic engineering strategies. This mini-review aims to overview the development and application of elementary mode analysis as a metabolic pathway analysis tool in studying cell physiology and as a basis of metabolic engineering. PMID:19015845

  20. Minimization of Basis Risk in Parametric Earthquake Cat Bonds

    NASA Astrophysics Data System (ADS)

    Franco, G.

    2009-12-01

    A catastrophe -cat- bond is an instrument used by insurance and reinsurance companies, by governments or by groups of nations to cede catastrophic risk to the financial markets, which are capable of supplying cover for highly destructive events, surpassing the typical capacity of traditional reinsurance contracts. Parametric cat bonds, a specific type of cat bonds, use trigger mechanisms or indices that depend on physical event parameters published by respected third parties in order to determine whether a part or the entire bond principal is to be paid for a certain event. First generation cat bonds, or cat-in-a-box bonds, display a trigger mechanism that consists of a set of geographic zones in which certain conditions need to be met by an earthquake’s magnitude and depth in order to trigger payment of the bond principal. Second generation cat bonds use an index formulation that typically consists of a sum of products of a set of weights by a polynomial function of the ground motion variables reported by a geographically distributed seismic network. These instruments are especially appealing to developing countries with incipient insurance industries wishing to cede catastrophic losses to the financial markets because the payment trigger mechanism is transparent and does not involve the parties ceding or accepting the risk, significantly reducing moral hazard. In order to be successful in the market, however, parametric cat bonds have typically been required to specify relatively simple trigger conditions. The consequence of such simplifications is the increase of basis risk. This risk represents the possibility that the trigger mechanism fails to accurately capture the actual losses of a catastrophic event, namely that it does not trigger for a highly destructive event or vice versa, that a payment of the bond principal is caused by an event that produced insignificant losses. 
    The first case disfavors the sponsor, who was seeking cover for its losses, while the second disfavors the investor, who loses part of the investment without a reasonable cause. A streamlined and fairly automated methodology has been developed to design parametric triggers that minimize the basis risk while still maintaining their level of relative simplicity. Basis risk is minimized in both first- and second-generation parametric cat bonds through an optimization procedure that aims to find the most appropriate magnitude thresholds, geographic zones, and weight index values. Sensitivity analyses with respect to different design assumptions show that first generation cat bonds are typically affected by a large negative basis risk, namely the risk that the bond will not trigger for events within the risk level transferred, unless a sufficiently small geographic resolution is selected to define the trigger zones. Second generation cat bonds in contrast display a bias towards negative or positive basis risk depending on the degree of the polynomial used as well as on other design parameters. Two examples are presented: the construction of a first generation parametric trigger mechanism for Costa Rica and the design of a second generation parametric index for Japan.
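    The optimization idea behind trigger design can be sketched on a one-parameter toy problem (the event catalogue and the single-threshold trigger below are invented; the actual methodology also optimizes geographic zones and index weights):

```python
# Toy basis-risk minimization for a first-generation ("cat-in-a-box")
# trigger: choose a magnitude threshold so that "pay iff magnitude >=
# threshold" best matches actual losses over a synthetic event catalogue.
# Events and the single-threshold design are invented for illustration.
EVENTS = [  # (magnitude, did actual losses exceed the attachment level?)
    (6.1, False), (6.4, False), (6.6, True), (6.8, False),
    (7.0, True), (7.2, True), (7.5, True),
]

def basis_risk(threshold):
    """Mismatches: payouts without losses plus losses without payout."""
    return sum((m >= threshold) != loss for m, loss in EVENTS)

# grid-search the threshold that minimizes basis risk
best = min((basis_risk(t / 10), t / 10) for t in range(60, 80))
print(best)  # (mismatch count, best threshold)
```

    Note that no threshold reaches zero mismatches here: the magnitude-6.8 event without losses sits between two loss-producing events, which is exactly the residual basis risk a simple parametric trigger cannot remove.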

  1. Space Environments Testbed

    NASA Technical Reports Server (NTRS)

    Leucht, David K.; Koslosky, Marie J.; Kobe, David L.; Wu, Jya-Chang C.; Vavra, David A.

    2011-01-01

    The Space Environments Testbed (SET) is a flight controller data system for the Common Carrier Assembly. The SET-1 flight software provides the command, telemetry, and experiment control to ground operators for the SET-1 mission. Modes of operation (see diagram) include: a) Boot Mode, which is initiated at application of power to the processor card and runs memory diagnostics. It may be entered via ground command or autonomously based upon fault detection. b) Maintenance Mode, which allows for limited carrier health monitoring, including power telemetry monitoring on a non-interference basis. c) Safe Mode, a predefined, minimum-power safehold configuration with power to experiments removed and carrier functionality minimized. It is used to troubleshoot problems that occur during flight. d) Operations Mode, used for normal experiment carrier operations. It may be entered only via ground command from Safe Mode.
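    The mode logic above is a small state machine; a hypothetical sketch follows. The event names and most transition rules are invented; the text only fixes that fault detection can autonomously re-enter Boot Mode and that Operations Mode is reachable solely by ground command from Safe Mode.

```python
# Hypothetical SET-1 mode state machine; transitions are illustrative.
TRANSITIONS = {
    ("BOOT", "diagnostics_done"): "MAINTENANCE",
    ("MAINTENANCE", "safe_command"): "SAFE",
    ("SAFE", "ops_command"): "OPERATIONS",     # only path into Operations
    ("OPERATIONS", "safe_command"): "SAFE",
}

def next_mode(mode, event):
    if event == "fault":          # fault detection always returns to Boot
        return "BOOT"
    return TRANSITIONS.get((mode, event), mode)  # unknown events: no change

mode = "BOOT"
for event in ["diagnostics_done", "safe_command", "ops_command"]:
    mode = next_mode(mode, event)
print(mode)
```

    Encoding the transitions as a lookup table keeps illegal paths (e.g. Boot directly to Operations) unrepresentable by construction.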

  2. Structural insights into binding of small molecule inhibitors to Enhancer of Zeste Homolog 2

    NASA Astrophysics Data System (ADS)

    Kalinić, Marko; Zloh, Mire; Erić, Slavica

    2014-11-01

    Enhancer of Zeste Homolog 2 (EZH2) is a SET domain protein lysine methyltransferase (PKMT) which has recently emerged as a chemically tractable and therapeutically promising epigenetic target, evidenced by the discovery and characterization of potent and highly selective EZH2 inhibitors. However, no experimental structures of the inhibitors co-crystallized with EZH2 have been resolved, and the structural basis for their activity and selectivity remains unknown. Considering the need to minimize cross-reactivity between prospective PKMT inhibitors, much can be learned from understanding the molecular basis for selective inhibition of EZH2. Thus, to elucidate the binding of small-molecule inhibitors to EZH2, we have developed a model of its fully formed cofactor binding site and used it to carry out molecular dynamics simulations of protein-ligand complexes, followed by molecular mechanics/generalized Born surface area calculations. The obtained results are in good agreement with biochemical inhibition data and reflect the structure-activity relationships of known ligands. Our findings suggest that the variable and flexible post-SET domain plays an important role in inhibitor binding, allowing possibly distinct binding modes of inhibitors with only small variations in their structure. Insights from this study present a good basis for the design of novel compounds and the optimization of existing compounds targeting the cofactor binding site of EZH2.

  3. Molecular Properties by Quantum Monte Carlo: An Investigation on the Role of the Wave Function Ansatz and the Basis Set in the Water Molecule

    PubMed Central

    Zen, Andrea; Luo, Ye; Sorella, Sandro; Guidoni, Leonardo

    2014-01-01

    Quantum Monte Carlo methods are accurate and promising many-body techniques for electronic structure calculations which, in recent years, have encountered growing interest thanks to their favorable scaling with the system size and their efficient parallelization, particularly suited for modern high performance computing facilities. The ansatz of the wave function and its variational flexibility are crucial points for both the accurate description of molecular properties and the capability of the method to tackle large systems. In this paper, we extensively analyze, using different variational ansatzes, several properties of the water molecule, namely, the total energy, the dipole and quadrupole moments, the ionization and atomization energies, the equilibrium configuration, and the harmonic and fundamental frequencies of vibration. The investigation mainly focuses on variational Monte Carlo calculations, although several lattice-regularized diffusion Monte Carlo calculations are also reported. Through a systematic study, we provide a useful guide to the choice of the wave function, the pseudopotential, and the basis set for QMC calculations. We also introduce a new method for the computation of forces with finite variance on open systems and a new strategy for the definition of the atomic orbitals involved in the Jastrow-Antisymmetrised Geminal Power wave function, in order to drastically reduce the number of variational parameters. This scheme significantly improves the efficiency of QMC energy minimization in the case of large basis sets. PMID:24526929

  4. NMR, MRI, and spectroscopic MRI in inhomogeneous fields

    DOEpatents

    Demas, Vasiliki; Pines, Alexander; Martin, Rachel W; Franck, John; Reimer, Jeffrey A

    2013-12-24

    A method for locally creating effectively homogeneous or "clean" magnetic field gradients (of high uniformity) for imaging (with NMR, MRI, or spectroscopic MRI) both in in-situ and ex-situ systems with high degrees of inhomogeneous field strength. The method of imaging comprises: a) providing a functional approximation of an inhomogeneous static magnetic field strength B.sub.0({right arrow over (r)}) at a spatial position {right arrow over (r)}; b) providing a temporal functional approximation of {right arrow over (G)}.sub.shim(t) with i basis functions and j variables for each basis function, resulting in v.sub.ij variables; c) providing a measured value .OMEGA., which is a temporally accumulated dephasing due to the inhomogeneities of B.sub.0({right arrow over (r)}); and d) minimizing a difference in the local dephasing angle .phi.({right arrow over (r)},t)=.gamma..intg..sub.0.sup.t{square root over (|{right arrow over (B)}.sub.1({right arrow over (r)},t')|.sup.2+({right arrow over (r)}{right arrow over (G)}.sub.shimG.sub.shim(t')+.parallel.{right arrow over (B)}.sub.0({right arrow over (r)}).parallel..DELTA..omega.({right arrow over (r)},t'/.gamma/).sup.2)}dt'-.OMEGA. by varying the v.sub.ij variables to form a set of minimized v.sub.ij variables. The method requires calibration of the static fields prior to minimization, but may thereafter be implemented without such calibration, may be used in open or closed systems, and in potentially portable systems.

  5. Polychromatic sparse image reconstruction and mass attenuation spectrum estimation via B-spline basis function expansion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gu, Renliang, E-mail: Venliang@iastate.edu, E-mail: ald@iastate.edu; Dogandžić, Aleksandar, E-mail: Venliang@iastate.edu, E-mail: ald@iastate.edu

    2015-03-31

    We develop a sparse image reconstruction method for polychromatic computed tomography (CT) measurements under the blind scenario where the material of the inspected object and the incident energy spectrum are unknown. To obtain a parsimonious measurement model parameterization, we first rewrite the measurement equation using our mass-attenuation parameterization, which has the Laplace integral form. The unknown mass-attenuation spectrum is expanded into basis functions using a B-spline basis of order one. We develop a block coordinate-descent algorithm for constrained minimization of a penalized negative log-likelihood function, where constraints and penalty terms ensure nonnegativity of the spline coefficients and sparsity of the density map image in the wavelet domain. This algorithm alternates between a Nesterov's proximal-gradient step for estimating the density map image and an active-set step for estimating the incident spectrum parameters. Numerical simulations demonstrate the performance of the proposed scheme.

  6. On the ab initio evaluation of Hubbard parameters. II. The κ-(BEDT-TTF)2Cu[N(CN)2]Br crystal

    NASA Astrophysics Data System (ADS)

    Fortunelli, Alessandro; Painelli, Anna

    1997-05-01

    A previously proposed approach for the ab initio evaluation of Hubbard parameters is applied to BEDT-TTF dimers. The dimers are positioned according to four geometries taken as the first neighbors from the experimental data on the κ-(BEDT-TTF)2Cu[N(CN)2]Br crystal. RHF-SCF, CAS-SCF and frozen-orbital calculations using the 6-31G** basis set are performed with different values of the total charge, allowing us to derive all the relevant parameters. It is found that the electronic structure of the BEDT-TTF planes is adequately described by the standard Extended Hubbard Model, with the off-diagonal electron-electron interaction terms (X and W) of negligible size. The derived parameters are in good agreement with available experimental data. Comparison with previous theoretical estimates shows that the t values compare well with those obtained from Extended Hückel Theory (whereas the minimal basis set estimates are completely unreliable). On the other hand, the Uaeff values exhibit an appreciable dependence on the chemical environment.

  7. Exploring metabolic pathways in genome-scale networks via generating flux modes.

    PubMed

    Rezola, A; de Figueiredo, L F; Brock, M; Pey, J; Podhorski, A; Wittmann, C; Schuster, S; Bockmayr, A; Planes, F J

    2011-02-15

    The reconstruction of metabolic networks at the genome scale has allowed the analysis of metabolic pathways at an unprecedented level of complexity. Elementary flux modes (EFMs) are an appropriate concept for such analysis. However, their number grows in a combinatorial fashion as the size of the metabolic network increases, which renders the application of the EFM approach to large metabolic networks difficult. Novel methods are expected to deal with such complexity. In this article, we present a novel optimization-based method for determining a minimal generating set of EFMs, i.e. a convex basis. We show that a subset of elements of this convex basis can be effectively computed even in large metabolic networks. Our method was applied to examine the structure of pathways producing lysine in Escherichia coli. We obtained a more varied and informative set of pathways in comparison with existing methods. In addition, an alternative pathway to produce lysine was identified using a detour via propionyl-CoA, which shows the predictive power of our novel approach. The source code in C++ is available upon request.

  8. Estimating the intrinsic limit of the Feller-Peterson-Dixon composite approach when applied to adiabatic ionization potentials in atoms and small molecules

    NASA Astrophysics Data System (ADS)

    Feller, David

    2017-07-01

    Benchmark adiabatic ionization potentials were obtained with the Feller-Peterson-Dixon (FPD) theoretical method for a collection of 48 atoms and small molecules. In previous studies, the FPD method demonstrated an ability to predict atomization energies (heats of formation) and electron affinities well within a 95% confidence level of ±1 kcal/mol. Large 1-particle expansions involving correlation consistent basis sets (up to aug-cc-pV8Z in many cases and aug-cc-pV9Z for some atoms) were chosen for the valence CCSD(T) starting point calculations. Despite their cost, these large basis sets were chosen in order to help minimize the residual basis set truncation error and reduce dependence on approximate basis set limit extrapolation formulas. The complementary n-particle expansion included higher order CCSDT, CCSDTQ, or CCSDTQ5 (coupled cluster theory with iterative triple, quadruple, and quintuple excitations) corrections. For all of the chemical systems examined here, it was also possible to either perform explicit full configuration interaction (CI) calculations or to otherwise estimate the full CI limit. Additionally, corrections associated with core/valence correlation, scalar relativity, anharmonic zero point vibrational energies, non-adiabatic effects, and other minor factors were considered. The root mean square deviation with respect to experiment for the ionization potentials was 0.21 kcal/mol (0.009 eV). The corresponding level of agreement for molecular enthalpies of formation was 0.37 kcal/mol and for electron affinities 0.20 kcal/mol. Similar good agreement with experiment was found in the case of molecular structures and harmonic frequencies. Overall, the combination of energetic, structural, and vibrational data (655 comparisons) reflects the consistent ability of the FPD method to achieve close agreement with experiment for small molecules using the level of theory applied in this study.
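    The basis set truncation error discussed above is commonly controlled with two-point extrapolation formulas. A sketch of the widely used inverse-cubic form E(X) = E_CBS + A/X**3 follows; this particular formula and the energies fed to it are illustrative assumptions, not values taken from the FPD study:

```python
def cbs_extrapolate(e_x, e_y, x, y):
    """Two-point complete-basis-set (CBS) extrapolation assuming the
    common inverse-cubic model E(X) = E_CBS + A / X**3, where X is the
    basis set cardinal number. A standard choice for correlation
    energies; not necessarily the formula used in the FPD study."""
    return (x**3 * e_x - y**3 * e_y) / (x**3 - y**3)

# e.g. correlation energies in hartree (illustrative numbers) computed
# with cardinal numbers X=4 (quadruple-zeta) and X=5 (quintuple-zeta):
e_cbs = cbs_extrapolate(-0.30812, -0.31143, 4, 5)
print(e_cbs)
```

    The extrapolated value lies below both finite-basis energies, as expected for a correlation energy that converges monotonically from above; very large basis sets like those in the abstract shrink the remaining dependence on the chosen extrapolation model.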

  9. Using sparse regularization for multi-resolution tomography of the ionosphere

    NASA Astrophysics Data System (ADS)

    Panicciari, T.; Smith, N. D.; Mitchell, C. N.; Da Dalt, F.; Spencer, P. S. J.

    2015-10-01

    Computerized ionospheric tomography (CIT) is a technique that allows reconstructing the state of the ionosphere, in terms of electron content, from a set of slant total electron content (STEC) measurements; it is usually formulated as an inverse problem. In this experiment, the measurements are considered as coming from the phase of the GPS signal and are therefore affected by bias. For this reason the STEC cannot be considered in absolute terms but rather in relative terms. Measurements are collected from receivers that are not evenly distributed in space; together with limitations such as the angle and density of the observations, this causes instability in the inversion. Furthermore, the ionosphere is a dynamic medium whose processes are continuously changing in time and space. This can affect CIT by limiting the accuracy in resolving structures and the processes that describe the ionosphere. Some inversion techniques are based on ℓ2 minimization algorithms (i.e. Tikhonov regularization), and a standard approach is implemented here using spherical harmonics as a reference against which to compare the new method. A new approach is proposed for CIT that permits sparsity in the reconstruction coefficients by using wavelet basis functions. It is based on the ℓ1 minimization technique and on wavelet basis functions, chosen for their compact representation. The ℓ1 minimization is selected because it can optimize the result under an uneven distribution of observations by exploiting the localization property of wavelets. Also illustrated is how the inter-frequency biases on the STEC are calibrated within the inversion, and this is used as a way of evaluating the accuracy of the method. The technique is demonstrated using a simulation, showing the advantage of ℓ1 minimization over ℓ2 minimization in estimating the coefficients. This is particularly true for an uneven observation geometry and especially for multi-resolution CIT.
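    The sparsity-promoting ℓ1 machinery can be sketched with a generic proximal-gradient solver (ISTA). This is a stand-in for the paper's CIT inversion: the wavelet transform is omitted and the system matrix is random, so only the ℓ1 recovery mechanism is illustrated.

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t*||.||_1 (elementwise shrinkage)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista(A, y, lam, iters):
    """Minimize 0.5*||A@x - y||**2 + lam*||x||_1 by proximal gradient."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1/L with L = ||A||_2**2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = soft_threshold(x - step * (A.T @ (A @ x - y)), step * lam)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))              # underdetermined system
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [2.0, -1.5, 3.0]          # sparse ground truth
x_hat = ista(A, A @ x_true, lam=0.05, iters=2000)
print(np.flatnonzero(np.abs(x_hat) > 0.5))      # support of the estimate
```

    Despite only 40 measurements for 100 unknowns, the ℓ1 penalty recovers the sparse support, which is the behavior the paper exploits with localized wavelet coefficients and uneven receiver geometry.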

  10. Indicators for the automated analysis of drug prescribing quality.

    PubMed

    Coste, J; Séné, B; Milstein, C; Bouée, S; Venot, A

    1998-01-01

Irrational and inconsistent drug prescription has a considerable impact on morbidity, mortality, health service utilization, and community burden. However, few studies have addressed the methodology for processing the information contained in drug orders in order to study the quality of drug prescriptions and prescriber behavior. We present a comprehensive set of quantitative indicators of drug prescription quality that can be derived from a drug order. These indicators were constructed using explicit a priori criteria that were previously validated on the basis of scientific data. Automatic computation is straightforward using a relational database system, so that large sets of prescriptions can be processed with minimal human effort. We illustrate the feasibility and value of this approach using a large set of 23,000 prescriptions for several diseases, selected from a nationally representative prescriptions database. This approach may have direct and wide applications in the epidemiology of medical practice and in quality control procedures.
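The abstract's point that such indicators are straightforward to compute in a relational database can be sketched with SQLite; the schema and the overdose indicator below are entirely hypothetical, invented for illustration only.

```python
import sqlite3

# Hypothetical mini-schema for illustration: one row per prescribed drug line.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE prescription_line (
    order_id   INTEGER,
    drug       TEXT,
    daily_dose REAL,
    max_daily  REAL    -- reference maximum from a drug knowledge base
);
INSERT INTO prescription_line VALUES
    (1, 'amoxicillin', 3000, 3000),
    (1, 'ibuprofen',   3200, 2400),
    (2, 'ibuprofen',   1200, 2400),
    (2, 'paracetamol', 3000, 4000);
""")

# Example indicator: fraction of orders with at least one overdosed line.
row = con.execute("""
    SELECT CAST(COUNT(DISTINCT CASE WHEN daily_dose > max_daily
                                    THEN order_id END) AS REAL)
           / COUNT(DISTINCT order_id)
    FROM prescription_line
""").fetchone()
overdose_rate = row[0]
```

With the toy data above, one of the two orders contains a line above the reference maximum, so the indicator evaluates to 0.5.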

  11. Basis sets for the calculation of core-electron binding energies

    NASA Astrophysics Data System (ADS)

    Hanson-Heine, Magnus W. D.; George, Michael W.; Besley, Nicholas A.

    2018-05-01

    Core-electron binding energies (CEBEs) computed within a Δ self-consistent field approach require large basis sets to achieve convergence with respect to the basis set limit. It is shown that supplementing a basis set with basis functions from the corresponding basis set for the element with the next highest nuclear charge (Z + 1) provides basis sets that give CEBEs close to the basis set limit. This simple procedure provides relatively small basis sets that are well suited for calculations where the description of a core-ionised state is important, such as time-dependent density functional theory calculations of X-ray emission spectroscopy.

  12. Minimization of the energy loss of nuclear power plants in case of partial in-core monitoring system failure

    NASA Astrophysics Data System (ADS)

    Zagrebaev, A. M.; Ramazanov, R. N.; Lunegova, E. A.

    2017-01-01

In this paper we consider the problem of minimizing the energy loss of nuclear power plants in the case of partial in-core monitoring system failure. The options are either continued reactor operation at reduced power or total replacement of the in-core neutron measurement channels, which requires shutting down the reactor and maintaining a stock of detectors. This article examines the reconstruction of the energy release in the core of a nuclear reactor on the basis of the indications of height-distributed sensors. The missing measurement information can be reconstructed by mathematical methods, so that replacement of the failed sensors can be avoided. It is suggested that a set of 'natural' basis functions be constructed by means of statistical estimates obtained from archival data. The proposed procedure makes it possible to reconstruct the field even with a significant loss of measurement information. Improving the accuracy of neutron flux density reconstruction under partial loss of measurement information minimizes the required stock of components and the associated losses.
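The reconstruction strategy described, an empirical basis estimated from archival data plus a fit to the surviving sensors, can be sketched generically. Everything below (the synthetic axial profiles, the SVD-based 'natural' basis, the least-squares fit) is an illustrative stand-in, not the authors' reactor model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic archive: axial power profiles on a 40-point grid built from a few
# smooth modes -- a stand-in for archival in-core measurement data.
z = np.linspace(0.0, 1.0, 40)
modes = np.stack([np.sin(np.pi * k * z) for k in (1, 2, 3)])
archive = rng.standard_normal((200, 3)) @ modes        # 200 archival snapshots

# Empirical ("natural") basis: leading right-singular vectors of the archive.
_, _, Vt = np.linalg.svd(archive, full_matrices=False)
basis = Vt[:3]                                         # shape (3, 40)

# A new profile, observed only at the handful of surviving sensor positions.
true_field = np.array([0.8, -0.5, 0.3]) @ modes
sensors = np.array([2, 9, 17, 25, 33])                 # working sensor indices

# Least-squares fit of the basis coefficients to the surviving sensors,
# then reconstruction of the full axial field.
coeffs, *_ = np.linalg.lstsq(basis[:, sensors].T, true_field[sensors], rcond=None)
reconstructed = coeffs @ basis

err = np.max(np.abs(reconstructed - true_field))
```

Because the new profile lies in the span of the archival modes, five working sensors suffice to recover all 40 grid values essentially exactly in this idealized setting.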

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Papajak, Ewa; Truhlar, Donald G.

We present sets of convergent, partially augmented basis set levels corresponding to subsets of the augmented “aug-cc-pV(n+d)Z” basis sets of Dunning and co-workers. We show that for many molecular properties a basis set fully augmented with diffuse functions is computationally expensive and almost always unnecessary. On the other hand, unaugmented cc-pV(n+d)Z basis sets are insufficient for many properties that require diffuse functions. Therefore, we propose using intermediate basis sets. We developed an efficient strategy for partial augmentation, and in this article, we test it and validate it. Sequentially deleting diffuse basis functions from the “aug” basis sets yields the “jul”, “jun”, “may”, “apr”, etc. basis sets. Tests of these basis sets for Møller-Plesset second-order perturbation theory (MP2) show the advantages of using these partially augmented basis sets and allow us to recommend which basis sets offer the best accuracy for a given number of basis functions for calculations on large systems. Similar truncations in the diffuse space can be performed for the aug-cc-pVxZ, aug-cc-pCVxZ, etc. basis sets.

  14. SCF and CI calculations of the dipole moment function of ozone. [Self-Consistent Field and Configuration-Interaction

    NASA Technical Reports Server (NTRS)

    Curtiss, L. A.; Langhoff, S. R.; Carney, G. D.

    1979-01-01

    The constant and linear terms in a Taylor series expansion of the dipole moment function of the ground state of ozone are calculated with Cartesian Gaussian basis sets ranging in quality from minimal to double zeta plus polarization. Results are presented at both the self-consistent field and configuration-interaction levels. Although the algebraic signs of the linear dipole moment derivatives are all established to be positive, the absolute magnitudes of these quantities, as well as the infrared intensities calculated from them, vary considerably with the level of theory.

  15. Estimating the CCSD basis-set limit energy from small basis sets: basis-set extrapolations vs additivity schemes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spackman, Peter R.; Karton, Amir, E-mail: amir.karton@uwa.edu.au

Coupled cluster calculations with all single and double excitations (CCSD) converge exceedingly slowly with the size of the one-particle basis set. We assess the performance of a number of approaches for obtaining CCSD correlation energies close to the complete basis-set limit in conjunction with relatively small DZ and TZ basis sets. These include global and system-dependent extrapolations based on the A + B/Lα two-point extrapolation formula, and the well-known additivity approach that uses an MP2-based basis-set-correction term. We show that the basis set convergence rate can change dramatically between different systems (e.g., it is slower for molecules with polar bonds and/or second-row elements). The system-dependent basis-set extrapolation scheme, in which unique basis-set extrapolation exponents for each system are obtained from lower-cost MP2 calculations, significantly accelerates the basis-set convergence relative to the global extrapolations. Nevertheless, we find that the simple MP2-based basis-set additivity scheme outperforms the extrapolation approaches. For example, the following root-mean-squared deviations are obtained for the 140 basis-set limit CCSD atomization energies in the W4-11 database: 9.1 (global extrapolation), 3.7 (system-dependent extrapolation), and 2.4 (additivity scheme) kJ mol–1. The CCSD energy in these approximations is obtained from basis sets of up to TZ quality and the latter two approaches require additional MP2 calculations with basis sets of up to QZ quality. We also assess the performance of the basis-set extrapolations and additivity schemes for a set of 20 basis-set limit CCSD atomization energies of larger molecules including amino acids, DNA/RNA bases, aromatic compounds, and platonic hydrocarbon cages. We obtain the following RMSDs for the above methods: 10.2 (global extrapolation), 5.7 (system-dependent extrapolation), and 2.9 (additivity scheme) kJ mol–1.

  16. Estimating the CCSD basis-set limit energy from small basis sets: basis-set extrapolations vs additivity schemes

    NASA Astrophysics Data System (ADS)

    Spackman, Peter R.; Karton, Amir

    2015-05-01

Coupled cluster calculations with all single and double excitations (CCSD) converge exceedingly slowly with the size of the one-particle basis set. We assess the performance of a number of approaches for obtaining CCSD correlation energies close to the complete basis-set limit in conjunction with relatively small DZ and TZ basis sets. These include global and system-dependent extrapolations based on the A + B/Lα two-point extrapolation formula, and the well-known additivity approach that uses an MP2-based basis-set-correction term. We show that the basis set convergence rate can change dramatically between different systems (e.g., it is slower for molecules with polar bonds and/or second-row elements). The system-dependent basis-set extrapolation scheme, in which unique basis-set extrapolation exponents for each system are obtained from lower-cost MP2 calculations, significantly accelerates the basis-set convergence relative to the global extrapolations. Nevertheless, we find that the simple MP2-based basis-set additivity scheme outperforms the extrapolation approaches. For example, the following root-mean-squared deviations are obtained for the 140 basis-set limit CCSD atomization energies in the W4-11 database: 9.1 (global extrapolation), 3.7 (system-dependent extrapolation), and 2.4 (additivity scheme) kJ mol-1. The CCSD energy in these approximations is obtained from basis sets of up to TZ quality and the latter two approaches require additional MP2 calculations with basis sets of up to QZ quality. We also assess the performance of the basis-set extrapolations and additivity schemes for a set of 20 basis-set limit CCSD atomization energies of larger molecules including amino acids, DNA/RNA bases, aromatic compounds, and platonic hydrocarbon cages. We obtain the following RMSDs for the above methods: 10.2 (global extrapolation), 5.7 (system-dependent extrapolation), and 2.9 (additivity scheme) kJ mol-1.
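For a fixed exponent, the two-point formula E(L) = A + B/Lα can be inverted in closed form for the CBS estimate A. The sketch below assumes a global α = 3 for illustration, whereas the paper also fits system-dependent exponents from MP2 calculations.

```python
def cbs_two_point(e_lo, e_hi, l_lo, l_hi, alpha=3.0):
    """Estimate the complete-basis-set limit A from E(L) = A + B / L**alpha,
    given energies at two cardinal numbers (e.g. L = 2 for DZ, L = 3 for TZ)."""
    w_lo, w_hi = l_lo ** alpha, l_hi ** alpha
    return (e_hi * w_hi - e_lo * w_lo) / (w_hi - w_lo)

# Synthetic check: energies generated from a known A and B (hartree-like
# numbers, invented for illustration) are extrapolated back to A exactly.
A_exact, B = -76.40, 0.5
e_dz = A_exact + B / 2 ** 3
e_tz = A_exact + B / 3 ** 3
cbs = cbs_two_point(e_dz, e_tz, 2, 3, alpha=3.0)
```

Since the synthetic energies follow the model exactly, the recovered limit matches A to machine precision; on real correlation energies the result depends on how well the assumed α describes the actual convergence.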

  17. Raman spectra of thiolated arsenicals with biological importance.

    PubMed

    Yang, Mingwei; Sun, Yuzhen; Zhang, Xiaobin; McCord, Bruce; McGoron, Anthony J; Mebel, Alexander; Cai, Yong

    2018-03-01

Surface-enhanced Raman scattering (SERS) has great potential as an alternative tool for arsenic speciation in biological matrices. SERS measurements have advantages over other techniques due to their ability to maintain the integrity of arsenic species and their minimal requirements for sample preparation. Up to now, very few Raman spectra of arsenic compounds have been reported. This is particularly true for thiolated arsenicals, which have recently been found to be widely present in humans. The lack of Raman spectral data for arsenic speciation hampers the development of new tools using SERS. Herein, we report the results of a study combining the analysis of experimental Raman spectra with that obtained from density functional calculations for some important arsenic metabolites. The results were obtained with a hybrid-functional B3LYP approach using different basis sets to calculate Raman spectra of the selected arsenicals. By comparing experimental and calculated spectra of dimethylarsinic acid (DMAV), the 6-311++G** basis set was found to provide both computational efficiency and precision in vibrational frequency prediction. The Raman frequencies of the remaining organoarsenicals were studied using this basis set, including monomethylarsonous acid (MMAIII), dimethylarsinous acid (DMAIII), dimethylmonothioarsinic acid (DMMTAV), dimethyldithioarsinic acid (DMDTAV), S-(dimethylarsenic) cysteine (DMAIII(Cys)), and dimethylarsinous glutathione (DMAIIIGS). The results were compared with fingerprint Raman frequencies of As─O, As─C, and As─S obtained under different chemical environments. These fingerprint vibrational frequencies should prove useful in future measurements of different arsenic species using SERS. Copyright © 2017 Elsevier B.V. All rights reserved.

  18. Problem of quantifying quantum correlations with non-commutative discord

    NASA Astrophysics Data System (ADS)

    Majtey, A. P.; Bussandri, D. G.; Osán, T. M.; Lamberti, P. W.; Valdés-Hernández, A.

    2017-09-01

In this work we analyze a non-commutativity measure of quantum correlations recently proposed by Guo (Sci Rep 6:25241, 2016). By means of a systematic survey of a two-qubit system, we detected an undesirable behavior of this measure related to its representation dependence. In the case of pure states, this dependence manifests as an unsatisfactory entanglement measure whenever a representation other than the Schmidt one is used. In order to avoid this basis-dependence feature, we argue that a minimization procedure over the set of all possible representations of the quantum state is required. In the case of pure states, this minimization can be performed analytically, and the optimal basis turns out to be the Schmidt basis. In addition, the resulting measure inherits the main properties of Guo's measure and, unlike the latter, reduces to a legitimate entanglement measure in the case of pure states. Some examples involving general mixed states are also analyzed under this optimization. The results show that, in most cases of interest, the use of Guo's measure can result in an overestimation of quantum correlations. However, since Guo's measure has the advantage of being easily computable, it might be used as a qualitative estimator of the presence of quantum correlations.
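For pure states, the optimal (Schmidt) basis referred to above can be computed directly: writing |ψ⟩ = Σ_ij c_ij |i⟩|j⟩, the singular values of the coefficient matrix c are the Schmidt coefficients, and the singular vectors give the optimal local bases. A minimal NumPy check, with the Bell state as an example:

```python
import numpy as np

def schmidt_coefficients(state):
    """Schmidt coefficients of a bipartite pure state |psi> = sum_ij c_ij |i>|j>.

    `state` is the coefficient matrix c_ij; its singular values are exactly
    the Schmidt coefficients (the singular vectors are the Schmidt bases).
    """
    return np.linalg.svd(state, compute_uv=False)

# Bell state (|00> + |11>)/sqrt(2): maximally entangled, two equal coefficients.
bell = np.array([[1.0, 0.0], [0.0, 1.0]]) / np.sqrt(2.0)
coeffs_bell = schmidt_coefficients(bell)

# Product state |0>|+>: a single nonzero Schmidt coefficient, no entanglement.
product = np.array([[1.0, 1.0], [0.0, 0.0]]) / np.sqrt(2.0)
coeffs_product = schmidt_coefficients(product)
```

A state is entangled exactly when more than one Schmidt coefficient is nonzero, which is why the Schmidt representation is the natural one for a pure-state entanglement measure.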

  19. Using the critical incident technique to define a minimal data set for requirements elicitation in public health.

    PubMed

    Olvingson, Christina; Hallberg, Niklas; Timpka, Toomas; Greenes, Robert A

    2002-12-18

The introduction of computer-based information systems (ISs) in public health provides enhanced possibilities for service improvements and hence also for improvement of the population's health. Not least, new communication systems can help in the socialization and integration needed between the different professions and geographical regions. Therefore, development of ISs that truly support public health practices requires that technical, cognitive, and social issues be taken into consideration. A notable problem is to capture the 'voices' of all potential users, i.e., the viewpoints of different public health practitioners. Failing to capture these voices will result in inefficient or even useless systems. The aim of this study is to develop a minimal data set for capturing users' voices on problems experienced by public health professionals in their daily work and their opinions about how these problems can be solved. The issues of concern thus captured can be used both as the basis for formulating the requirements of ISs for public health professionals and to create an understanding of the context of use. Further, the data can help in directing the design toward the features most important to the users.

  20. Hospital protocols for targeted glycemic control: Development, implementation, and models for cost justification.

    PubMed

    Magee, Michelle F

    2007-05-15

    Evolving elements of best practices for providing targeted glycemic control in the hospital setting, clinical performance measurement, basal-bolus plus correction-dose insulin regimens, components of standardized subcutaneous (s.c.) insulin order sets, and strategies for implementation and cost justification of glycemic control initiatives are discussed. Best practices for targeted glycemic control should address accurate documentation of hyperglycemia, initial patient assessment, management plan, target blood glucose range, blood glucose monitoring frequency, maintenance of glycemic control, criteria for glucose management consultations, and standardized insulin order sets and protocols. Establishing clinical performance measures, including desirable processes and outcomes, can help ensure the success of targeted hospital glycemic control initiatives. The basal-bolus plus correction-dose regimen for insulin administration will be used to mimic the normal physiologic pattern of endogenous insulin secretion. Standardized insulin order sets and protocols are being used to minimize the risk of error in insulin therapy. Components of standardized s.c. insulin order sets include specification of the hyperglycemia diagnosis, finger stick blood glucose monitoring frequency and timing, target blood glucose concentration range, cutoff values for excessively high or low blood glucose concentrations that warrant alerting the physician, basal and prandial or nutritional (i.e., bolus) insulin, correction doses, hypoglycemia treatment, and perioperative or procedural dosage adjustments. The endorsement of hospital administrators and key physician and nursing leaders is needed for glycemic control initiatives. 
Initiatives may be cost justified on the basis of billings for clinical diabetes management services and/or the return-on-investment accrued from reductions in hospital length of stay and readmissions and from accurate documentation and coding of unrecognized or uncontrolled diabetes and diabetes complications. Standardized insulin order sets and protocols may minimize the risk of insulin errors. The endorsement of these protocols by administrators, physicians, nurses, and pharmacists is also needed for success.

  1. A new digitized reverse correction method for hypoid gears based on a one-dimensional probe

    NASA Astrophysics Data System (ADS)

    Li, Tianxing; Li, Jubo; Deng, Xiaozhong; Yang, Jianjun; Li, Genggeng; Ma, Wensuo

    2017-12-01

    In order to improve the tooth surface geometric accuracy and transmission quality of hypoid gears, a new digitized reverse correction method is proposed based on the measurement data from a one-dimensional probe. The minimization of tooth surface geometrical deviations is realized from the perspective of mathematical analysis and reverse engineering. Combining the analysis of complex tooth surface generation principles and the measurement mechanism of one-dimensional probes, the mathematical relationship between the theoretical designed tooth surface, the actual machined tooth surface and the deviation tooth surface is established, the mapping relation between machine-tool settings and tooth surface deviations is derived, and the essential connection between the accurate calculation of tooth surface deviations and the reverse correction method of machine-tool settings is revealed. Furthermore, a reverse correction model of machine-tool settings is built, a reverse correction strategy is planned, and the minimization of tooth surface deviations is achieved by means of the method of numerical iterative reverse solution. On this basis, a digitized reverse correction system for hypoid gears is developed by the organic combination of numerical control generation, accurate measurement, computer numerical processing, and digitized correction. Finally, the correctness and practicability of the digitized reverse correction method are proved through a reverse correction experiment. The experimental results show that the tooth surface geometric deviations meet the engineering requirements after two trial cuts and one correction.
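The iterative reverse-solution loop described above can be sketched generically as Gauss-Newton least squares on a settings-to-deviation map. The forward model below is synthetic and stands in for the real gear-cutting kinematics, which the paper derives from the machine geometry and measurement mechanism.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical smooth forward model: machine-tool settings x -> tooth-surface
# deviations at m measurement points (a stand-in for the real kinematics).
m, n = 45, 3                                # 45 grid points, 3 settings
J0 = rng.standard_normal((m, n))

def surface_deviation(x, x_star):
    d = x - x_star
    return J0 @ d + 0.05 * (J0 @ d) ** 2    # mildly nonlinear in the settings

x_star = np.array([0.3, -0.2, 0.1])         # "correct" settings (unknown)
x = np.zeros(3)                             # nominal settings, first trial cut

# Gauss-Newton style reverse correction: linearize numerically, solve the
# least-squares system for the settings update, repeat.
for _ in range(5):
    r = surface_deviation(x, x_star)        # "measured" deviations
    eps = 1e-6
    J = np.column_stack(
        [(surface_deviation(x + eps * np.eye(3)[k], x_star) - r) / eps
         for k in range(3)])
    dx, *_ = np.linalg.lstsq(J, -r, rcond=None)
    x = x + dx

residual = np.linalg.norm(surface_deviation(x, x_star))
```

In this idealized zero-residual setting a handful of iterations drives the surface deviations to numerical zero, mirroring the paper's observation that two trial cuts and one correction sufficed in practice.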

  2. Evolutionary profiles derived from the QR factorization of multiple structural alignments gives an economy of information.

    PubMed

    O'Donoghue, Patrick; Luthey-Schulten, Zaida

    2005-02-25

We present a new algorithm, based on the multidimensional QR factorization, to remove redundancy from a multiple structural alignment by choosing representative protein structures that best preserve the phylogenetic tree topology of the homologous group. The classical QR factorization with pivoting, developed as a fast numerical solution to eigenvalue and linear least-squares problems of the form Ax=b, was designed to re-order the columns of A by increasing linear dependence. Removing the most linearly dependent columns from A leads to the formation of a minimal basis set which well spans the phase space of the problem at hand. By recasting the problem of redundancy in multiple structural alignments into this framework, in which the matrix A now describes the multiple alignment, we adapted the QR factorization to produce a minimal basis set of protein structures which best spans the evolutionary (phase) space. The non-redundant and representative profiles obtained from this procedure, termed evolutionary profiles, are shown in initial results to outperform well-tested profiles in homology detection searches over a large sequence database. A measure of structural similarity between homologous proteins, Q(H), is presented. By properly accounting for the effect and presence of gaps, a phylogenetic tree computed using this metric is shown to be congruent with the maximum-likelihood sequence-based phylogeny. The results indicate that evolutionary information is indeed recoverable from the comparative analysis of protein structure alone. Applications of the QR ordering and this structural similarity metric to analyze the evolution of structure among key, universally distributed proteins involved in translation, and to the selection of representatives from an ensemble of NMR structures are also discussed.
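The column-pivoting idea, ordering columns by increasing linear dependence and keeping a well-spanning subset, can be illustrated with a small greedy Gram-Schmidt sketch (a simplified stand-in for a full Householder QR with pivoting); the "alignment" matrix below is a toy example, not real structural data.

```python
import numpy as np

def qr_column_order(A):
    """Order the columns of A by increasing linear dependence.

    Greedy sketch of QR with column pivoting: at each step pick the column
    with the largest residual norm after projecting out the columns already
    chosen. Early indices = most independent (representative) columns;
    late indices = most redundant.
    """
    R = A.astype(float).copy()
    order = []
    for _ in range(min(A.shape)):
        norms = np.linalg.norm(R, axis=0)
        norms[order] = -1.0                  # ignore already-chosen columns
        j = int(np.argmax(norms))
        order.append(j)
        q = R[:, j] / np.linalg.norm(R[:, j])
        R = R - np.outer(q, q @ R)           # project q out of all columns
    return order

# Toy "alignment matrix": columns 0 and 2 are near-duplicates and column 3
# dominates -- stand-ins for redundant vs representative protein structures.
A = np.array([[1.0, 0.0, 1.0, 1.0],
              [0.0, 1.0, 0.1, 1.0],
              [0.0, 0.0, 0.0, 1.0],
              [1.0, 1.0, 1.0, 2.0]])
order = qr_column_order(A)
```

The ordering puts the near-duplicate of an already-selected column last, which is exactly the redundancy-removal behavior the algorithm exploits for structural alignments.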

  3. Sparse matrix multiplications for linear scaling electronic structure calculations in an atom-centered basis set using multiatom blocks.

    PubMed

    Saravanan, Chandra; Shao, Yihan; Baer, Roi; Ross, Philip N; Head-Gordon, Martin

    2003-04-15

    A sparse matrix multiplication scheme with multiatom blocks is reported, a tool that can be very useful for developing linear-scaling methods with atom-centered basis functions. Compared to conventional element-by-element sparse matrix multiplication schemes, efficiency is gained by the use of the highly optimized basic linear algebra subroutines (BLAS). However, some sparsity is lost in the multiatom blocking scheme because these matrix blocks will in general contain negligible elements. As a result, an optimal block size that minimizes the CPU time by balancing these two effects is recovered. In calculations on linear alkanes, polyglycines, estane polymers, and water clusters the optimal block size is found to be between 40 and 100 basis functions, where about 55-75% of the machine peak performance was achieved on an IBM RS6000 workstation. In these calculations, the blocked sparse matrix multiplications can be 10 times faster than a standard element-by-element sparse matrix package. Copyright 2003 Wiley Periodicals, Inc. J Comput Chem 24: 618-622, 2003
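The blocking idea can be illustrated with a toy block-sparse format: storing dense blocks and multiplying block-by-block hands the inner work to a dense BLAS-backed kernel (np.matmul here) instead of element-by-element sparse arithmetic. The block size and sparsity pattern below are arbitrary illustrations, not the paper's multiatom blocks.

```python
import numpy as np

def block_sparse_matmul(A_blocks, B_blocks):
    """Multiply two block-sparse matrices stored as dicts mapping
    (block_row, block_col) -> dense block; only matching inner block
    indices contribute, and each contribution is a dense matmul."""
    C = {}
    for (i, k), Ab in A_blocks.items():
        for (k2, j), Bb in B_blocks.items():
            if k == k2:
                C[(i, j)] = C.get((i, j), 0) + Ab @ Bb
    return C

def to_dense(blocks, shape, bs):
    """Assemble the dense matrix for verification purposes."""
    M = np.zeros((shape[0] * bs, shape[1] * bs))
    for (i, j), b in blocks.items():
        M[i * bs:(i + 1) * bs, j * bs:(j + 1) * bs] = b
    return M

rng = np.random.default_rng(0)
bs = 3                                       # block size ("multiatom block")
A_blocks = {(0, 0): rng.standard_normal((bs, bs)),
            (1, 1): rng.standard_normal((bs, bs))}
B_blocks = {(0, 0): rng.standard_normal((bs, bs)),
            (1, 0): rng.standard_normal((bs, bs))}
C_blocks = block_sparse_matmul(A_blocks, B_blocks)

max_err = np.max(np.abs(to_dense(C_blocks, (2, 1), bs)
                        - to_dense(A_blocks, (2, 2), bs)
                        @ to_dense(B_blocks, (2, 1), bs)))
```

As the abstract notes, real blocks contain some negligible elements, so the block size trades BLAS efficiency against lost sparsity; this sketch only shows the bookkeeping.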

  4. On the strong metric dimension of generalized butterfly graph, starbarbell graph, and C_m ⊙ P_n graph

    NASA Astrophysics Data System (ADS)

    Yunia Mayasari, Ratih; Atmojo Kusmayadi, Tri

    2018-04-01

Let G be a connected graph with vertex set V(G) and edge set E(G). For every pair of vertices u, v ∈ V(G), the interval I[u, v] between u and v is the collection of all vertices that belong to some shortest u–v path. A vertex s ∈ V(G) strongly resolves two vertices u and v if u belongs to a shortest v–s path or v belongs to a shortest u–s path. A vertex set S of G is a strong resolving set of G if every two distinct vertices of G are strongly resolved by some vertex of S. The strong metric basis of G is a strong resolving set with minimal cardinality. The strong metric dimension sdim(G) of a graph G is defined as the cardinality of a strong metric basis. In this paper we determine the strong metric dimension of the generalized butterfly graph, the starbarbell graph, and the C_m ⊙ P_n graph. We obtain the strong metric dimension of the generalized butterfly graph as sdim(BF_n) = 2n − 2. The strong metric dimension of the starbarbell graph is sdim(SB_{m1,m2,…,mn}) = Σ_{i=1}^{n} (m_i − 1) − 1. The strong metric dimensions of the C_m ⊙ P_n graph are sdim(C_m ⊙ P_n) = 2m − 1 for m > 3 and n = 2, and sdim(C_m ⊙ P_n) = 2m − 2 for m > 3 and n > 2.
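The definitions above can be turned into a tiny brute-force checker. The helper below is illustrative only (it enumerates all vertex subsets, so it is exponential in graph size), and the test graphs are a 4-cycle and a 4-vertex path rather than the paper's graph families.

```python
from itertools import combinations

def strong_metric_dimension(adj):
    """Brute-force sdim(G) for a small graph given as an adjacency dict,
    directly from the definitions (exponential; toy sizes only)."""
    nodes = sorted(adj)
    dist = {}
    for s in nodes:                          # BFS distances from every vertex
        d, frontier = {s: 0}, [s]
        while frontier:
            nxt = []
            for v in frontier:
                for w in adj[v]:
                    if w not in d:
                        d[w] = d[v] + 1
                        nxt.append(w)
            frontier = nxt
        for t in nodes:
            dist[s, t] = d[t]

    def strongly_resolves(s, u, v):
        # u lies on a shortest s-v path, or v lies on a shortest s-u path.
        return (dist[s, v] == dist[s, u] + dist[u, v]
                or dist[s, u] == dist[s, v] + dist[v, u])

    pairs = list(combinations(nodes, 2))
    for k in range(1, len(nodes) + 1):
        for S in combinations(nodes, k):
            if all(any(strongly_resolves(s, u, v) for s in S)
                   for u, v in pairs):
                return k

# 4-cycle C4 and 4-vertex path P4 as sanity checks.
c4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
p4 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
sdim_c4 = strong_metric_dimension(c4)
sdim_p4 = strong_metric_dimension(p4)
```

For the path, one end vertex strongly resolves every pair, giving dimension 1; for the 4-cycle no single vertex resolves the two "antipodal" pairs, giving dimension 2.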

  5. Large-scale quantum transport calculations for electronic devices with over ten thousand atoms

    NASA Astrophysics Data System (ADS)

    Lu, Wenchang; Lu, Yan; Xiao, Zhongcan; Hodak, Miro; Briggs, Emil; Bernholc, Jerry

The non-equilibrium Green's function (NEGF) method has been implemented in our massively parallel DFT software, the real-space multigrid (RMG) code suite. Our implementation employs multi-level parallelization strategies and fully utilizes both multi-core CPUs and GPU accelerators. Since the cost of the calculations increases dramatically with the number of orbitals, an optimal basis set is crucial for including a large number of atoms in the ``active device'' part of the simulations. In our implementation, the localized orbitals are separately optimized for each principal layer of the device region, in order to obtain an accurate and optimal basis set. As a large example, we calculated the transmission characteristics of a Si nanowire p-n junction. The nanowire is oriented along the (110) direction in order to minimize the number of dangling bonds, which are saturated by H atoms. Its diameter is 3 nm. The length of 24 nm is necessary because of the long-range screening length in Si. Our calculations clearly show the I-V characteristics of a diode, i.e., the current increases exponentially with forward bias and is near zero with backward bias. Other examples will also be presented, including three-terminal transistors and large sensor structures.
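The Landauer transmission underlying such calculations can be illustrated with the textbook 1D tight-binding NEGF formula T(E) = Tr[Γ_L G Γ_R G†]. The snippet below is a minimal model, a perfect chain with analytic semi-infinite lead self-energies, not the RMG implementation.

```python
import numpy as np

def transmission(E, n_sites=5, t=1.0, eta=1e-9):
    """T(E) = Tr[Gamma_L G Gamma_R G^dagger] for a perfect 1D tight-binding
    chain (hopping -t, zero onsite energy) between two identical leads."""
    Ec = E + 1j * eta
    # Retarded surface Green's function of a semi-infinite 1D lead (analytic,
    # valid inside the band |E| < 2t).
    g = (Ec - 1j * np.sqrt(4 * t ** 2 - Ec ** 2)) / (2 * t ** 2)
    sigma = t ** 2 * g                      # lead self-energy on the edge site

    H = -t * (np.eye(n_sites, k=1) + np.eye(n_sites, k=-1))
    Sigma_L = np.zeros((n_sites, n_sites), complex); Sigma_L[0, 0] = sigma
    Sigma_R = np.zeros((n_sites, n_sites), complex); Sigma_R[-1, -1] = sigma

    G = np.linalg.inv(Ec * np.eye(n_sites) - H - Sigma_L - Sigma_R)
    Gamma_L = 1j * (Sigma_L - Sigma_L.conj().T)   # broadening matrices
    Gamma_R = 1j * (Sigma_R - Sigma_R.conj().T)
    return float(np.real(np.trace(Gamma_L @ G @ Gamma_R @ G.conj().T)))

T0 = transmission(0.0)
```

For a defect-free chain the transmission is unity at every energy inside the band, which is the standard sanity check for an NEGF setup; real device calculations replace H and the self-energies with the DFT Hamiltonian in the optimized localized-orbital basis.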

  6. The leap-frog effect of ring currents in benzene.

    PubMed

    Ligabue, Andrea; Soncini, Alessandro; Lazzeretti, Paolo

    2002-03-06

    Symmetry arguments show that the ring-current model proposed by Pauling, Lonsdale, and London to explain the enhanced diamagnetism of benzene is flawed by an intrinsic drawback. The minimal basis set of six atomic 2p orbitals taken into account to develop such a model is inherently insufficient to predict a paramagnetic contribution to the perpendicular component of magnetic susceptibility in planar ring systems such as benzene. Analogous considerations can be made for the hypothetical H(6) cyclic molecule. A model allowing for extended basis sets is necessary to rationalize the magnetism of aromatics. According to high-quality coupled Hartree-Fock calculations, the trajectories of the current density vector field induced by a magnetic field perpendicular to the skeletal plane of benzene in the pi electrons are noticeably different from those typical of a Larmor diamagnetic circulation, in that (i) significant deformation of the orbits from circular to hexagonal symmetry occurs, which is responsible for a paramagnetic contribution of pi electrons to the out-of-plane component of susceptibility, and (ii) a sizable component of the pi current density vector parallel to the inducing field is predicted. This causes a waving motion of pi electrons; streamlines are characterized by a "leap-frog effect".

  7. The analysis of factors of management of safety of critical information infrastructure with use of dynamic models

    NASA Astrophysics Data System (ADS)

    Trostyansky, S. N.; Kalach, A. V.; Lavlinsky, V. V.; Lankin, O. V.

    2018-03-01

Based on the analysis of a dynamic model of panel data by region, including fire statistics for surveillance sites, statistics for a set of regional socio-economic indicators, and the rapid-response times of the state fire service to fires, the probability of fires at surveillance sites and the risk of human death as a result of such fires are estimated from the values of the corresponding indicators for the previous year, the set of regional socio-economic factors, and the regional rapid-response times of the state fire service. The results obtained are consistent with those of applying the rational-offender model to fire risks. An estimate of the economic equivalent of human life, obtained from data on surveillance objects for Russia and calculated on the basis of the presented dynamic model of fire risks, agrees well with known literature data. The results obtained on the basis of this econometric approach to fire risks allow us to forecast fire risks at the surveillance sites in the regions of Russia and to develop management decisions that minimize such risks.

  8. Correlation consistent basis sets for the atoms In–Xe

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mahler, Andrew; Wilson, Angela K., E-mail: akwilson@unt.edu

    In this work, the correlation consistent family of Gaussian basis sets has been expanded to include all-electron basis sets for In–Xe. The methodology for developing these basis sets is described, and several examples of the performance and utility of the new sets have been provided. Dissociation energies and bond lengths for both homonuclear and heteronuclear diatomics demonstrate the systematic convergence behavior with respect to increasing basis set quality expected by the family of correlation consistent basis sets in describing molecular properties. Comparison with recently developed correlation consistent sets designed for use with the Douglas-Kroll Hamiltonian is provided.

  9. Optimal resource states for local state discrimination

    NASA Astrophysics Data System (ADS)

    Bandyopadhyay, Somshubhro; Halder, Saronath; Nathanson, Michael

    2018-02-01

    We study the problem of locally distinguishing pure quantum states using shared entanglement as a resource. For a given set of locally indistinguishable states, we define a resource state to be useful if it can enhance local distinguishability and optimal if it can distinguish the states as well as global measurements and is also minimal with respect to a partial ordering defined by entanglement and dimension. We present examples of useful resources and show that an entangled state need not be useful for distinguishing a given set of states. We obtain optimal resources with explicit local protocols to distinguish multipartite Greenberger-Horne-Zeilinger and graph states and also show that a maximally entangled state is an optimal resource under one-way local operations and classical communication to distinguish any bipartite orthonormal basis which contains at least one entangled state of full Schmidt rank.

  10. Application of artificial neural networks to chemostratigraphy

    NASA Astrophysics Data System (ADS)

Malmgren, Björn A.; Nordlund, Ulf

    1996-08-01

Artificial neural networks, a branch of artificial intelligence, are computer systems formed by a number of simple, highly interconnected processing units that have the ability to learn a set of target vectors from a set of associated input signals. Neural networks learn by self-adjusting a set of parameters, using some pertinent algorithm to minimize the error between the desired output and the network output. We explore the potential of this approach in solving a problem involving the classification of geochemical data. The data, taken from the literature, are derived from four late Quaternary zones of volcanic ash of basaltic and rhyolitic origin from the Norwegian Sea. These ash layers span oxygen isotope zones 1, 5, 7, and 11, respectively (the last 420,000 years). The data consist of nine geochemical variables (oxides) determined in each of 183 samples. We employed a three-layer back-propagation neural network to assess its ability to optimally differentiate samples from the four ash zones on the basis of their geochemical composition. For comparison, three statistical pattern recognition techniques, linear discriminant analysis, the k-nearest neighbor (k-NN) technique, and SIMCA (soft independent modeling of class analogy), were applied to the same data. All of these showed considerably higher error rates than the artificial neural network, indicating that the back-propagation network was indeed more powerful in correctly classifying the ash particles to the appropriate zone on the basis of their geochemical composition.
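A minimal back-propagation classifier of the kind described can be written in a few dozen lines of NumPy. The data below are synthetic Gaussian clusters standing in for the nine-oxide geochemical measurements, and the architecture (one tanh hidden layer plus a softmax output, trained by full-batch gradient descent) is a generic choice, not the authors' exact network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: 2 "oxide" features, 4 classes ("ash zones"),
# well-separated Gaussian clusters.
n_per, n_cls = 40, 4
centers = np.array([[0.0, 0.0], [3.0, 0.0], [0.0, 3.0], [3.0, 3.0]])
X = np.vstack([c + 0.5 * rng.standard_normal((n_per, 2)) for c in centers])
y = np.repeat(np.arange(n_cls), n_per)
Y = np.eye(n_cls)[y]                          # one-hot targets

# Three-layer (input / hidden / output) network.
W1 = 0.5 * rng.standard_normal((2, 16)); b1 = np.zeros(16)
W2 = 0.5 * rng.standard_normal((16, n_cls)); b2 = np.zeros(n_cls)

def forward(X):
    H = np.tanh(X @ W1 + b1)                  # hidden layer
    logits = H @ W2 + b2
    P = np.exp(logits - logits.max(axis=1, keepdims=True))
    return H, P / P.sum(axis=1, keepdims=True)

lr, losses = 0.3, []
for _ in range(600):
    H, P = forward(X)
    losses.append(-np.mean(np.log(P[np.arange(len(y)), y])))
    dlogits = (P - Y) / len(y)                # softmax + cross-entropy gradient
    dW2, db2 = H.T @ dlogits, dlogits.sum(0)
    dpre = (dlogits @ W2.T) * (1.0 - H ** 2)  # chain rule through tanh
    dW1, db1 = X.T @ dpre, dpre.sum(0)
    W1 -= lr * dW1; b1 -= lr * db1; W2 -= lr * dW2; b2 -= lr * db2

accuracy = float(np.mean(forward(X)[1].argmax(axis=1) == y))
```

The error-minimizing weight updates are exactly the "self-adjusting a set of parameters" the abstract describes; on these well-separated clusters the training loss drops steadily and the classes are recovered almost perfectly.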

  11. An electroweak basis for neutrinoless double β decay

    NASA Astrophysics Data System (ADS)

    Graesser, Michael L.

    2017-08-01

    A discovery of neutrinoless double-β decay would be profound, providing the first direct experimental evidence of ΔL = 2 lepton number violating processes. While a natural explanation is provided by an effective Majorana neutrino mass, other new physics interpretations should be carefully evaluated. At low energies such new physics could manifest itself in the form of color and SU(2)L × U(1)Y invariant higher dimension operators. Here we determine a complete set of electroweak invariant dimension-9 operators, and our analysis supersedes those that only impose U(1)em invariance. Imposing electroweak invariance implies: 1) a significantly reduced set of leading order operators compared to only imposing U(1)em invariance; and 2) other collider signatures. Prior to imposing electroweak invariance we find a minimal basis of 24 dimension-9 operators, which is reduced to 11 electroweak invariant operators at leading order in the expansion in the Higgs vacuum expectation value. We set up a systematic analysis of the hadronic realization of the 4-quark operators using chiral perturbation theory, and apply it to determine which of these operators have long-distance pion enhancements at leading order in the chiral expansion. We also find at dimension-11 and dimension-13 the electroweak invariant operators that after electroweak symmetry breaking produce the remaining ΔL = 2 operators that would appear at dimension-9 if only U(1)em is imposed.

  12. Push it to the limit: Characterizing the convergence of common sequences of basis sets for intermolecular interactions as described by density functional theory

    NASA Astrophysics Data System (ADS)

    Witte, Jonathon; Neaton, Jeffrey B.; Head-Gordon, Martin

    2016-05-01

    With the aim of systematically characterizing the convergence of common families of basis sets such that general recommendations for basis sets can be made, we have tested a wide variety of basis sets against complete-basis binding energies across the S22 set of intermolecular interactions—noncovalent interactions of small and medium-sized molecules consisting of first- and second-row atoms—with three distinct density functional approximations: SPW92, a form of local-density approximation; B3LYP, a global hybrid generalized gradient approximation; and B97M-V, a meta-generalized gradient approximation with nonlocal correlation. We have found that it is remarkably difficult to reach the basis set limit; for the methods and systems examined, the most complete basis is Jensen's pc-4. The Dunning correlation-consistent sequence of basis sets converges slowly relative to the Jensen sequence. The Karlsruhe basis sets are quite cost effective, particularly when a correction for basis set superposition error is applied: counterpoise-corrected def2-SVPD binding energies are better than corresponding energies computed in comparably sized Dunning and Jensen bases, and on par with uncorrected results in basis sets 3-4 times larger. These trends are exhibited regardless of the level of density functional approximation employed. A sense of the magnitude of the intrinsic incompleteness error of each basis set not only provides a foundation for guiding basis set choice in future studies but also facilitates quantitative comparison of existing studies on similar types of systems.
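    The counterpoise correction referred to above follows the standard Boys-Bernardi recipe: evaluate each monomer in the full dimer basis (with ghost functions on the partner) so the basis set superposition error largely cancels. A small arithmetic sketch with purely hypothetical energies, not values from the paper:

```python
# Hypothetical SCF energies (hartree) for a dimer AB and its monomers; these
# numbers are invented for illustration, not taken from the paper.
E_AB      = -152.133   # dimer in the dimer basis
E_A_mono  = -76.060    # monomer A in its own basis
E_B_mono  = -76.058    # monomer B in its own basis
E_A_ghost = -76.062    # monomer A in the full dimer basis (ghost atoms on B)
E_B_ghost = -76.060    # monomer B in the full dimer basis (ghost atoms on A)

hartree_to_kcal = 627.509

# uncorrected binding energy (contaminated by BSSE)
E_bind = (E_AB - E_A_mono - E_B_mono) * hartree_to_kcal

# counterpoise-corrected binding energy: monomers evaluated in the dimer
# basis, so the basis set superposition error largely cancels
E_bind_cp = (E_AB - E_A_ghost - E_B_ghost) * hartree_to_kcal

bsse = E_bind_cp - E_bind   # positive: CP correction weakens the binding
```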

  13. Model and Algorithm for Substantiating Solutions for Organization of High-Rise Construction Project

    NASA Astrophysics Data System (ADS)

    Anisimov, Vladimir; Anisimov, Evgeniy; Chernysh, Anatoliy

    2018-03-01

    In this paper, models and an algorithm are developed for forming an optimal plan for organizing the material and logistical processes of a high-rise construction project and their financial support. The model is based on representing the optimization procedure as a non-linear problem of discrete programming, which consists in minimizing the execution time of a set of interrelated works by a limited number of partially interchangeable performers while limiting the total cost of performing the work. The proposed model and algorithm are the basis for creating specific organization management methodologies for the high-rise construction project.
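    The problem class described, minimizing the completion time of interrelated works with a limited number of partially interchangeable performers, can be illustrated by a simple greedy list-scheduling heuristic. This is not the paper's algorithm; the jobs, durations, and performer capabilities are invented for illustration.

```python
# Hypothetical jobs for a construction project:
# job -> (duration, prerequisites, performers able to do it)
jobs = {
    "excavation": (5, [], {0, 1}),
    "foundation": (7, ["excavation"], {0}),
    "framing":    (6, ["foundation"], {0, 1}),
    "electrical": (4, ["framing"], {1}),
    "plumbing":   (3, ["framing"], {0, 1}),
}

free_at = [0, 0]          # time at which each of the two performers is free
finish, done = {}, set()
while len(done) < len(jobs):
    # among jobs whose prerequisites are finished, schedule the one that can
    # start earliest on a capable performer (greedy list scheduling)
    ready = [j for j in jobs if j not in done
             and all(p in done for p in jobs[j][1])]
    best = None
    for j in ready:
        dur, prereq, allowed = jobs[j]
        earliest = max([finish[p] for p in prereq], default=0)
        for w in allowed:
            start = max(earliest, free_at[w])
            if best is None or start < best[0]:
                best = (start, j, w)
    start, j, w = best
    finish[j] = start + jobs[j][0]
    free_at[w] = finish[j]
    done.add(j)

makespan = max(finish.values())   # total execution time of the plan
```

    A greedy heuristic like this gives a feasible plan but not necessarily the optimum the paper's discrete-programming formulation seeks.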

  14. Explicit hydration of ammonium ion by correlated methods employing molecular tailoring approach

    NASA Astrophysics Data System (ADS)

    Singh, Gurmeet; Verma, Rahul; Wagle, Swapnil; Gadre, Shridhar R.

    2017-11-01

    Explicit hydration studies of ions require accurate estimation of interaction energies. This work explores the explicit hydration of the ammonium ion (NH4+) employing Møller-Plesset second order (MP2) perturbation theory, an accurate yet relatively less expensive correlated method. Several initial geometries of NH4+(H2O)n (n = 4 to 13) clusters are subjected to MP2 level geometry optimisation with the correlation consistent aug-cc-pVDZ (aVDZ) basis set. For large clusters (viz. n > 8), the molecular tailoring approach (MTA) is used for single point energy evaluation at the MP2/aVTZ level for the estimation of MP2 level binding energies (BEs) at the complete basis set (CBS) limit. The minimal nature of the clusters up to n ≤ 8 is confirmed by performing vibrational frequency calculations at the MP2/aVDZ level of theory, whereas for larger clusters (9 ≤ n ≤ 13) such calculations are effected via the grafted MTA (GMTA) method. The zero point energy (ZPE) corrections are done for all the isomers lying within 1 kcal/mol of the lowest energy one. The resulting frequencies in the N-H region (2900-3500 cm-1) and in the O-H stretching region (3300-3900 cm-1) are found to be in excellent agreement with the available experimental findings for 4 ≤ n ≤ 13. Furthermore, GMTA is also applied for calculating the BEs of these clusters at the coupled cluster singles and doubles with perturbative triples (CCSD(T)) level of theory with the aVDZ basis set. This work thus represents an art of the possible on contemporary multi-core computers for studying explicit molecular hydration at correlated levels of theory.
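    The molecular tailoring approach used above estimates the energy of a large system from overlapping fragments by inclusion-exclusion: fragment energies are summed and the doubly counted overlap regions are subtracted. A toy sketch with hypothetical fragment energies, not values from this work:

```python
# Inclusion-exclusion estimate of a total energy from overlapping fragments:
# E(MTA) = sum of fragment energies - sum of overlap-region energies.
# All energies (hartree) below are hypothetical placeholders.
E_fragments = [-229.41, -229.38, -229.44]   # overlapping main fragments
E_overlaps  = [-152.91, -152.94]            # doubly counted overlap regions

E_MTA = sum(E_fragments) - sum(E_overlaps)
```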

  15. Current trends in treatment of hypertension in Karachi and cost minimization possibilities.

    PubMed

    Hussain, Izhar M; Naqvi, Baqir S; Qasim, Rao M; Ali, Nasir

    2015-01-01

    This study identifies drug usage trends in Stage I hypertensive patients without any compelling indications in Karachi, deviations of current practices from evidence-based antihypertensive therapeutic guidelines, and opportunities for cost minimization. In the present study, conducted from June 2012 to August 2012, two survey sets were used: randomized stratified independent surveys were conducted among doctors and the general population, including patients, using pretested questionnaires. Sample sizes for doctors and the general population were 100 and 400, respectively. Statistical analysis was conducted with the Statistical Package for the Social Sciences (SPSS). The financial impact was also analyzed. On the basis of patients' and doctors' feedback, beta blockers and angiotensin converting enzyme inhibitors were used more frequently than other drugs. Thiazides and low-priced generics were hardly prescribed. Beta blockers were prescribed widely and considered cost effective. This trend increases cost by two to ten times. Feedback showed that therapeutic guidelines were not followed by the doctors practicing in the community and hospitals in Karachi. Thiazide diuretics were hardly used. Beta blockers were widely prescribed. High-priced market leaders or expensive branded generics were commonly prescribed. Therefore, there are great opportunities for cost minimization by using evidence-based, clinically effective and safe medicines.

  17. Solution Concepts for Distributed Decision-Making without Coordination

    NASA Technical Reports Server (NTRS)

    Beling, Peter A.; Patek, Stephen D.

    2005-01-01

    Consider a single-stage problem in which we have a group of N agents who are attempting to minimize the expected cost of their joint actions, without the benefit of communication or a pre-established protocol but with complete knowledge of the expected cost of any joint set of actions for the group. We call this situation a static coordination problem. The central issue in defining an appropriate solution concept for static coordination problems is how to deal with the fact that if the agents are faced with a set of multiple (mixed) strategies that are equally attractive in terms of cost, a failure of coordination may lead to an expected cost value that is worse than that of any of the strategies in the set. In this proposal, we describe the notion of a general coordination problem, describe initial efforts at developing a solution concept for static coordination problems, and then outline a research agenda that centers on activities that will be the basis for obtaining a complete understanding of solutions to static coordination problems.
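    A minimal numerical illustration of the coordination failure described above (the cost matrix is invented): two agents each have two actions, both coordinated outcomes are equally attractive, yet independent randomization between them yields a strictly worse expected cost.

```python
import numpy as np

# Joint cost for two agents with two actions each: coordinating on (0,0) or
# (1,1) costs 1, a mismatch costs 3 (numbers invented for illustration).
C = np.array([[1.0, 3.0],
              [3.0, 1.0]])

# Both coordinated pure strategies are equally attractive (cost 1), but if
# each agent independently randomizes 50/50 between them, coordination can
# fail and the expected cost is worse than either pure optimum:
p = np.array([0.5, 0.5])          # agent 1's mixed strategy
q = np.array([0.5, 0.5])          # agent 2's mixed strategy
expected_cost = p @ C @ q         # = 2.0 > 1.0
```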

  18. Towards topological quantum computer

    NASA Astrophysics Data System (ADS)

    Melnikov, D.; Mironov, A.; Mironov, S.; Morozov, A.; Morozov, An.

    2018-01-01

    Quantum R-matrices, the entangling deformations of non-entangling (classical) permutations, provide a distinguished basis in the space of unitary evolutions and, consequently, a natural choice for a minimal set of basic operations (universal gates) for quantum computation. They also play a special role in group theory, integrable systems and the modern theory of non-perturbative calculations in quantum field and string theory. Despite recent developments in those fields, the idea of topological quantum computing and, in particular, the use of R-matrices practically reduces to a reinterpretation of standard sets of quantum gates, and subsequently algorithms, in terms of available topological ones. In this paper we summarize a modern view on quantum R-matrix calculus and propose to look at the R-matrices acting in the space of irreducible representations, which are unitary for real-valued couplings in Chern-Simons theory, as the fundamental set of universal gates for a topological quantum computer. Such an approach calls for a more thorough investigation of the relation between topological invariants of knots and quantum algorithms.

  19. Accuracy of Lagrange-sinc functions as a basis set for electronic structure calculations of atoms and molecules

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Choi, Sunghwan; Hong, Kwangwoo; Kim, Jaewook

    2015-03-07

    We developed a self-consistent field program based on Kohn-Sham density functional theory using Lagrange-sinc functions as a basis set and examined its numerical accuracy for atoms and molecules through comparison with the results of Gaussian basis sets. The result of the Kohn-Sham inversion formula from the Lagrange-sinc basis set manifests that the pseudopotential method is essential for cost-effective calculations. The Lagrange-sinc basis set shows faster convergence of the kinetic and correlation energies of benzene as its size increases than the finite difference method does, though both share the same uniform grid. Using a scaling factor smaller than or equal to 0.226 bohr and pseudopotentials with nonlinear core correction, its accuracy for the atomization energies of the G2-1 set is comparable to all-electron complete basis set limits (mean absolute deviation ≤1 kcal/mol). The same basis set also shows small mean absolute deviations in the ionization energies, electron affinities, and static polarizabilities of atoms in the G2-1 set. In particular, the Lagrange-sinc basis set shows high accuracy with rapid convergence in describing density or orbital changes by an external electric field. Moreover, the Lagrange-sinc basis set can readily improve its accuracy toward a complete basis set limit by simply decreasing the scaling factor regardless of systems.

  20. Habitat and environment of islands: primary and supplemental island sets

    USGS Publications Warehouse

    Matalas, Nicholas C.; Grossling, Bernardo F.

    2002-01-01

    The original intent of the study was to develop a first-order synopsis of island hydrology with an integrated geologic basis on a global scale. As the study progressed, the aim was broadened to provide a framework for subsequent assessments on large regional or global scales of island resources and impacts on those resources that are derived from global changes. Fundamental to the study was the development of a comprehensive framework: a wide range of parameters that describe a set of 'saltwater' islands sufficiently large to characterize the spatial distribution of the world's islands; account for all major archipelagos; account for almost all oceanically isolated islands; and account collectively for a very large proportion of the total area of the world's islands, whereby additional islands would only marginally contribute to the representativeness and accountability of the island set. The comprehensive framework, which is referred to as the 'Primary Island Set,' is built on 122 parameters that describe 1,000 islands. To complement the investigations based on the Primary Island Set, two supplemental island sets, Set A (Other Islands, not in the Primary Island Set) and Set B (Lagoonal Atolls), are included in the study. The Primary Island Set, together with the Supplemental Island Sets A and B, provides a framework that can be used in various scientific disciplines for their island-based studies on broad regional or global scales. The study uses an informal, coherent, geophysical organization of the islands that belong to the three island sets. The organization is in the form of a global island chain, which is a particular sequential ordering of the islands referred to as the 'Alisida.' The Alisida was developed through a trial-and-error procedure by seeking to strike a balance between minimizing the length of the global chain and maximizing the chain's geophysical coherence. The fact that an objective function cannot be minimized and maximized simultaneously indicates that the Alisida is not unique. Global island chains other than the Alisida may better serve disciplines other than those of hydrology and geology.

  1. Blind compressive sensing dynamic MRI

    PubMed Central

    Lingala, Sajan Goud; Jacob, Mathews

    2013-01-01

    We propose a novel blind compressive sensing (BCS) framework to recover dynamic magnetic resonance images from undersampled measurements. This scheme models the dynamic signal as a sparse linear combination of temporal basis functions, chosen from a large dictionary. In contrast to classical compressed sensing, the BCS scheme simultaneously estimates the dictionary and the sparse coefficients from the undersampled measurements. Apart from the sparsity of the coefficients, the key difference of the BCS scheme from current low rank methods is the non-orthogonal nature of the dictionary basis functions. Since the number of degrees of freedom of the BCS model is smaller than that of the low-rank methods, it provides improved reconstructions at high acceleration rates. We formulate the reconstruction as a constrained optimization problem; the objective function is the linear combination of a data consistency term and a sparsity promoting ℓ1 prior on the coefficients. The Frobenius norm dictionary constraint is used to avoid scale ambiguity. We introduce a simple and efficient majorize-minimize algorithm, which decouples the original criterion into three simpler subproblems. An alternating minimization strategy is used, where we cycle through the minimization of the three simpler problems. This algorithm is seen to be considerably faster than approaches that alternate between sparse coding and dictionary estimation, as well as the extension of the K-SVD dictionary learning scheme. The use of the ℓ1 penalty and the Frobenius norm dictionary constraint enables the attenuation of insignificant basis functions compared to the ℓ0 norm and column norm constraint assumed in most dictionary learning algorithms; this is especially important since the number of basis functions that can be reliably estimated is restricted by the available measurements. We also observe that the proposed scheme is more robust to local minima compared to the K-SVD method, which relies on greedy sparse coding. Our phase transition experiments demonstrate that the BCS scheme provides much better recovery rates than classical Fourier-based CS schemes, while being only marginally worse than the dictionary aware setting. Since the overhead in additionally estimating the dictionary is low, this method can be very useful in dynamic MRI applications, where the signal is not sparse in known dictionaries. We demonstrate the utility of the BCS scheme in accelerating contrast enhanced dynamic data. We observe superior reconstruction performance with the BCS scheme in comparison to existing low rank and compressed sensing schemes. PMID:23542951
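    A stripped-down sketch of the alternating-minimization idea behind BCS, on a fully sampled toy problem (no Fourier undersampling): the sparse coefficients are updated by a majorize-minimize (ISTA-style) soft-thresholding step, and the dictionary by least squares followed by projection onto the Frobenius-norm ball. Sizes, the penalty lam, and iteration counts are arbitrary choices, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for the BCS model: data X ≈ U @ V with sparse coefficients U
# and a non-orthogonal dictionary V of temporal basis functions; sizes and
# the penalty lam are arbitrary choices.
n_vox, n_t, n_atoms = 60, 40, 8
V_true = rng.normal(size=(n_atoms, n_t))
U_true = rng.normal(size=(n_vox, n_atoms)) * (rng.random((n_vox, n_atoms)) < 0.3)
X = U_true @ V_true

lam = 0.1
U = np.zeros((n_vox, n_atoms))
V = rng.normal(size=(n_atoms, n_t))
V /= np.linalg.norm(V)                        # Frobenius-norm constraint

def soft(A, t):                               # soft threshold: l1 proximal map
    return np.sign(A) * np.maximum(np.abs(A) - t, 0.0)

for _ in range(200):                          # alternating minimization
    # coefficient update: one ISTA (majorize-minimize) soft-threshold step
    L = np.linalg.norm(V, 2) ** 2             # Lipschitz constant of gradient
    U = soft(U - (U @ V - X) @ V.T / L, lam / L)
    # dictionary update: least squares, then project onto the Frobenius ball
    V = np.linalg.lstsq(U, X, rcond=None)[0]
    nrm = np.linalg.norm(V)
    if nrm > 1.0:
        V /= nrm

resid = np.linalg.norm(U @ V - X) / np.linalg.norm(X)
```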

  2. Decomposability and convex structure of thermal processes

    NASA Astrophysics Data System (ADS)

    Mazurek, Paweł; Horodecki, Michał

    2018-05-01

    We present an example of a thermal process (TP) for a system of d energy levels which cannot be performed without instant access to the whole energy space. This TP is uniquely connected with a transition between some states of the system that cannot be performed without access to the whole energy space, even when approximate transitions are allowed. Pursuing the question of the decomposability of TPs into convex combinations of compositions of processes acting non-trivially on smaller subspaces, we investigate transitions within the subspace of states diagonal in the energy basis. For three-level systems, we determine the set of extremal points of these operations, as well as the minimal set of operations needed to perform an arbitrary TP, and connect the set of TPs with the thermomajorization criterion. We show that the structure of the set depends on temperature, which is associated with the fact that TPs cannot increase deterministically extractable work from a state—a conclusion that holds for an arbitrary d-level system. We also connect the decomposability problem with the detailed balance symmetry of extremal TPs.

  3. Accurate Methods for Large Molecular Systems (Preprint)

    DTIC Science & Technology

    2009-01-06

    tensor, EFP calculations are basis set dependent. The smallest recommended basis set is 6-31++G(d,p). The dependence of the computational cost of ... and second order perturbation theory (MP2) levels with the 6-31G(d,p) basis set. Additional SFM tests are presented for a small set of alpha helices using the 6-31++G(d,p) basis set. The larger 6-311++G(3df,2p) basis set is employed for creating all EFPs used for non-bonded interactions, since ...

  4. Need for reaction coordinates to ensure a complete basis set in an adiabatic representation of ion-atom collisions

    NASA Astrophysics Data System (ADS)

    Rabli, Djamal; McCarroll, Ronald

    2018-02-01

    This review surveys the different theoretical approaches used to describe inelastic and rearrangement processes in collisions involving atoms and ions. For a range of energies from a few meV up to about 1 keV, the adiabatic representation is expected to be valid and under these conditions, inelastic and rearrangement processes take place via a network of avoided crossings of the potential energy curves of the collision system. In general, such avoided crossings are finite in number. The non-adiabatic coupling, due to the breakdown of the Born-Oppenheimer separation of the electronic and nuclear variables, depends on the ratio of the electron mass to the nuclear mass terms in the total Hamiltonian. By limiting terms in the total Hamiltonian correct to first order in the electron to nuclear mass ratio, a system of reaction coordinates is found which allows for a correct description of both inelastic and rearrangement channels. The connection between the use of reaction coordinates in the quantum description and the electron translation factors of the impact parameter approach is established. A major result is that only when reaction coordinates are used is it possible to introduce the notion of a minimal basis set. Such a set must include all avoided crossings including both radial coupling and long range Coriolis coupling. But only when reaction coordinates are used can such a basis set be considered as complete. In particular, when the centre of nuclear mass is used as centre of coordinates, rather than the correct reaction coordinates, it is shown that erroneous results are obtained. A few results to illustrate this important point are presented: one concerning a simple two-state Landau-Zener type avoided crossing, the other concerning a network of multiple crossings in a typical electron capture process involving a highly charged ion with a neutral atom.
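    For the simple two-state Landau-Zener avoided crossing mentioned above, the standard textbook expression (not a result derived in this review) for the diabatic transition probability in a single passage at radial velocity v, with coupling H12 between diabatic curves H11 and H22, is

```latex
p_{\mathrm{LZ}} = \exp\left( - \frac{2\pi H_{12}^{2}}
  {\hbar \, v \, \left| \dfrac{d}{dR}\bigl(H_{11} - H_{22}\bigr) \right|} \right)
```

    so that the probability of remaining on the adiabatic curve in a single passage is 1 − p_LZ.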

  5. Measurement of tracer gas distributions using an open-path FTIR system coupled with computed tomography

    NASA Astrophysics Data System (ADS)

    Drescher, Anushka C.; Yost, Michael G.; Park, Doo Y.; Levine, Steven P.; Gadgil, Ashok J.; Fischer, Marc L.; Nazaroff, William W.

    1995-05-01

    Optical remote sensing and iterative computed tomography (CT) can be combined to measure the spatial distribution of gaseous pollutant concentrations in a plane. We have conducted chamber experiments to test this combination of techniques using an Open Path Fourier Transform Infrared Spectrometer (OP-FTIR) and a standard algebraic reconstruction technique (ART). ART was found to converge to solutions that showed excellent agreement with the ray integral concentrations measured by the FTIR but were inconsistent with simultaneously gathered point sample concentration measurements. A new CT method was developed based on (a) the superposition of bivariate Gaussians to model the concentration distribution and (b) a simulated annealing minimization routine to find the parameters of the Gaussians that resulted in the best fit to the ray integral concentration data. This new method, named smooth basis function minimization (SBFM), generated reconstructions that agreed well, both qualitatively and quantitatively, with the concentration profiles generated from point sampling. We present one set of illustrative experimental data to compare the performance of ART and SBFM.
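    The SBFM idea, parameterize the concentration map with smooth basis functions and anneal their parameters against the measured ray integrals, can be sketched as follows. The geometry (rays as row/column sums), the single isotropic Gaussian, and all numbers are simplifications invented here, not the OP-FTIR setup of the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic plane and beam geometry: the concentration map is modeled as one
# isotropic bivariate Gaussian, and "rays" are row/column sums of the grid.
n = 20
xs = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(xs, xs)

def gaussian_map(params):
    x0, y0, s, amp = params
    return amp * np.exp(-((X - x0) ** 2 + (Y - y0) ** 2) / (2 * s ** 2))

def ray_integrals(field):
    # horizontal and vertical beam paths approximated by row/column sums
    return np.concatenate([field.sum(axis=0), field.sum(axis=1)]) / n

true_params = np.array([0.6, 0.4, 0.15, 2.0])
measured = ray_integrals(gaussian_map(true_params))

def cost(params):                 # squared misfit to the ray-integral data
    return np.sum((ray_integrals(gaussian_map(params)) - measured) ** 2)

# simulated annealing over the Gaussian's parameters (x0, y0, width, amp)
params = np.array([0.5, 0.5, 0.2, 1.0])
best, best_cost = params.copy(), cost(params)
T = 1.0
for _ in range(3000):
    trial = params + rng.normal(0.0, 0.02, size=4)
    dc = cost(trial) - cost(params)
    if dc < 0 or rng.random() < np.exp(-dc / T):
        params = trial
    if cost(params) < best_cost:
        best, best_cost = params.copy(), cost(params)
    T *= 0.999                    # geometric cooling schedule
```

    With several Gaussians and realistic slanted ray paths the parameter space grows, which is where the annealing (rather than gradient) search pays off.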

  6. Quality criteria for medical device registries: best practice approaches for improving patient safety - a systematic review of international experiences.

    PubMed

    Niederländer, Charlotte Susanne; Kriza, Christine; Kolominsky-Rabas, Peter

    2017-01-01

    As the benefit of medical device registries (MDRs) depends on their content and quality, it is important to ensure that MDRs have a robust and adequate structure to fulfill their objectives. However, no requirements are specified for the design and content of MDRs. The aim of this work is to analyze different MDRs in the field of implants and to give best practice recommendations for quality criteria regarding their design and development. Areas covered: A systematic literature search performed in databases (Medline, Cochrane Library, Scopus, Embase, CRD York), selected journals and websites identified 66 articles describing either a general MDR structure or the development process of specific registries. Extracted information about MDRs served as the basis for recommendations: MDRs should deliver a minimal data set and report information about the geographical area, data collection, numbers of patients enrolled, registry staff, and security and confidentiality of data. Expert commentary: Well-structured registries are a cornerstone of the regulatory process of medical devices and a major tool for decision makers. A future goal is to establish agreed minimal data sets for different devices, overcoming national borders. By establishing clear guidelines, the outcomes as well as registry comparability can be fundamentally improved.

  7. Approaching the theoretical limit in periodic local MP2 calculations with atomic-orbital basis sets: the case of LiH.

    PubMed

    Usvyat, Denis; Civalleri, Bartolomeo; Maschio, Lorenzo; Dovesi, Roberto; Pisani, Cesare; Schütz, Martin

    2011-06-07

    The atomic orbital basis set limit is approached in periodic correlated calculations for solid LiH. The valence correlation energy is evaluated at the level of the local periodic second order Møller-Plesset perturbation theory (MP2), using basis sets of progressively increasing size, and also employing "bond"-centered basis functions in addition to the standard atom-centered ones. Extended basis sets, which contain linear dependencies, are processed only at the MP2 stage via a dual basis set scheme. The local approximation (domain) error has been consistently eliminated by expanding the orbital excitation domains. As a final result, it is demonstrated that the complete basis set limit can be reached for both HF and local MP2 periodic calculations, and a general scheme is outlined for the definition of high-quality atomic-orbital basis sets for solids. © 2011 American Institute of Physics

  8. An expanded calibration study of the explicitly correlated CCSD(T)-F12b method using large basis set standard CCSD(T) atomization energies.

    PubMed

    Feller, David; Peterson, Kirk A

    2013-08-28

    The effectiveness of the recently developed, explicitly correlated coupled cluster method CCSD(T)-F12b is examined in terms of its ability to reproduce atomization energies derived from complete basis set extrapolations of standard CCSD(T). Most of the standard method findings were obtained with aug-cc-pV7Z or aug-cc-pV8Z basis sets. For a few homonuclear diatomic molecules it was possible to push the basis set to the aug-cc-pV9Z level. F12b calculations were performed with the cc-pVnZ-F12 (n = D, T, Q) basis set sequence and were also extrapolated to the basis set limit using a Schwenke-style, parameterized formula. A systematic bias was observed in the F12b method with the (VTZ-F12/VQZ-F12) basis set combination. This bias resulted in the underestimation of reference values associated with small molecules (valence correlation energies <0.5 E_h) and an even larger overestimation of atomization energies for bigger systems. Consequently, caution should be exercised in the use of F12b for high accuracy studies. Root mean square and mean absolute deviation error metrics for this basis set combination were comparable to complete basis set values obtained with standard CCSD(T) and the aug-cc-pVDZ through aug-cc-pVQZ basis set sequence. However, the mean signed deviation was an order of magnitude larger. Problems partially due to basis set superposition error were identified with second-row compounds, which resulted in a weak performance for the smaller VDZ-F12/VTZ-F12 combination of basis sets.
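    A Schwenke-style extrapolation of the kind referenced above is a simple parameterized two-point formula; the coefficient F and the energies below are hypothetical placeholders, not fitted values from the paper.

```python
# Schwenke-style two-point extrapolation: E_CBS = (E_large - E_small) * F
# + E_small, with an empirically fitted coefficient F. The coefficient and
# the correlation energies (hartree) below are hypothetical placeholders.
def cbs_extrapolate(e_small, e_large, f):
    return (e_large - e_small) * f + e_small

e_vtz, e_vqz = -0.4512, -0.4575   # e.g. VTZ-F12 / VQZ-F12 correlation energies
e_cbs = cbs_extrapolate(e_vtz, e_vqz, f=1.36)
```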

  9. Global 21 cm Signal Extraction from Foreground and Instrumental Effects. I. Pattern Recognition Framework for Separation Using Training Sets

    NASA Astrophysics Data System (ADS)

    Tauscher, Keith; Rapetti, David; Burns, Jack O.; Switzer, Eric

    2018-02-01

    The sky-averaged (global) highly redshifted 21 cm spectrum from neutral hydrogen is expected to appear in the VHF range of ∼20–200 MHz and its spectral shape and strength are determined by the heating properties of the first stars and black holes, by the nature and duration of reionization, and by the presence or absence of exotic physics. Measurements of the global signal would therefore provide us with a wealth of astrophysical and cosmological knowledge. However, the signal has not yet been detected because it must be seen through strong foregrounds weighted by a large beam, instrumental calibration errors, and ionospheric, ground, and radio-frequency-interference effects, which we collectively refer to as “systematics.” Here, we present a signal extraction method for global signal experiments which uses Singular Value Decomposition of “training sets” to produce systematics basis functions specifically suited to each observation. Instead of requiring precise absolute knowledge of the systematics, our method effectively requires precise knowledge of how the systematics can vary. After calculating eigenmodes for the signal and systematics, we perform a weighted least-squares fit of the corresponding coefficients and select the number of modes to include by minimizing an information criterion. We compare the performance of the signal extraction when minimizing various information criteria and find that minimizing the Deviance Information Criterion most consistently yields unbiased fits. The methods used here are built into our widely applicable, publicly available Python package, pylinex, which analytically calculates constraints on signals and systematics from given data, errors, and training sets.
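    The pipeline described, SVD of a training set to obtain basis modes, a least-squares fit, and an information criterion to pick the number of modes, can be sketched in a few lines. Everything here is synthetic: the "systematics" are random cubics, and a BIC-like penalty stands in for the Deviance Information Criterion used by pylinex.

```python
import numpy as np

rng = np.random.default_rng(3)

# Training set of synthetic "systematics": 50 random cubic spectra on a
# 100-channel frequency axis. The observed signal is another cubic, so four
# eigenmodes should suffice to fit it.
n_freq, n_train = 100, 50
freqs = np.linspace(-1.0, 1.0, n_freq)
training = np.array([np.polyval(rng.normal(size=4), freqs)
                     for _ in range(n_train)]).T      # shape (n_freq, n_train)

# eigenmodes of the training set via singular value decomposition
U, s, Vt = np.linalg.svd(training, full_matrices=False)

true = np.polyval([1.5, -0.3, 0.2, 0.1], freqs)
noise_std = 0.05
data = true + rng.normal(0.0, noise_std, n_freq)

# least-squares fit for each candidate number of modes k, keeping the k
# that minimizes a BIC-like information criterion
best_k, best_ic = None, np.inf
for k in range(1, 11):
    A = U[:, :k]                                      # first k eigenmodes
    coef, *_ = np.linalg.lstsq(A, data, rcond=None)
    chi2 = np.sum(((data - A @ coef) / noise_std) ** 2)
    ic = chi2 + k * np.log(n_freq)                    # penalty on mode count
    if ic < best_ic:
        best_k, best_ic = k, ic
```

    The criterion stops adding modes once the chi-squared improvement no longer justifies the extra parameter, which is the mechanism the paper relies on to avoid fitting noise.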

  10. Optimized auxiliary basis sets for density fitted post-Hartree-Fock calculations of lanthanide containing molecules

    NASA Astrophysics Data System (ADS)

    Chmela, Jiří; Harding, Michael E.

    2018-06-01

    Optimised auxiliary basis sets for lanthanide atoms (Ce to Lu) for four basis sets of the Karlsruhe error-balanced segmented contracted def2 series (SVP, TZVP, TZVPP and QZVPP) are reported. These auxiliary basis sets enable the use of the resolution-of-the-identity (RI) approximation in post-Hartree-Fock methods, for example second-order Møller-Plesset perturbation theory (MP2) and coupled cluster (CC) theory. The auxiliary basis sets are tested on an enlarged set of about a hundred molecules, where the test criterion is the size of the RI error in MP2 calculations. Our tests also show that the same auxiliary basis sets can be used together with different effective core potentials. With these auxiliary basis sets, calculations of MP2 and CC quality can now be performed efficiently on medium-sized molecules containing lanthanides.

  11. Antigenic Competition Between an Endotoxic Adjuvant and a Protein Antigen

    PubMed Central

    Leong, Daniel L. Y.; Rudbach, Jon A.

    1971-01-01

    Antigenic competition between bovine gamma globulin (BGG) and endotoxin from a smooth strain (S-ET) and a rough (R-ET) heptoseless mutant strain of Salmonella minnesota was studied in mice. Both endotoxins acted as adjuvants for enhancing the antibody response to BGG. However, other work showed that the R-ET had minimal antigenicity, and it was used as a control for the competition studies. Antigenic competition between BGG and endotoxin as expressed by a suppression of the antibody response to BGG could not be demonstrated when varying adjuvant doses of S-ET or R-ET were injected simultaneously with a small constant dose of BGG into normal mice. However, mice presensitized with S-ET several weeks before immunization with the S-ET and BGG combination produced anti-BGG levels which were four to eightfold lower than in normal mice. Nearly complete suppression of the anti-BGG response could be obtained in presensitized mice by reducing the BGG dose 10-fold or by increasing the adjuvant dose of endotoxin. Mice pretreated with R-ET and challenged with BGG plus S-ET or R-ET showed no depression of the anti-BGG response. These and other experiments confirmed the immunological basis of the competitive effect. PMID:16557970

  12. How Many Environmental Impact Indicators Are Needed in the Evaluation of Product Life Cycles?

    PubMed

    Steinmann, Zoran J N; Schipper, Aafke M; Hauck, Mara; Huijbregts, Mark A J

    2016-04-05

    Numerous indicators are currently available for environmental impact assessments, especially in the field of Life Cycle Impact Assessment (LCIA). Because decision-making on the basis of hundreds of indicators simultaneously is unfeasible, a nonredundant key set of indicators representative of the overall environmental impact is needed. We aimed to find such a nonredundant set of indicators based on their mutual correlations. We have used Principal Component Analysis (PCA) in combination with an optimization algorithm to find an optimal set of indicators out of 135 impact indicators calculated for 976 products from the ecoinvent database. The first four principal components covered 92% of the variance in product rankings, showing the potential for indicator reduction. The same amount of variance (92%) could be covered by a minimal set of six indicators, related to climate change, ozone depletion, the combined effects of acidification and eutrophication, terrestrial ecotoxicity, marine ecotoxicity, and land use. In comparison, four commonly used resource footprints (energy, water, land, materials) together accounted for 84% of the variance in product rankings. We conclude that the plethora of environmental indicators can be reduced to a small key set, representing the major part of the variation in environmental impacts between product life cycles.
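The variance-coverage computation behind this reduction can be sketched with synthetic data (hypothetical stand-ins for the ecoinvent products and LCIA indicators; the latent-factor structure below is invented to mimic indicator redundancy):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy indicator matrix: rows = products, columns = impact indicators.
# Columns are noisy mixtures of a few latent factors, mimicking the
# redundancy among LCIA indicators.
n_products, n_factors, n_indicators = 200, 4, 30
factors = rng.standard_normal((n_products, n_factors))
mixing = rng.standard_normal((n_factors, n_indicators))
indicators = factors @ mixing + 0.1 * rng.standard_normal((n_products, n_indicators))

# PCA via SVD of the standardized matrix.
z = (indicators - indicators.mean(axis=0)) / indicators.std(axis=0)
_, s, _ = np.linalg.svd(z, full_matrices=False)
explained = np.cumsum(s**2) / np.sum(s**2)

# Number of principal components needed to cover >= 92% of the variance.
n_components = 1 + int(np.searchsorted(explained, 0.92))
```

In the paper, an optimization over the original indicators (rather than the abstract components) then selects a small concrete subset covering the same share of variance.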

  13. An autonomous payload controller for the Space Shuttle

    NASA Technical Reports Server (NTRS)

    Hudgins, J. I.

    1979-01-01

    The Autonomous Payload Control (APC) system discussed in the present paper was designed on the basis of such criteria as minimal cost of implementation, minimal space required in the flight-deck area, simple operation with verification of the results, minimal additional weight, minimal impact on Orbiter design, and minimal impact on Orbiter payload integration. In its present configuration, the APC provides a means for the Orbiter crew to control as many as 31 autonomous payloads. The avionics and human engineering aspects of the system are discussed.

  14. Ab Initio Density Fitting: Accuracy Assessment of Auxiliary Basis Sets from Cholesky Decompositions.

    PubMed

    Boström, Jonas; Aquilante, Francesco; Pedersen, Thomas Bondo; Lindh, Roland

    2009-06-09

    The accuracy of auxiliary basis sets derived by Cholesky decompositions of the electron repulsion integrals is assessed in a series of benchmarks on total ground state energies and dipole moments of a large test set of molecules. The test set includes molecules composed of atoms from the first three rows of the periodic table as well as transition metals. The accuracy of the auxiliary basis sets is tested for the 6-31G**, correlation consistent, and atomic natural orbital basis sets at the Hartree-Fock, density functional theory, and second-order Møller-Plesset levels of theory. By decreasing the decomposition threshold, a hierarchy of auxiliary basis sets is obtained with accuracies ranging from that of standard auxiliary basis sets to that of conventional integral treatments.
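The threshold hierarchy described here can be illustrated with a threshold-pivoted incomplete Cholesky factorization of a generic positive semidefinite matrix (a toy stand-in for the integral matrix; not the authors' implementation):

```python
import numpy as np

def pivoted_cholesky(M, tau):
    """Threshold-pivoted incomplete Cholesky of a symmetric PSD matrix.
    Returns L with M ~= L @ L.T; the number of columns of L (the
    analogue of the auxiliary basis size) shrinks as tau is loosened."""
    n = M.shape[0]
    d = np.diag(M).astype(float).copy()   # remaining diagonal (errors)
    L = np.zeros((n, 0))
    while d.max() > tau and L.shape[1] < n:
        p = int(np.argmax(d))                        # pivot: largest error
        col = (M[:, p] - L @ L[p]) / np.sqrt(d[p])   # new Cholesky vector
        L = np.column_stack([L, col])
        d -= col**2
        d[d < 0] = 0.0                               # guard against round-off
    return L

# Toy PSD "integral matrix" with a rapidly decaying spectrum.
rng = np.random.default_rng(2)
Q, _ = np.linalg.qr(rng.standard_normal((40, 40)))
eigvals = 2.0 ** -np.arange(40)
M = (Q * eigvals) @ Q.T

L_loose = pivoted_cholesky(M, 1e-2)   # small factor, larger error
L_tight = pivoted_cholesky(M, 1e-8)   # larger factor, near-exact
```

Tightening the threshold grows the factor (the analogue of a richer auxiliary basis) and drives the reconstruction error toward that of the conventional integral treatment.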

  15. Sparse regularization for force identification using dictionaries

    NASA Astrophysics Data System (ADS)

    Qiao, Baijie; Zhang, Xingwu; Wang, Chenxi; Zhang, Hang; Chen, Xuefeng

    2016-04-01

    The classical function expansion method based on minimizing the l2-norm of the response residual employs various basis functions to represent the unknown force. Its difficulty lies in determining the optimum number of basis functions. Considering the sparsity of the force in the time domain or in another basis space, we develop a general sparse regularization method based on minimizing the l1-norm of the coefficient vector of the basis functions. The number of basis functions is adaptively determined by minimizing the number of nonzero components in the coefficient vector during the sparse regularization process. First, according to the profile of the unknown force, the dictionary composed of basis functions is determined. Second, a sparsity convex optimization model for force identification is constructed. Third, given the transfer function and the operational response, sparse reconstruction by separable approximation (SpaRSA) is developed to solve the sparse regularization problem of force identification. Finally, experiments including identification of impact and harmonic forces are conducted on a cantilever thin plate structure to illustrate the effectiveness and applicability of SpaRSA. Besides the Dirac dictionary, three other sparse dictionaries, including Db6 wavelets, Sym4 wavelets, and cubic B-spline functions, can also accurately identify both single and double impact forces from highly noisy responses in a sparse representation frame. The discrete cosine functions can also successfully reconstruct harmonic forces, including sinusoidal, square, and triangular forces. Conversely, the traditional Tikhonov regularization method with the L-curve criterion fails to identify both the impact and harmonic forces in these cases.
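The l1-regularized formulation can be sketched on a toy impact-identification problem. Plain ISTA below is a simple stand-in for the SpaRSA solver, and the random transfer matrix is an invented placeholder rather than a real frequency response:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy setup: response = transfer matrix @ force + noise, with the force
# sparse in the identity ("Dirac") dictionary, as for impact loading.
n = 200
H = rng.standard_normal((n, n)) / np.sqrt(n)
f_true = np.zeros(n)
f_true[[40, 120]] = [3.0, -2.0]            # two impact events
y = H @ f_true + 0.01 * rng.standard_normal(n)

# ISTA for  min_f  0.5*||y - H f||^2 + lam*||f||_1
lam = 0.05
step = 1.0 / np.linalg.norm(H, 2) ** 2     # 1 / Lipschitz constant
f = np.zeros(n)
for _ in range(2000):
    f = f - step * (H.T @ (H @ f - y))                        # gradient step
    f = np.sign(f) * np.maximum(np.abs(f) - step * lam, 0.0)  # soft threshold
```

Minimizing the l1-norm drives most coefficients exactly to zero, so the number of active basis functions is selected automatically rather than fixed in advance.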

  16. Search for Minimal and Semi-Minimal Rule Sets in Incremental Learning of Context-Free and Definite Clause Grammars

    NASA Astrophysics Data System (ADS)

    Imada, Keita; Nakamura, Katsuhiko

    This paper describes recent improvements to the Synapse system for incremental learning of general context-free grammars (CFGs) and definite clause grammars (DCGs) from positive and negative sample strings. An important feature of our approach is incremental learning, which is realized by a rule generation mechanism called “bridging,” based on bottom-up parsing of positive samples and a search over rule sets. The sizes of the rule sets and the computation time depend on the search strategy. In addition to the global search, which synthesizes minimal rule sets, and the serial search, another method that synthesizes semi-optimum rule sets, we incorporate beam search into the system for synthesizing semi-minimal rule sets. The paper presents several experimental results on learning CFGs and DCGs, and we analyze the sizes of the rule sets and the computation time.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rossi, Tuomas P., E-mail: tuomas.rossi@alumni.aalto.fi; Sakko, Arto; Puska, Martti J.

    We present an approach for generating local numerical basis sets of improving accuracy for first-principles nanoplasmonics simulations within time-dependent density functional theory. The method is demonstrated for copper, silver, and gold nanoparticles that are of experimental interest but computationally demanding due to the semi-core d-electrons that affect their plasmonic response. The basis sets are constructed by augmenting numerical atomic orbital basis sets by truncated Gaussian-type orbitals generated by the completeness-optimization scheme, which is applied to the photoabsorption spectra of homoatomic metal atom dimers. We obtain basis sets of improving accuracy up to the complete basis set limit and demonstrate that the performance of the basis sets transfers to simulations of larger nanoparticles and nanoalloys as well as to calculations with various exchange-correlation functionals. This work promotes the use of the local basis set approach of controllable accuracy in first-principles nanoplasmonics simulations and beyond.

  18. Motion cues that make an impression

    PubMed Central

    Koppensteiner, Markus

    2013-01-01

    The current study presents a methodology to analyze first impressions on the basis of minimal motion information. In order to test the applicability of the approach, brief silent video clips of 40 speakers were presented to independent observers (i.e., observers who did not know the speakers), who rated them on measures of the Big Five personality traits. The body movements of the speakers were then captured by placing landmarks on the speakers' forehead, one shoulder, and the hands. Analysis revealed that observers ascribe extraversion to variations in the speakers' overall activity, emotional stability to the movements' relative velocity, and openness to variation in motion direction. Although ratings of openness and conscientiousness were related to biographical data of the speakers (i.e., measures of career progress), measures of body motion failed to provide similar results. In conclusion, analysis of motion behavior might be done on the basis of a small set of landmarks that seem to capture important parts of relevant nonverbal information. PMID:24223432

  19. Neuropharmacology of the essential oil of bergamot.

    PubMed

    Bagetta, Giacinto; Morrone, Luigi Antonio; Rombolà, Laura; Amantea, Diana; Russo, Rossella; Berliocchi, Laura; Sakurada, Shinobu; Sakurada, Tsukasa; Rotiroti, Domenicantonio; Corasaniti, Maria Tiziana

    2010-09-01

    Bergamot (Citrus bergamia, Risso) is a fruit best known for its essential oil (BEO), used in aromatherapy to minimize symptoms of stress-induced anxiety, mild mood disorders, and cancer pain, although the rational basis for such applications remains to be established. The behavioural and EEG spectrum power effects of BEO correlate well with its exocytotic and carrier-mediated release of discrete amino acids endowed with neurotransmitter function in the mammalian hippocampus, supporting the deduction that BEO is able to interfere with normal and pathological synaptic plasticity. The neuroprotection observed in the course of experimental brain ischemia and pain supports this view. In conclusion, the data yielded so far contribute to our understanding of the mode of action of this phytocomplex on nerve tissue under normal and pathological experimental conditions and provide a rational basis for the practical use of BEO in complementary medicine. The opening of a wide avenue for future research and translation into clinical settings is also envisaged. Copyright (c) 2010 Elsevier B.V. All rights reserved.

  20. Investigation of antimicrobial activities, DNA interaction, structural and spectroscopic properties of 2-chloro-6-(trifluoromethyl)pyridine

    NASA Astrophysics Data System (ADS)

    Evecen, Meryem; Kara, Mehmet; Idil, Onder; Tanak, Hasan

    2017-06-01

    2-Chloro-6-(trifluoromethyl)pyridine has been characterized by FT-IR, 1H, and 13C NMR experiments. FT-IR spectra of the molecule have been recorded in the 4000-400 cm-1 region. The molecular structural parameters and vibrational frequencies were computed using the HF and DFT (B3LYP, B3PW91) methods with the 6-31+G(d,p) and 6-311++G(d,p) basis sets. 1H and 13C NMR Gauge-Including Atomic Orbital (GIAO) chemical shifts of the compound were calculated using the density functional method (B3LYP) with the 6-311++G(d,p) basis set. The vibrational wavenumbers and chemical shifts were compared with the experimental data of the compound. Using the TD-DFT methodology, electronic absorption spectra of the compound have been computed. Besides, solvent effects on the excitation energies and chemical shifts were evaluated using the integral equation formalism of the polarisable continuum model (IEF-PCM). Mulliken charges, the molecular electrostatic potential (MEP), natural bond orbital (NBO) analysis, and thermodynamic properties were also obtained from the DFT calculations. In addition, the antimicrobial activities were tested by using the minimal inhibitory concentration (MIC) method, and the effect of the molecule on pBR322 plasmid DNA was monitored by agarose gel electrophoresis experiments.

  1. Toward the International Classification of Functioning, Disability and Health (ICF) Rehabilitation Set: A Minimal Generic Set of Domains for Rehabilitation as a Health Strategy.

    PubMed

    Prodinger, Birgit; Cieza, Alarcos; Oberhauser, Cornelia; Bickenbach, Jerome; Üstün, Tevfik Bedirhan; Chatterji, Somnath; Stucki, Gerold

    2016-06-01

    To develop a comprehensive set of the International Classification of Functioning, Disability and Health (ICF) categories as a minimal standard for reporting and assessing functioning and disability in clinical populations along the continuum of care. The specific aims were to specify the domains of functioning recommended for an ICF Rehabilitation Set and to identify a minimal set of environmental factors (EFs) to be used alongside the ICF Rehabilitation Set when describing disability across individuals and populations with various health conditions. Secondary analysis of existing data sets using regression methods (Random Forests and Group Lasso regression) and expert consultations. Along the continuum of care, including acute, early postacute, and long-term and community rehabilitation settings. Persons (N=9863) with various health conditions participated in primary studies. The number of respondents for whom the dependent variable data were available and used in this analysis was 9264. Not applicable. For regression analyses, self-reported general health was used as a dependent variable. The ICF categories from the functioning component and the EF component were used as independent variables for the development of the ICF Rehabilitation Set and the minimal set of EFs, respectively. Thirty ICF categories, to be complemented with 12 EFs, were identified as relevant to the identified ICF sets. The ICF Rehabilitation Set consists of 9 ICF categories from the component body functions and 21 from the component activities and participation. The minimal set of EFs contains 12 categories spanning all chapters of the EF component of the ICF. The identified sets serve as minimal generic sets of aspects of functioning in clinical populations for reporting data within and across health conditions, time, clinical settings including rehabilitation, and countries. These sets present a reference framework for harmonizing existing information on disability across general and clinical populations. Copyright © 2016 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.

  2. A template-based approach for parallel hexahedral two-refinement

    DOE PAGES

    Owen, Steven J.; Shih, Ryan M.; Ernst, Corey D.

    2016-10-17

    Here, we provide a template-based approach for generating locally refined all-hex meshes. We focus specifically on refinement of initially structured grids utilizing a 2-refinement approach where uniformly refined hexes are subdivided into eight child elements. The refinement algorithm consists of identifying marked nodes that are used as the basis for a set of four simple refinement templates. The target application for 2-refinement is a parallel grid-based all-hex meshing tool for high performance computing in a distributed environment. The result is a parallel consistent locally refined mesh requiring minimal communication, where the minimum mesh quality is greater than a scaled Jacobian of 0.3 prior to smoothing.

  4. A structured policy review of the principles of professional self-regulation.

    PubMed

    Benton, D C; González-Jurado, M A; Beneit-Montesinos, J V

    2013-03-01

    The International Council of Nurses (ICN) has, for many years, based its work on professional self-regulation on a set of 12 principles. These principles are research based and were identified nearly three decades ago. ICN has conducted a number of reviews of the principles; however, changes have been minimal. In the past 5-10 years, a number of authors and governments, often as part of the review of regulatory systems, have started to propose principles to guide the way regulatory frameworks are designed and implemented. These principles vary in number and content. This study examines the current policy literature on principle-based regulation and compares this with the set of principles advocated by the ICN. A systematic search of the literature on principle-based regulation is used as the basis for a qualitative thematic analysis to compare and contrast the 12 principles of self-regulation with more recently published work. A mapping of terms based on a detailed description of the principles used in the various research and policy documents was generated. This mapping forms the basis of a critique of the current ICN principles. Gaps in the principles of professional self-regulation advocated by the ICN were identified. A revised and extended set of 13 principles is needed if contemporary developments in the field of regulatory frameworks are to be accommodated. These revised principles should be considered for adoption by the ICN to underpin their advocacy work on professional self-regulation. © 2013 The Authors. International Nursing Review © 2013 International Council of Nurses.

  5. An electroweak basis for neutrinoless double β decay

    DOE PAGES

    Graesser, Michael L.

    2017-08-23

    Here, a discovery of neutrinoless double-β decay would be profound, providing the first direct experimental evidence of ΔL = 2 lepton number violating processes. While a natural explanation is provided by an effective Majorana neutrino mass, other new physics interpretations should be carefully evaluated. At low energies such new physics could manifest itself in the form of color and SU(2)L × U(1)Y invariant higher dimension operators. Here we determine a complete set of electroweak invariant dimension-9 operators, and our analysis supersedes those that only impose U(1)em invariance. Imposing electroweak invariance implies: 1) a significantly reduced set of leading order operators compared to only imposing U(1)em invariance; and 2) other collider signatures. Prior to imposing electroweak invariance we find a minimal basis of 24 dimension-9 operators, which is reduced to 11 electroweak invariant operators at leading order in the expansion in the Higgs vacuum expectation value. We set up a systematic analysis of the hadronic realization of the 4-quark operators using chiral perturbation theory, and apply it to determine which of these operators have long-distance pion enhancements at leading order in the chiral expansion. We also find at dimension-11 and dimension-13 the electroweak invariant operators that after electroweak symmetry breaking produce the remaining ΔL = 2 operators that would appear at dimension-9 if only U(1)em is imposed.

  6. Effective empirical corrections for basis set superposition error in the def2-SVPD basis: gCP and DFT-C

    NASA Astrophysics Data System (ADS)

    Witte, Jonathon; Neaton, Jeffrey B.; Head-Gordon, Martin

    2017-06-01

    With the aim of mitigating the basis set error in density functional theory (DFT) calculations employing local basis sets, we herein develop two empirical corrections for basis set superposition error (BSSE) in the def2-SVPD basis, a basis which—when stripped of BSSE—is capable of providing near-complete-basis DFT results for non-covalent interactions. Specifically, we adapt the existing pairwise geometrical counterpoise (gCP) approach to the def2-SVPD basis, and we develop a beyond-pairwise approach, DFT-C, which we parameterize across a small set of intermolecular interactions. Both gCP and DFT-C are evaluated against the traditional Boys-Bernardi counterpoise correction across a set of 3402 non-covalent binding energies and isomerization energies. We find that the DFT-C method represents a significant improvement over gCP, particularly for non-covalently-interacting molecular clusters. Moreover, DFT-C is transferable among density functionals and can be combined with existing functionals—such as B97M-V—to recover large-basis results at a fraction of the cost.
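For reference, the Boys-Bernardi counterpoise benchmark used here evaluates each monomer in the full dimer basis; in standard notation (textbook form, not taken from the paper):

```latex
\Delta E_{\mathrm{int}}^{\mathrm{CP}} = E_{AB}^{AB} - E_{A}^{AB} - E_{B}^{AB},
\qquad
E_{\mathrm{BSSE}} = \bigl(E_{A}^{AB} - E_{A}^{A}\bigr) + \bigl(E_{B}^{AB} - E_{B}^{B}\bigr),
```

where subscripts label the system and superscripts the basis in which it is computed; gCP and DFT-C aim to approximate this correction at negligible cost.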

  7. Correlation between the norm and the geometry of minimal networks

    NASA Astrophysics Data System (ADS)

    Laut, I. L.

    2017-05-01

    The paper is concerned with the inverse problem of the minimal Steiner network problem in a normed linear space. Namely, given a normed space in which all minimal networks are known for any finite point set, the problem is to describe all the norms on this space for which the minimal networks are the same as for the original norm. We survey the available results and prove that in the plane a rotund differentiable norm determines a distinctive set of minimal Steiner networks. In a two-dimensional space with rotund differentiable norm the coordinates of interior vertices of a nondegenerate minimal parametric network are shown to vary continuously under small deformations of the boundary set, and the turn direction of the network is determined. Bibliography: 15 titles.

  8. Polarization functions for the modified m6-31G basis sets for atoms Ga through Kr.

    PubMed

    Mitin, Alexander V

    2013-09-05

    The 2df polarization functions for the modified m6-31G basis sets of the third-row atoms Ga through Kr (Int J Quantum Chem, 2007, 107, 3028; Int J Quantum Chem, 2009, 109, 1158) are proposed. The performances of the m6-31G, m6-31G(d,p), and m6-31G(2df,p) basis sets were examined in molecular calculations carried out with the density functional theory (DFT) method using the B3LYP hybrid functional, second-order Møller-Plesset perturbation theory (MP2), and the quadratic configuration interaction method with single and double substitutions, and were compared with those for the known 6-31G basis sets as well as with the similar 641 and 6-311G basis sets with and without polarization functions. The results show that the m6-31G, m6-31G(d,p), and m6-31G(2df,p) basis sets perform better than the known 6-31G, 6-31G(d,p), and 6-31G(2df,p) basis sets. These improvements stem mainly from better approximations of the electrons belonging to different atomic shells in the modified basis sets. The applicability of the modified basis sets in thermochemical calculations is also discussed. © 2013 Wiley Periodicals, Inc.

  9. Calculation of wave-functions with frozen orbitals in mixed quantum mechanics/molecular mechanics methods. II. Application of the local basis equation.

    PubMed

    Ferenczy, György G

    2013-04-05

    The application of the local basis equation (Ferenczy and Adams, J. Chem. Phys. 2009, 130, 134108) in mixed quantum mechanics/molecular mechanics (QM/MM) and quantum mechanics/quantum mechanics (QM/QM) methods is investigated. This equation is suitable for deriving local basis nonorthogonal orbitals that minimize the energy of the system, and it exhibits good convergence properties in a self-consistent field solution. These features make the equation appropriate for use in mixed QM/MM and QM/QM methods to optimize orbitals in the field of frozen localized orbitals connecting the subsystems. Calculations performed for several properties in diverse systems show that the method is robust with various choices of the frozen orbitals and frontier atom properties. With appropriate basis set assignment, it gives results equivalent to those of a related approach [G. G. Ferenczy, previous paper in this issue] using the Huzinaga equation. Thus, the local basis equation can be used in mixed QM/MM methods with small quantum subsystems to calculate properties in good agreement with reference Hartree-Fock-Roothaan results. It is shown that bond charges are not necessary when the local basis equation is applied, although they are required for the self-consistent field solution of the Huzinaga equation based method. Conversely, deformation of the wave-function near the boundary is observed without bond charges, and this has a significant effect on deprotonation energies but a less pronounced effect when the total charge of the system is conserved. The local basis equation can also be used to define a two-layer quantum system with nonorthogonal localized orbitals surrounding the central delocalized quantum subsystem. Copyright © 2013 Wiley Periodicals, Inc.

  10. Clutter and target discrimination in forward-looking ground penetrating radar using sparse structured basis pursuits

    NASA Astrophysics Data System (ADS)

    Camilo, Joseph A.; Malof, Jordan M.; Torrione, Peter A.; Collins, Leslie M.; Morton, Kenneth D.

    2015-05-01

    Forward-looking ground penetrating radar (FLGPR) is a remote sensing modality that has recently been investigated for buried threat detection. FLGPR offers greater standoff than downward-looking modalities such as electromagnetic induction and downward-looking GPR, but it suffers from high false alarm rates due to surface and ground clutter. A stepped-frequency FLGPR system consists of multiple radars with varying polarizations and bands, each of which interacts differently with subsurface materials and therefore might be able to discriminate clutter from true buried targets. However, it is unclear which combinations of bands and polarizations are most useful for discrimination or how to fuse them. This work applies sparse structured basis pursuit, a supervised statistical model that searches for sets of bands that are collectively effective for discriminating clutter from targets. The algorithm works by minimizing the number of selected items in a dictionary of signals; in this case the separate bands and polarizations make up the dictionary elements. A structured basis pursuit algorithm is employed to gather modes into groups so that whole polarizations or sensors can be eliminated. The approach is applied to a large collection of FLGPR data around emplaced targets and non-target clutter. The results show that sparse structured basis pursuit outperforms a conventional CFAR anomaly detector while also pruning out unnecessary bands of the FLGPR sensor.
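The structured-sparsity idea, selecting or eliminating whole groups of dictionary elements (here, whole bands or polarizations), can be sketched with a group-lasso proximal gradient loop on invented placeholder data:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy dictionary whose columns come in groups (one group per "band");
# only some groups carry signal, mimicking informative radar bands.
n_obs, group_size, n_groups = 300, 5, 8
X = rng.standard_normal((n_obs, group_size * n_groups))
w_true = np.zeros(group_size * n_groups)
w_true[0:5] = 1.0       # group 0 active
w_true[15:20] = -0.5    # group 3 active
y = X @ w_true + 0.05 * rng.standard_normal(n_obs)

# Proximal gradient for the group lasso:
#   min_w  0.5*||y - X w||^2 + lam * sum_g ||w_g||_2
lam = 100.0
step = 1.0 / np.linalg.norm(X, 2) ** 2
w = np.zeros(X.shape[1])
for _ in range(1000):
    w = w - step * (X.T @ (X @ w - y))       # gradient step
    for g in range(n_groups):                # group-wise soft threshold
        sl = slice(g * group_size, (g + 1) * group_size)
        norm = np.linalg.norm(w[sl])
        if norm > 0:
            w[sl] *= max(0.0, 1.0 - step * lam / norm)

active_groups = {g for g in range(n_groups)
                 if np.linalg.norm(w[g * group_size:(g + 1) * group_size]) > 0.3}
```

Because the penalty acts on whole-group norms, inactive groups are driven exactly to zero, which is the mechanism by which entire bands or polarizations can be pruned.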

  11. Neural basis of postural instability identified by VTC and EEG

    PubMed Central

    Cao, Cheng; Jaiswal, Niharika; Newell, Karl M.

    2010-01-01

    In this study, we investigated the neural basis of virtual time to contact (VTC) and the hypothesis that VTC provides predictive information about future postural instability. A novel approach was developed to differentiate the stable pre-falling and transition-to-instability stages within a single postural trial while a subject performed a challenging single-leg stance with eyes closed. Specifically, we utilized wavelet transform and stage segmentation algorithms with the VTC time series as input. The VTC time series was time-locked with multichannel (n = 64) EEG signals to examine its underlying neural substrates. To identify the focal sources of the neural substrates of VTC, a two-step approach was designed combining independent component analysis (ICA) and low-resolution tomography (LORETA) of the multichannel EEG. There were two major findings: (1) a significant increase of VTC minimal values (along with enhanced variability of VTC) was observed during the transition-to-instability stage with progression to the ultimate loss of balance and falling; and (2) these VTC dynamics were associated with pronounced modulation of EEG predominantly within the theta, alpha, and gamma frequency bands. The sources of this EEG modulation were identified at the anterior cingulate cortex (ACC) and the junction of the precuneus and parietal lobe, as well as at the occipital cortex. The findings support the hypothesis that the systematic increase of minimal values of VTC, concomitant with modulation of EEG signals at the frontal-central and parietal-occipital areas, serves collectively to predict future instability in posture. PMID:19655130

  12. Self-consistent implementation of meta-GGA functionals for the ONETEP linear-scaling electronic structure package.

    PubMed

    Womack, James C; Mardirossian, Narbe; Head-Gordon, Martin; Skylaris, Chris-Kriton

    2016-11-28

    Accurate and computationally efficient exchange-correlation functionals are critical to the successful application of linear-scaling density functional theory (DFT). Local and semi-local functionals of the density are naturally compatible with linear-scaling approaches, having a general form which assumes the locality of electronic interactions and which can be efficiently evaluated by numerical quadrature. Presently, the most sophisticated and flexible semi-local functionals are members of the meta-generalized-gradient approximation (meta-GGA) family, and depend upon the kinetic energy density, τ, in addition to the charge density and its gradient. In order to extend the theoretical and computational advantages of τ-dependent meta-GGA functionals to large-scale DFT calculations on thousands of atoms, we have implemented support for τ-dependent meta-GGA functionals in the ONETEP program. In this paper we lay out the theoretical innovations necessary to implement τ-dependent meta-GGA functionals within ONETEP's linear-scaling formalism. We present expressions for the gradient of the τ-dependent exchange-correlation energy, necessary for direct energy minimization. We also derive the forms of the τ-dependent exchange-correlation potential and kinetic energy density in terms of the strictly localized, self-consistently optimized orbitals used by ONETEP. To validate the numerical accuracy of our self-consistent meta-GGA implementation, we performed calculations using the B97M-V and PKZB meta-GGAs on a variety of small molecules. Using only a minimal basis set of self-consistently optimized local orbitals, we obtain energies in excellent agreement with large basis set calculations performed using other codes. Finally, to establish the linear-scaling computational cost and applicability of our approach to large-scale calculations, we present the outcome of self-consistent meta-GGA calculations on amyloid fibrils of increasing size, up to tens of thousands of atoms.
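For reference, the kinetic energy density on which these meta-GGA functionals depend is the standard orbital quantity (in ONETEP it is re-expressed in terms of the optimized local orbitals):

```latex
\tau(\mathbf{r}) = \frac{1}{2} \sum_{i}^{\mathrm{occ}} \bigl| \nabla \psi_i(\mathbf{r}) \bigr|^2 ,
```

with the sum running over the occupied Kohn-Sham orbitals \psi_i.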

  13. Self-consistent implementation of meta-GGA functionals for the ONETEP linear-scaling electronic structure package

    NASA Astrophysics Data System (ADS)

    Womack, James C.; Mardirossian, Narbe; Head-Gordon, Martin; Skylaris, Chris-Kriton

    2016-11-01

    Accurate and computationally efficient exchange-correlation functionals are critical to the successful application of linear-scaling density functional theory (DFT). Local and semi-local functionals of the density are naturally compatible with linear-scaling approaches, having a general form which assumes the locality of electronic interactions and which can be efficiently evaluated by numerical quadrature. Presently, the most sophisticated and flexible semi-local functionals are members of the meta-generalized-gradient approximation (meta-GGA) family, and depend upon the kinetic energy density, τ, in addition to the charge density and its gradient. In order to extend the theoretical and computational advantages of τ-dependent meta-GGA functionals to large-scale DFT calculations on thousands of atoms, we have implemented support for τ-dependent meta-GGA functionals in the ONETEP program. In this paper we lay out the theoretical innovations necessary to implement τ-dependent meta-GGA functionals within ONETEP's linear-scaling formalism. We present expressions for the gradient of the τ-dependent exchange-correlation energy, necessary for direct energy minimization. We also derive the forms of the τ-dependent exchange-correlation potential and kinetic energy density in terms of the strictly localized, self-consistently optimized orbitals used by ONETEP. To validate the numerical accuracy of our self-consistent meta-GGA implementation, we performed calculations using the B97M-V and PKZB meta-GGAs on a variety of small molecules. Using only a minimal basis set of self-consistently optimized local orbitals, we obtain energies in excellent agreement with large basis set calculations performed using other codes. Finally, to establish the linear-scaling computational cost and applicability of our approach to large-scale calculations, we present the outcome of self-consistent meta-GGA calculations on amyloid fibrils of increasing size, up to tens of thousands of atoms.

  14. Current trends in treatment of obesity in Karachi and possibilities of cost minimization.

    PubMed

    Hussain, Mirza Izhar; Naqvi, Baqir Shyum

    2015-03-01

    Our study identifies drug usage trends among overweight and obese patients without any compelling indications in Karachi, looks for deviations of current practice from evidence-based antihypertensive therapeutic guidelines, and identifies not only cost-minimization opportunities but also communication strategies to improve patients' awareness of and compliance with therapy. In the present study, two survey sets were used. Randomized, stratified, independent surveys were conducted among hospital doctors and family physicians (general practitioners) using pretested questionnaires. The sample size was 100. Statistical analysis was conducted with the Statistical Package for the Social Sciences (SPSS). Opportunities for cost minimization were also analyzed. On the basis of doctors' feedback, preference is given to non-pharmacologic management of obesity. A mass media campaign was recommended to increase patients' awareness, and patient education, along with strengthening family support systems, was recommended for better compliance with doctors' advice. Local therapeutic guidelines for weight reduction were not found. Feedback showed that global therapeutic guidelines were followed by doctors practicing in the community and in hospitals in Karachi. However, high-priced branded drugs were used instead of low-priced generic therapeutic equivalents. Patient education is required for better awareness and improved compliance. The doctors were found to prefer brand leaders instead of low-cost options. This trend increases the cost of therapy by 0.59 to 4.17 times. Therefore, there are great opportunities for cost minimization by using evidence-based, clinically effective and safe medicines.

  15. Derivation of a formula for the resonance integral for a nonorthogonal basis set

    PubMed Central

    Yim, Yung-Chang; Eyring, Henry

    1981-01-01

    In a self-consistent field calculation, a formula for the off-diagonal matrix elements of the core Hamiltonian is derived for a nonorthogonal basis set by a polyatomic approach. A set of parameters is then introduced for the repulsion integral formula of Mataga-Nishimoto to fit the experimental data. The matrix elements computed for the nonorthogonal basis set in the π-electron approximation are transformed to those for an orthogonal basis set by the Löwdin symmetrical orthogonalization. PMID:16593009
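    The Löwdin transformation mentioned above has a compact linear-algebra form: given the overlap matrix S of the nonorthogonal basis, matrix elements H map to S^(-1/2) H S^(-1/2) in the orthogonalized basis. A minimal NumPy sketch (the 2x2 matrices are illustrative toys, not the paper's π-electron parameters):

```python
import numpy as np

def lowdin_orthogonalize(H, S):
    """Transform a matrix H from a nonorthogonal basis with overlap S
    to the Lowdin-orthogonalized basis: H' = S^(-1/2) H S^(-1/2)."""
    # Eigendecomposition of the symmetric positive-definite overlap matrix
    vals, vecs = np.linalg.eigh(S)
    S_inv_sqrt = vecs @ np.diag(vals ** -0.5) @ vecs.T
    return S_inv_sqrt @ H @ S_inv_sqrt

# Toy 2x2 example: two overlapping basis functions
S = np.array([[1.0, 0.4], [0.4, 1.0]])
H = np.array([[-1.0, -0.3], [-0.3, -0.8]])
H_orth = lowdin_orthogonalize(H, S)
# Transforming the overlap itself yields the identity (orthonormal basis)
S_orth = lowdin_orthogonalize(S, S)
```

    Because the transformation is symmetric, the orthogonalized functions stay as close as possible (in a least-squares sense) to the original basis functions.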

  16. Developing a provisional, international minimal dataset for Juvenile Dermatomyositis: for use in clinical practice to inform research.

    PubMed

    McCann, Liza J; Arnold, Katie; Pilkington, Clarissa A; Huber, Adam M; Ravelli, Angelo; Beard, Laura; Beresford, Michael W; Wedderburn, Lucy R

    2014-01-01

    Juvenile dermatomyositis (JDM) is a rare but severe autoimmune inflammatory myositis of childhood. International collaboration is essential in order to undertake clinical trials, understand the disease and improve long-term outcome. The aim of this study was to propose from existing collaborative initiatives a preliminary minimal dataset for JDM. This will form the basis of the future development of an international consensus-approved minimum core dataset to be used both in clinical care and inform research, allowing integration of data between centres. A working group of internationally-representative JDM experts was formed to develop a provisional minimal dataset. Clinical and laboratory variables contained within current national and international collaborative databases of patients with idiopathic inflammatory myopathies were scrutinised. Judgements were informed by published literature and a more detailed analysis of the Juvenile Dermatomyositis Cohort Biomarker Study and Repository, UK and Ireland. A provisional minimal JDM dataset has been produced, with an associated glossary of definitions. The provisional minimal dataset will request information at time of patient diagnosis and during on-going prospective follow up. At time of patient diagnosis, information will be requested on patient demographics, diagnostic criteria and treatments given prior to diagnosis. During on-going prospective follow-up, variables will include the presence of active muscle or skin disease, major organ involvement or constitutional symptoms, investigations, treatment, physician global assessments and patient reported outcome measures. An internationally agreed minimal dataset has the potential to significantly enhance collaboration, allow effective communication between groups, provide a minimal standard of care and enable analysis of the largest possible number of JDM patients to provide a greater understanding of this disease. 
This preliminary dataset can now be developed into a consensus-approved minimum core dataset and tested in a wider setting with the aim of achieving international agreement.

  17. Developing a provisional, international Minimal Dataset for Juvenile Dermatomyositis: for use in clinical practice to inform research

    PubMed Central

    2014-01-01

    Background Juvenile dermatomyositis (JDM) is a rare but severe autoimmune inflammatory myositis of childhood. International collaboration is essential in order to undertake clinical trials, understand the disease and improve long-term outcome. The aim of this study was to propose from existing collaborative initiatives a preliminary minimal dataset for JDM. This will form the basis of the future development of an international consensus-approved minimum core dataset to be used both in clinical care and inform research, allowing integration of data between centres. Methods A working group of internationally-representative JDM experts was formed to develop a provisional minimal dataset. Clinical and laboratory variables contained within current national and international collaborative databases of patients with idiopathic inflammatory myopathies were scrutinised. Judgements were informed by published literature and a more detailed analysis of the Juvenile Dermatomyositis Cohort Biomarker Study and Repository, UK and Ireland. Results A provisional minimal JDM dataset has been produced, with an associated glossary of definitions. The provisional minimal dataset will request information at time of patient diagnosis and during on-going prospective follow up. At time of patient diagnosis, information will be requested on patient demographics, diagnostic criteria and treatments given prior to diagnosis. During on-going prospective follow-up, variables will include the presence of active muscle or skin disease, major organ involvement or constitutional symptoms, investigations, treatment, physician global assessments and patient reported outcome measures. Conclusions An internationally agreed minimal dataset has the potential to significantly enhance collaboration, allow effective communication between groups, provide a minimal standard of care and enable analysis of the largest possible number of JDM patients to provide a greater understanding of this disease. 
This preliminary dataset can now be developed into a consensus-approved minimum core dataset and tested in a wider setting with the aim of achieving international agreement. PMID:25075205

  18. Many Denjoy minimal sets for monotone recurrence relations

    NASA Astrophysics Data System (ADS)

    Wang, Ya-Nan; Qin, Wen-Xin

    2014-09-01

    We extend Mather's work (1985 Comment. Math. Helv. 60 508-57) to high-dimensional cylinder maps defined by monotone recurrence relations, e.g. the generalized Frenkel-Kontorova model with finite range interactions. We construct uncountably many Denjoy minimal sets provided that the Birkhoff minimizers with some irrational rotation number ω do not form a foliation.

  19. Computer-Aided Systems Engineering for Flight Research Projects Using a Workgroup Database

    NASA Technical Reports Server (NTRS)

    Mizukami, Masahi

    2004-01-01

    An online systems engineering tool for flight research projects has been developed through the use of a workgroup database. Capabilities are implemented for typical flight research systems engineering needs in document library, configuration control, hazard analysis, hardware database, requirements management, action item tracking, project team information, and technical performance metrics. Repetitive tasks are automated to reduce workload and errors. Current data and documents are instantly available online and can be worked on collaboratively. Existing forms and conventional processes are used, rather than inventing or changing processes to fit the tool. An integrated tool set offers advantages by automatically cross-referencing data, minimizing redundant data entry, and reducing the number of programs that must be learned. With a simplified approach, significant improvements are attained over existing capabilities for minimal cost. By using a workgroup-level database platform, personnel most directly involved in the project can develop, modify, and maintain the system, thereby saving time and money. As a pilot project, the system has been used to support an in-house flight experiment. Options are proposed for developing and deploying this type of tool on a more extensive basis.

  20. Approximations to complete basis set-extrapolated, highly correlated non-covalent interaction energies.

    PubMed

    Mackie, Iain D; DiLabio, Gino A

    2011-10-07

    The first-principles calculation of non-covalent (particularly dispersion) interactions between molecules is a considerable challenge. In this work we studied the binding energies for ten small non-covalently bonded dimers with several combinations of correlation methods (MP2, coupled cluster with single and double excitations (CCSD), and CCSD with perturbative triples (CCSD(T))), correlation-consistent basis sets (aug-cc-pVXZ, X = D, T, Q), two-point complete basis set energy extrapolations, and counterpoise corrections. For this work, complete basis set results were estimated from averaged counterpoise and non-counterpoise-corrected CCSD(T) binding energies obtained from extrapolations with aug-cc-pVQZ and aug-cc-pVTZ basis sets. It is demonstrated that, in almost all cases, binding energies converge more rapidly to the basis set limit by averaging the counterpoise and non-counterpoise corrected values than by using either counterpoise or non-counterpoise methods alone. Examination of the effect of basis set size and electron correlation shows that the triples contribution to the CCSD(T) binding energies is fairly constant with the basis set size, with a slight underestimation with CCSD(T)/aug-cc-pVDZ compared to the value at the (estimated) complete basis set limit, and that contributions to the binding energies obtained by MP2 generally overestimate the analogous CCSD(T) contributions. Taking these factors together, we conclude that the binding energies for non-covalently bonded systems can be accurately determined using a composite method that combines CCSD(T)/aug-cc-pVDZ with energy corrections obtained using basis set extrapolated MP2 (utilizing aug-cc-pVQZ and aug-cc-pVTZ basis sets), if all of the components are obtained by averaging the counterpoise and non-counterpoise energies. With such an approach, binding energies for the set of ten dimers are predicted with a mean absolute deviation of 0.02 kcal/mol, a maximum absolute deviation of 0.05 kcal/mol, and a mean percent absolute deviation of only 1.7%, relative to the (estimated) complete basis set CCSD(T) results. Application of this composite approach to an additional set of eight dimers gave binding energies to within 1% of previously published high-level data. It is also shown that binding within parallel and parallel-crossed conformations of naphthalene dimer is predicted by the composite approach to be 9% greater than that previously reported in the literature. The ability of some recently developed dispersion-corrected density-functional theory methods to predict the binding energies of the set of ten small dimers was also examined. © 2011 American Institute of Physics.
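    The extrapolation-plus-averaging scheme described above can be sketched as follows. The helper assumes the standard Helgaker-style X^(-3) two-point complete-basis-set extrapolation with cardinal numbers X = 3 (aug-cc-pVTZ) and X = 4 (aug-cc-pVQZ); the paper's exact extrapolation formula may differ:

```python
def cbs_extrapolate(e_x, e_y, x=3, y=4):
    """Two-point X^-3 (Helgaker-style) extrapolation of correlation-type
    energies computed at cardinal numbers x and y (e.g. aug-cc-pVTZ/QZ)."""
    return (y ** 3 * e_y - x ** 3 * e_x) / (y ** 3 - x ** 3)

def averaged_binding_energy(e_cp_x, e_cp_y, e_nocp_x, e_nocp_y):
    """Extrapolate the counterpoise-corrected and uncorrected binding
    energies separately, then average them, as advocated above."""
    e_cp = cbs_extrapolate(e_cp_x, e_cp_y)
    e_nocp = cbs_extrapolate(e_nocp_x, e_nocp_y)
    return 0.5 * (e_cp + e_nocp)
```

    Averaging works because counterpoise correction tends to overshoot the basis set limit from one side while the uncorrected energy approaches it from the other.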

  1. Communication: A novel implementation to compute MP2 correlation energies without basis set superposition errors and complete basis set extrapolation.

    PubMed

    Dixit, Anant; Claudot, Julien; Lebègue, Sébastien; Rocca, Dario

    2017-06-07

    By using a formulation based on the dynamical polarizability, we propose a novel implementation of second-order Møller-Plesset perturbation (MP2) theory within a plane wave (PW) basis set. Because of the intrinsic properties of PWs, this method is not affected by basis set superposition errors. Additionally, results are converged without relying on complete basis set extrapolation techniques; this is achieved by using the eigenvectors of the static polarizability as an auxiliary basis set to compactly and accurately represent the response functions involved in the MP2 equations. Summations over the large number of virtual states are avoided by using a formalism inspired by density functional perturbation theory, and the Lanczos algorithm is used to include dynamical effects. To demonstrate this method, applications to three weakly interacting dimers are presented.

  2. Planetary Transmission Diagnostics

    NASA Technical Reports Server (NTRS)

    Lewicki, David G. (Technical Monitor); Samuel, Paul D.; Conroy, Joseph K.; Pines, Darryll J.

    2004-01-01

    This report presents a methodology for detecting and diagnosing gear faults in the planetary stage of a helicopter transmission. This diagnostic technique is based on the constrained adaptive lifting algorithm. The lifting scheme, developed by Wim Sweldens of Bell Labs, is a time domain, prediction-error realization of the wavelet transform that allows for greater flexibility in the construction of wavelet bases. Classic lifting analyzes a given signal using wavelets derived from a single fundamental basis function. A number of researchers have proposed techniques for adding adaptivity to the lifting scheme, allowing the transform to choose from a set of fundamental bases the basis that best fits the signal. This characteristic is desirable for gear diagnostics as it allows the technique to tailor itself to a specific transmission by selecting a set of wavelets that best represent vibration signals obtained while the gearbox is operating under healthy-state conditions. However, constraints on certain basis characteristics are necessary to enhance the detection of local wave-form changes caused by certain types of gear damage. The proposed methodology analyzes individual tooth-mesh waveforms from a healthy-state gearbox vibration signal that was generated using the vibration separation (synchronous signal-averaging) algorithm. Each waveform is separated into analysis domains using zeros of its slope and curvature. The bases selected in each analysis domain are chosen to minimize the prediction error, and constrained to have the same-sign local slope and curvature as the original signal. The resulting set of bases is used to analyze future-state vibration signals and the lifting prediction error is inspected. The constraints allow the transform to effectively adapt to global amplitude changes, yielding small prediction errors. 
However, local wave-form changes associated with certain types of gear damage are poorly adapted, causing a significant change in the prediction error. The constrained adaptive lifting diagnostic algorithm is validated using data collected from the University of Maryland Transmission Test Rig and the results are discussed.
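    The prediction-error idea at the heart of the lifting scheme can be illustrated with a generic (non-adaptive, unconstrained) linear-prediction lifting step; this is a sketch of the transform family, not the report's constrained adaptive algorithm:

```python
import numpy as np

def lifting_step(signal):
    """One level of a simple linear-prediction lifting transform.
    Odd samples are predicted from neighboring even samples; the
    prediction error ("detail") is small wherever the predictor
    fits the local waveform well."""
    even = signal[0::2].astype(float)
    odd = signal[1::2].astype(float)
    # Predict each odd sample as the average of its even neighbors
    # (np.roll wraps at the boundary, so the last detail is a boundary artifact)
    predict = 0.5 * (even + np.roll(even, -1))
    detail = odd - predict
    # Update step: smooth the evens using the details (CDF(2,2) coefficient)
    approx = even + 0.25 * (detail + np.roll(detail, 1))
    return approx, detail

# A linear ramp is predicted exactly, so interior details vanish;
# a local waveform change would show up as a large detail coefficient.
x = np.linspace(0.0, 1.0, 16)
approx, detail = lifting_step(x)
```

    In a diagnostic setting, healthy-state vibration produces small, stable details, while gear damage perturbs the local waveform and inflates the prediction error.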

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Warne, Larry Kevin; Jorgenson, Roy Eberhardt; Hudson, Howard Gerald

    When emitters of electromagnetic energy are operated in the vicinity of sensitive components, the electric field at the component location must be kept below a certain level in order to prevent the component from being damaged, or in the case of electro-explosive devices, initiating. The V-Curve is a convenient way to set the electric field limit because it requires minimal information about the problem configuration. In this report we will discuss the basis for the V-Curve. We also consider deviations from the original V-Curve resulting from inductive versus capacitive antennas, increases in directivity gain for long antennas, decreases in input impedance when operating in a bounded region, and mismatches dictated by transmission line losses. In addition, we consider mitigating effects resulting from limited antenna sizes.

  4. Fragmentation dynamics of ionized neon trimer inside helium nanodroplets: a theoretical study.

    PubMed

    Bonhommeau, David; Viel, Alexandra; Halberstadt, Nadine

    2004-06-22

    We report a theoretical study of the fragmentation dynamics of Ne₃⁺ inside helium nanodroplets, following vertical ionization of the neutral neon trimer. The motion of the neon atoms is treated classically, while transitions between the electronic states of the ionic cluster are treated quantum mechanically. A diatomics-in-molecules description of the potential energy surfaces is used, in a minimal basis set consisting of three effective p orbitals on each neon atom for the missing electron. The helium environment is modeled by a friction force acting on the neon atoms when their speed exceeds the Landau velocity. A reasonable range of values for the corresponding friction coefficient is obtained by comparison with existing experimental measurements. © 2004 American Institute of Physics.

  5. Entropy reduction via simplified image contourization

    NASA Technical Reports Server (NTRS)

    Turner, Martin J.

    1993-01-01

    The process of contourization is presented which converts a raster image into a set of plateaux or contours. These contours can be grouped into a hierarchical structure, defining total spatial inclusion, called a contour tree. A contour coder has been developed which fully describes these contours in a compact and efficient manner and is the basis for an image compression method. Simplification of the contour tree has been undertaken by merging contour tree nodes thus lowering the contour tree's entropy. This can be exploited by the contour coder to increase the image compression ratio. By applying general and simple rules derived from physiological experiments on the human vision system, lossy image compression can be achieved which minimizes noticeable artifacts in the simplified image.
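    A toy version of the contourization step can be sketched as follows: quantize the raster into a few plateaux and count 4-connected constant-value regions. The contour-tree construction and the perceptually motivated simplification rules are omitted here; the number of levels and the connectivity choice are illustrative assumptions:

```python
import numpy as np

def contourize(image, levels=4):
    """Quantize a raster image into `levels` equal-width plateaux,
    a toy version of the contourization step described above."""
    lo, hi = image.min(), image.max()
    scaled = (image - lo) / (hi - lo + 1e-12) * levels
    return np.clip(scaled.astype(int), 0, levels - 1)

def count_contours(plateaux):
    """Count 4-connected regions of constant plateau value (flood fill)."""
    seen = np.zeros(plateaux.shape, dtype=bool)
    rows, cols = plateaux.shape
    count = 0
    for r in range(rows):
        for c in range(cols):
            if seen[r, c]:
                continue
            count += 1
            stack = [(r, c)]
            seen[r, c] = True
            while stack:
                i, j = stack.pop()
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if (0 <= ni < rows and 0 <= nj < cols
                            and not seen[ni, nj]
                            and plateaux[ni, nj] == plateaux[i, j]):
                        seen[ni, nj] = True
                        stack.append((ni, nj))
    return count

# A smooth vertical gradient collapses to 4 horizontal bands (one contour each)
img = np.tile(np.linspace(0.0, 1.0, 8), (8, 1)).T
bands = contourize(img, levels=4)
```

    Fewer, larger plateaux mean fewer contour-tree nodes and thus lower entropy for the contour coder, at the cost of visible banding if the simplification is too aggressive.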

  6. Current advances on polynomial resultant formulations

    NASA Astrophysics Data System (ADS)

    Sulaiman, Surajo; Aris, Nor'aini; Ahmad, Shamsatun Nahar

    2017-08-01

    The availability of computer algebra systems (CAS) has led to the resurrection of the resultant method for eliminating one or more variables from a system of polynomials. The resultant matrix method has advantages over the Groebner basis and Ritt-Wu methods, whose complexity and storage requirements are high. This paper focuses on current resultant matrix formulations and investigates their ability, or otherwise, to produce optimal resultant matrices. A determinantal formula that gives the exact resultant, or a formulation that minimizes the presence of extraneous factors, is often sought when the conditions for its existence can be determined. We present some applications of elimination theory via resultant formulations, and examples are given to explain each of the presented settings.
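    For the classical case of two univariate polynomials, the resultant matrix is the Sylvester matrix, whose determinant gives the exact resultant with no extraneous factors; the resultant vanishes exactly when the polynomials share a root. A small NumPy sketch:

```python
import numpy as np

def sylvester_matrix(p, q):
    """Sylvester matrix of two polynomials given as coefficient lists
    (highest degree first). Its determinant is the resultant."""
    m, n = len(p) - 1, len(q) - 1      # degrees of p and q
    S = np.zeros((m + n, m + n))
    for i in range(n):                 # n shifted copies of p's coefficients
        S[i, i:i + m + 1] = p
    for i in range(m):                 # m shifted copies of q's coefficients
        S[n + i, i:i + n + 1] = q
    return S

def resultant(p, q):
    return np.linalg.det(sylvester_matrix(p, q))

# x^2 - 1 and x - 1 share the root x = 1, so the resultant vanishes
r_shared = resultant([1.0, 0.0, -1.0], [1.0, -1.0])
# x^2 + 1 and x - 1 share no root, so the resultant is nonzero
r_coprime = resultant([1.0, 0.0, 1.0], [1.0, -1.0])
```

    The multivariate formulations surveyed in the paper (Macaulay, Dixon, and related matrices) generalize this construction, which is where extraneous factors can appear.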

  7. Hybrid and Constrained Resolution-of-Identity Techniques for Coulomb Integrals.

    PubMed

    Duchemin, Ivan; Li, Jing; Blase, Xavier

    2017-03-14

    The introduction of auxiliary bases to approximate molecular orbital products has paved the way to significant savings in the evaluation of four-center two-electron Coulomb integrals. We present a generalized dual-space strategy that sheds new light on variants of the standard density- and Coulomb-fitting schemes, including the possibility of introducing minimization constraints. In particular, we improve the charge- and multipole-preserving strategies introduced by Baerends and Van Alsenoy, respectively, which we compare to a simple scheme where the Coulomb metric is used for the lowest-angular-momentum auxiliary orbitals only. We explore the merits of these approaches on the basis of extensive Hartree-Fock and MP2 calculations over a standard set of medium-sized molecules.

  8. Correlation consistent valence basis sets for use with the Stuttgart-Dresden-Bonn relativistic effective core potentials: The atoms Ga-Kr and In-Xe

    NASA Astrophysics Data System (ADS)

    Martin, Jan M. L.; Sundermann, Andreas

    2001-02-01

    We propose large-core correlation-consistent (cc) pseudopotential basis sets for the heavy p-block elements Ga-Kr and In-Xe. The basis sets are of cc-pVTZ and cc-pVQZ quality, and have been optimized for use with the large-core (valence-electrons-only) Stuttgart-Dresden-Bonn (SDB) relativistic pseudopotentials. Validation calculations on a variety of third-row and fourth-row diatomics suggest them to be comparable in quality to the all-electron cc-pVTZ and cc-pVQZ basis sets for lighter elements. In particular, the SDB-cc-pVQZ basis set in conjunction with a core polarization potential (CPP) yields excellent agreement with experiment for compounds of the later heavy p-block elements. For accurate calculations on Ga (and, to a lesser extent, Ge) compounds, explicit treatment of 13 valence electrons appears desirable, while it seems inevitable for In compounds. For Ga and Ge, we propose correlation-consistent basis sets extended for (3d) correlation. For accurate calculations on organometallic complexes of interest to homogeneous catalysis, we recommend a combination of the standard cc-pVTZ basis set for first- and second-row elements, the presently derived SDB-cc-pVTZ basis set for heavier p-block elements, and, for transition metals, the small-core [6s5p3d] Stuttgart-Dresden basis set-relativistic effective core potential combination supplemented by (2f1g) functions with exponents given in the Appendix to the present paper.

  9. Hybrid Grid and Basis Set Approach to Quantum Chemistry DMRG

    NASA Astrophysics Data System (ADS)

    Stoudenmire, Edwin Miles; White, Steven

    We present a new approach to using DMRG for quantum chemistry that combines the advantages of a basis set with those of a grid approximation. Because DMRG scales linearly for quasi-one-dimensional systems, it is feasible to approximate the continuum with a fine grid in one direction while using a standard basis set approach for the transverse directions. Compared to standard basis set methods, we reach larger systems and achieve better scaling when approaching the basis set limit. The flexibility and reduced costs of our approach even make it feasible to incorporate advanced DMRG techniques such as simulating real-time dynamics. Supported by the Simons Collaboration on the Many-Electron Problem.

  10. Functional Bregman Divergence and Bayesian Estimation of Distributions (Preprint)

    DTIC Science & Technology

    2008-01-01

    ...shows that if the set of possible minimizers A includes E_PF[F], then g* = E_PF[F] minimizes the expectation of any Bregman divergence. Note the theorem... probability distribution PF defined over the set M. Let A be a set of functions that includes E_PF[F] if it exists. Suppose the function g* minimizes... the expected Bregman divergence between the random function F and any function g in A, such that g* = arg inf_{g in A} E_PF[d_phi(F, g)]. Then, if g* exists...
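    The quoted theorem can be checked numerically for the simplest Bregman divergence: phi(x) = x^2 gives d_phi(f, g) = (f - g)^2, and the minimizer of the expected divergence should be the mean E[F]. A sketch with scalar samples standing in for the random function F:

```python
import numpy as np

# Draw samples of a random quantity F; the theorem predicts that g* = E[F]
# minimizes the expected Bregman divergence for any valid phi. For
# phi(x) = x^2 the divergence is the squared error, checked here on a grid.
rng = np.random.default_rng(0)
samples = rng.normal(loc=2.0, scale=1.0, size=10000)

def expected_divergence(g):
    """Monte Carlo estimate of E[d_phi(F, g)] with phi(x) = x^2."""
    return np.mean((samples - g) ** 2)

mean = samples.mean()
candidates = np.linspace(mean - 1.0, mean + 1.0, 201)
best = candidates[np.argmin([expected_divergence(g) for g in candidates])]
```

    The same check with another Bregman generator (e.g. the KL-type phi(x) = x log x on positive samples) would again pick out the mean, which is the point of the theorem.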

  11. Tests of the Grobner Basis Solution for Lightning Ground Flash Fraction Retrieval

    NASA Technical Reports Server (NTRS)

    Koshak, William; Solakiewicz, Richard; Attele, Rohan

    2011-01-01

    Satellite lightning imagers such as the NASA Tropical Rainfall Measuring Mission Lightning Imaging Sensor (TRMM/LIS) and the future GOES-R Geostationary Lightning Mapper (GLM) are designed to detect total lightning (ground flashes + cloud flashes). However, there is a desire to discriminate ground flashes from cloud flashes from the vantage point of space since this would enhance the overall information content of the satellite lightning data and likely improve its operational and scientific applications (e.g., in severe weather warning, lightning nitrogen oxides studies, and global electric circuit analyses). A Bayesian inversion method was previously introduced for retrieving the fraction of ground flashes in a set of flashes observed from a satellite lightning imager. The method employed a constrained mixed exponential distribution model to describe the lightning optical measurements. To obtain the optimum model parameters (one of which is the ground flash fraction), a scalar function was minimized by a numerical method. In order to improve this optimization, a Grobner basis solution was introduced to obtain analytic representations of the model parameters that serve as a refined initialization scheme to the numerical optimization. In this study, we test the efficacy of the Grobner basis initialization using actual lightning imager measurements and ground flash truth derived from the national lightning network.

  12. Density functional theory calculations of the lowest energy quintet and triplet states of model hemes: role of functional, basis set, and zero-point energy corrections.

    PubMed

    Khvostichenko, Daria; Choi, Andrew; Boulatov, Roman

    2008-04-24

    We investigated the effect of several computational variables, including the choice of the basis set, application of symmetry constraints, and zero-point energy (ZPE) corrections, on the structural parameters and predicted ground electronic state of model 5-coordinate hemes (iron(II) porphines axially coordinated by a single imidazole or 2-methylimidazole). We studied the performance of B3LYP and B3PW91 with eight Pople-style basis sets (up to 6-311+G*) and of the B97-1, OLYP, and TPSS functionals with the 6-31G and 6-31G* basis sets. Only the hybrid functionals B3LYP, B3PW91, and B97-1 reproduced the quintet ground state of the model hemes. With a given functional, the choice of the basis set caused up to 2.7 kcal/mol variation in the quintet-triplet electronic energy gap (ΔEel), in several cases resulting in an inversion of the sign of ΔEel. Single-point energy calculations with triple-zeta basis sets of the Pople (up to 6-311++G(2d,2p)), Ahlrichs (TZVP and TZVPP), and Dunning (cc-pVTZ) families showed the same trend. The zero-point energy of the quintet state was approximately 1 kcal/mol lower than that of the triplet, and accounting for ZPE corrections was crucial for establishing the ground state when the electronic energy of the triplet state was approximately 1 kcal/mol less than that of the quintet. Within a given model chemistry, the effects of symmetry constraints and of a "tense" structure of the iron porphine fragment coordinated to 2-methylimidazole on ΔEel were limited to 0.3 kcal/mol. For both model hemes the best agreement with crystallographic structural data was achieved with the small 6-31G and 6-31G* basis sets. The deviation of the computed frequency of the Fe-Im stretching mode from the experimental value decreased in the order: nonaugmented basis sets, basis sets with polarization functions, and basis sets with polarization and diffuse functions. Contraction of Pople-style basis sets (double-zeta or triple-zeta) affected the results insignificantly for iron(II) porphyrin coordinated with imidazole. Poor performance of a "locally dense" basis set with a large number of basis functions on the Fe center was observed in calculations of quintet-triplet gaps. Our results lead to a series of suggestions for density functional theory calculations of quintet-triplet energy gaps in ferrohemes with a single axial imidazole; these suggestions are potentially applicable to other transition-metal complexes.

  13. Minimal measures for Euler-Lagrange flows on finite covering spaces

    NASA Astrophysics Data System (ADS)

    Wang, Fang; Xia, Zhihong

    2016-12-01

    In this paper we study the minimal measures for positive definite Lagrangian systems on compact manifolds. We are particularly interested in manifolds with more complicated fundamental groups. Mather’s theory classifies the minimal or action-minimizing measures according to the first (co-)homology group of a given manifold. We extend Mather’s notion of minimal measures to a larger class for compact manifolds with non-commutative fundamental groups, and use finite coverings to study the structure of these extended minimal measures. We also define action-minimizers and minimal measures in the homotopical sense. Our program is to study the structure of homotopical minimal measures by considering Mather’s minimal measures on finite covering spaces. Our goal is to show that, in general, manifolds with a non-commutative fundamental group have a richer set of minimal measures, hence a richer dynamical structure. As an example, we study the geodesic flow on surfaces of higher genus. Indeed, by going to the finite covering spaces, the set of minimal measures is much larger and more interesting.

  14. Self-organizing radial basis function networks for adaptive flight control and aircraft engine state estimation

    NASA Astrophysics Data System (ADS)

    Shankar, Praveen

    The performance of nonlinear control algorithms such as feedback linearization and dynamic inversion is heavily dependent on the fidelity of the dynamic model being inverted. Incomplete or incorrect knowledge of the dynamics results in reduced performance and may lead to instability. Augmenting the baseline controller with approximators that utilize a parametrization structure adapted online reduces the effect of this error between the design model and the actual dynamics. However, currently existing parametrizations employ a fixed set of basis functions that do not guarantee arbitrary tracking error performance. To address this problem, we develop a self-organizing parametrization structure that is proven to be stable and can guarantee arbitrary tracking error performance. The training algorithm to grow the network and adapt the parameters is derived from Lyapunov theory. In addition to growing the network of basis functions, a pruning strategy is incorporated to keep the size of the network as small as possible. This algorithm is implemented on a high-performance flight vehicle such as the F-15 military aircraft. The baseline dynamic inversion controller is augmented with a Self-Organizing Radial Basis Function Network (SORBFN) to minimize the effect of the inversion error, which may occur due to imperfect modeling, approximate inversion, or sudden changes in aircraft dynamics. The dynamic inversion controller is simulated for different situations, including control surface failures, modeling errors, and external disturbances, with and without the adaptive network. A performance measure of maximum tracking error is specified for both controllers a priori. Excellent tracking error minimization to a pre-specified level was achieved using the adaptive approximation-based controller, while the baseline dynamic inversion controller failed to meet this performance specification. The performance of the SORBFN-based controller is also compared to a fixed RBF network-based adaptive controller. While the fixed RBF network-based controller, tuned to compensate for control surface failures, fails to achieve the same performance under modeling uncertainty and disturbances, the SORBFN is able to achieve good tracking convergence under all error conditions.

  15. Criticism of generally accepted fundamentals and methodologies of traffic and transportation theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kerner, Boris S.

It is explained why the set of the fundamental empirical features of traffic breakdown (a transition from free flow to congested traffic) should be the empirical basis for any traffic and transportation theory that can be reliably used for control and optimization in traffic networks. It is shown that generally accepted fundamentals and methodologies of traffic and transportation theory are not consistent with the set of the fundamental empirical features of traffic breakdown at a highway bottleneck. To these fundamentals and methodologies of traffic and transportation theory belong (i) the Lighthill-Whitham-Richards (LWR) theory, (ii) the General Motors (GM) model class (for example, the Herman, Gazis et al. GM model, Gipps's model, Payne's model, Newell's optimal velocity (OV) model, Wiedemann's model, the Bando et al. OV model, Treiber's IDM, and Krauß's model), (iii) the understanding of highway capacity as a particular stochastic value, and (iv) principles for traffic and transportation network optimization and control (for example, Wardrop's user equilibrium (UE) and system optimum (SO) principles). As an alternative to these generally accepted fundamentals and methodologies of traffic and transportation theory, we discuss three-phase traffic theory as the basis for traffic flow modeling, and briefly consider the network breakdown minimization (BM) principle for the optimization of traffic and transportation networks with road bottlenecks.

  16. Improved Potential Energy Surface of Ozone Constructed Using the Fitting by Permutationally Invariant Polynomial Function

    DOE PAGES

    Ayouz, Mehdi; Babikov, Dmitri

    2012-01-01

A new global potential energy surface for the ground electronic state of ozone is constructed at the complete basis set level of multireference configuration interaction theory. A method of fitting the data points by an analytical permutationally invariant polynomial function is adopted. A small set of 500 points is preoptimized using the old surface of ozone. In this procedure the positions of points in the configuration space are chosen such that the RMS deviation of the fit is minimized. New ab initio calculations are carried out at these points and are used to build the new surface. Additional points are added to the vicinity of the minimum energy path in order to improve the accuracy of the fit, particularly in the region where the surface of ozone exhibits a shallow van der Waals well. The new surface can be used to study the formation of ozone at thermal energies and its spectroscopy near the dissociation threshold.

  17. Obtaining the Gröbner Initialization for the Ground Flash Fraction Retrieval Algorithm

    NASA Technical Reports Server (NTRS)

    Solakiewicz, R.; Attele, R.; Koshak, W.

    2011-01-01

    At optical wavelengths and from the vantage point of space, the multiple-scattering cloud medium obscures one's view and prevents one from easily determining which flashes strike the ground. However, recent investigations have made some progress on the (easier, but still difficult) problem of estimating the ground flash fraction in a set of N flashes observed from space. In the study by Koshak, a Bayesian inversion method was introduced for retrieving the fraction of ground flashes in a set of flashes observed from a (low earth orbiting or geostationary) satellite lightning imager. The method employed a constrained mixed exponential distribution model to describe the lightning optical measurements. To obtain the optimum model parameters, a scalar function of three variables (one of which is the ground flash fraction) was minimized by a numerical method. This method has formed the basis of a Ground Flash Fraction Retrieval Algorithm (GoFFRA) that is being tested as part of GOES-R GLM risk reduction.

  18. Gestational surrogacy and the role of routine embryo screening: Current challenges and future directions for preimplantation genetic testing.

    PubMed

    Sills, E Scott; Anderson, Robert E; McCaffrey, Mary; Li, Xiang; Arrach, Nabil; Wood, Samuel H

    2016-03-01

    Preimplantation genetic screening (PGS) is a component of IVF entailing selection of an embryo for transfer on the basis of chromosomal normalcy. If PGS were integrated with single embryo transfer (SET) in a surrogacy setting, this approach could improve pregnancy rates, minimize miscarriage risk, and limit multiple gestations. Even without PGS, pregnancy rates for IVF surrogacy cases are generally satisfactory, especially when treatment utilizes embryos derived from young oocytes and transferred to a healthy surrogate. However, there could be a more general role for PGS in surrogacy, since background aneuploidy in embryos remains a major factor driving implantation failure and miscarriage for all infertility patients. At present, the proportion of IVF cases involving PGS is limited, while the number of IVF patients requesting PGS appears to be increasing. In this report, the relevance of PGS for surrogacy in the rapidly changing field of assisted fertility medicine is discussed. © 2015 Wiley Periodicals, Inc.

  19. Assessment of the instantaneous unit hydrograph derived from the theory of topologically random networks

    USGS Publications Warehouse

    Karlinger, M.R.; Troutman, B.M.

    1985-01-01

    An instantaneous unit hydrograph (iuh) based on the theory of topologically random networks (topological iuh) is evaluated in terms of sets of basin characteristics and hydraulic parameters. Hydrographs were computed using two linear routing methods for each of two drainage basins in the southeastern United States and are the basis of comparison for the topological iuh's. Elements in the sets of basin characteristics for the topological iuh's are either the number of first-order streams only (N), or the number of sources together with the number of channel links in the topological diameter (N, D); the hydraulic parameters are values of the celerity and diffusivity constant. Sensitivity analyses indicate that the mean celerity of the internal links in the network is the critical hydraulic parameter for determining the shape of the topological iuh, while the diffusivity constant has minimal effect on the topological iuh. Asymptotic results (source-only) indicate the number of sources need not be large to approximate the topological iuh with the Weibull probability density function.
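For reference, the two-parameter Weibull probability density function mentioned as the asymptotic approximation has the standard form (shape k, scale λ; the fitted parameter values are not given in this record):

```latex
f(t) = \frac{k}{\lambda}\left(\frac{t}{\lambda}\right)^{k-1} e^{-(t/\lambda)^{k}}, \qquad t \ge 0
```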

  20. Localized basis sets for unbound electrons in nanoelectronics.

    PubMed

    Soriano, D; Jacob, D; Palacios, J J

    2008-02-21

    It is shown how unbound electron wave functions can be expanded in suitably chosen localized basis sets for any desired range of energies. In particular, we focus on the use of Gaussian basis sets, commonly used in first-principles codes. The possible usefulness of these basis sets in a first-principles description of field emission or scanning tunneling microscopy at large bias is illustrated by studying a simpler related phenomenon: the lifetime of an electron in a H atom subjected to a strong electric field.

  1. Near Hartree-Fock quality GTO basis sets for the second-row atoms

    NASA Technical Reports Server (NTRS)

    Partridge, Harry

    1987-01-01

    Energy-optimized, near Hartree-Fock quality Gaussian basis sets ranging in size from (17s12p) to (20s15p) are presented for the ground states of the second-row atoms and for Na(2P), Na(+), Na(-), Mg(3P), P(-), S(-), and Cl(-). In addition, optimized supplementary functions are given for the ground state basis sets to describe the negative ions and the excited Na(2P) and Mg(3P) atomic states. The ratios of successive orbital exponents describing the inner part of the 1s and 2p orbitals are found to be nearly independent of both nuclear charge and basis set size. This provides a method of obtaining good starting estimates for other basis set optimizations.
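The near-constant ratio of successive exponents reported here is the defining property of an even-tempered expansion, which makes a convenient starting guess for such optimizations. A minimal sketch (the values of alpha and beta below are placeholders, not the optimized parameters from this record):

```python
def even_tempered_exponents(alpha, beta, n):
    """Generate n Gaussian exponents with a constant ratio beta between
    successive values: alpha_k = alpha * beta**k.  Illustrates the
    near-constant exponent ratios the abstract reports for the inner
    1s and 2p functions."""
    return [alpha * beta**k for k in range(n)]

# Hypothetical starting guess: smallest exponent 0.5, ratio 3.0
guess = even_tempered_exponents(0.5, 3.0, 4)
```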

  2. Relativistic Prolapse-Free Gaussian Basis Sets of Quadruple-ζ Quality: (aug-)RPF-4Z. III. The f-Block Elements.

    PubMed

    Teodoro, Tiago Quevedo; Visscher, Lucas; da Silva, Albérico Borges Ferreira; Haiduke, Roberto Luiz Andrade

    2017-03-14

    The f-block elements are addressed in this third part of a series of prolapse-free basis sets of quadruple-ζ quality (RPF-4Z). Relativistic adapted Gaussian basis sets (RAGBSs) are used as primitive sets of functions while correlating/polarization (C/P) functions are chosen by analyzing energy lowerings upon basis set increments in Dirac-Coulomb multireference configuration interaction calculations with single and double excitations of the valence spinors. These function exponents are obtained by applying the RAGBS parameters in a polynomial expression. Moreover, through the choice of C/P characteristic exponents from functions of lower angular momentum spaces, a reduction in the computational demand is attained in relativistic calculations based on the kinetic balance condition. The present study thus complements the RPF-4Z sets for the whole periodic table (Z ≤ 118). The sets are available as Supporting Information and can also be found at http://basis-sets.iqsc.usp.br .

  3. Combination of large and small basis sets in electronic structure calculations on large systems

    NASA Astrophysics Data System (ADS)

    Røeggen, Inge; Gao, Bin

    2018-04-01

    Two basis sets—a large and a small one—are associated with each nucleus of the system. Each atom has its own separate one-electron basis comprising the large basis set of the atom in question and the small basis sets for the partner atoms in the complex. The perturbed atoms in molecules and solids model is at the core of the approach, since it allows for the definition of perturbed atoms in a system. It is argued that this basis set approach should be particularly useful for periodic systems. Test calculations are performed on one-dimensional arrays of H and Li atoms. The ground-state energy per atom in the linear H array is determined versus bond length.

  4. Minimally invasive extravesical ureteral reimplantation for vesicoureteral reflux.

    PubMed

    Chen, Hsiao-Wen; Lin, Ghi-Jen; Lai, Ching-Horng; Chu, Sheng-Hsien; Chuang, Cheng-Keng

    2002-04-01

    We designed a new extravesical ureteral reimplantation technique with a minimally invasive approach from skin to ureterovesical junction, with less perivesical tissue manipulation to avoid extensive bladder denervation. Between July 1996 and December 2000, 37 boys and 52 girls 1.2 to 10.8 years old (mean age +/- standard deviation 3.8 +/- 2.5 years) were treated with minimally invasive extravesical ureteral reimplantation (113 ureters). Vesicoureteral reflux was graded I to V in 8, 12, 43, 29 and 21 cases, respectively. The technique involves an approximately 10 to 15 mm incision passing through the small triangular gap of the aponeurosis of the external abdominal oblique muscle and transversalis fascia to the point of the ureterovesical junction. The surgical field was exposed with mini-retractors, and fine dissecting instruments were used to avoid unnecessary tissue manipulation. At postoperative followup 1 patient had persistent grade II reflux and 2 had moderate hydronephrosis and hydroureter, which resolved after 18 months. No patient returned due to voiding inefficiency or for pain control after discharge from the outpatient setting. This new technique can be easily used for vesicoureteral reflux, with the advantages of simple intervention for surgeons, especially those with experience in inguinal herniorrhaphy and antireflux surgery, and less wound discomfort for patients. The whole procedure can be performed on an outpatient basis. However, the decision to use this technique should be based on individual consideration.

  5. System for solving diagnosis and hitting set problems

    NASA Technical Reports Server (NTRS)

    Vatan, Farrokh (Inventor); Fijany, Amir (Inventor)

    2007-01-01

    The diagnosis problem arises when a system's actual behavior contradicts the expected behavior, thereby exhibiting symptoms (a collection of conflict sets). System diagnosis is then the task of identifying the faulty components that are responsible for the anomalous behavior. To solve the diagnosis problem, the present invention describes a method for finding the minimal set of faulty components (minimal diagnosis set) that explains the conflict sets. The method includes acts of creating a matrix of the collection of conflict sets, and then creating nodes from the matrix such that each node is a node in a search tree. A determination is made as to whether each node is a leaf node or has any child nodes. If any given node has child nodes, the node is split until all nodes are leaf nodes. Information gathered from the leaf nodes is used to determine the minimal diagnosis set.
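A minimal diagnosis set is exactly a minimal hitting set of the conflict sets. The following brute-force sketch illustrates that correspondence; it enumerates candidates by size rather than using the patented matrix/search-tree method described in the record:

```python
from itertools import combinations

def minimal_hitting_sets(conflict_sets):
    """Return all minimal hitting sets (minimal diagnoses) for a
    collection of conflict sets.  Illustrative brute force only,
    not the search-tree algorithm of the invention."""
    universe = sorted(set().union(*conflict_sets))
    found = []
    for size in range(1, len(universe) + 1):
        for cand in combinations(universe, size):
            s = set(cand)
            # A hitting set intersects every conflict set.
            if all(s & c for c in conflict_sets):
                # Minimal if no smaller hitting set is contained in it.
                if not any(f <= s for f in found):
                    found.append(s)
    return found
```

With conflict sets {1,2} and {2,3}, the minimal diagnoses are {2} (a single fault explaining both symptoms) and {1,3} (a double fault).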

  6. Dynamical basis sets for algebraic variational calculations in quantum-mechanical scattering theory

    NASA Technical Reports Server (NTRS)

    Sun, Yan; Kouri, Donald J.; Truhlar, Donald G.; Schwenke, David W.

    1990-01-01

    New basis sets are proposed for linear algebraic variational calculations of transition amplitudes in quantum-mechanical scattering problems. These basis sets are hybrids of those that yield the Kohn variational principle (KVP) and those that yield the generalized Newton variational principle (GNVP) when substituted in Schlessinger's stationary expression for the T operator. Trial calculations show that efficiencies almost as great as those of the GNVP and much greater than those of the KVP can be obtained, even for basis sets with the majority of the members independent of energy.

  7. On basis set superposition error corrected stabilization energies for large n-body clusters.

    PubMed

    Walczak, Katarzyna; Friedrich, Joachim; Dolg, Michael

    2011-10-07

    In this contribution, we propose an approximate basis set superposition error (BSSE) correction scheme for the site-site function counterpoise and for the Valiron-Mayer function counterpoise correction of second order to account for the basis set superposition error in clusters with a large number of subunits. The accuracy of the proposed scheme has been investigated for a water cluster series at the CCSD(T), CCSD, MP2, and self-consistent field levels of theory using Dunning's correlation consistent basis sets. The BSSE corrected stabilization energies for a series of water clusters are presented. A study regarding the possible savings with respect to computational resources has been carried out as well as a monitoring of the basis set dependence of the approximate BSSE corrections. © 2011 American Institute of Physics

  8. High quality Gaussian basis sets for fourth-row atoms

    NASA Technical Reports Server (NTRS)

    Partridge, Harry; Faegri, Knut, Jr.

    1992-01-01

    Energy optimized Gaussian basis sets of triple-zeta quality for the atoms Rb-Xe have been derived. Two series of basis sets are developed: (24s 16p 10d) and (26s 16p 10d) sets which were expanded to 13d and 19p functions as the 4d and 5p shells become occupied. For the atoms lighter than Cd, the (24s 16p 10d) sets with triple-zeta valence distributions are higher in energy than the corresponding double-zeta distribution. To ensure a triple-zeta distribution and a global energy minimum, the (26s 16p 10d) sets were derived. Total atomic energies from the largest basis sets are between 198 and 284 (mu)E(sub H) above the numerical Hartree-Fock energies.

  9. An optimized proportional-derivative controller for the human upper extremity with gravity.

    PubMed

    Jagodnik, Kathleen M; Blana, Dimitra; van den Bogert, Antonie J; Kirsch, Robert F

    2015-10-15

    When Functional Electrical Stimulation (FES) is used to restore movement in subjects with spinal cord injury (SCI), muscle stimulation patterns should be selected to generate accurate and efficient movements. Ideally, the controller for such a neuroprosthesis will have the simplest architecture possible, to facilitate translation into a clinical setting. In this study, we used the simulated annealing algorithm to optimize two proportional-derivative (PD) feedback controller gain sets for a 3-dimensional arm model that includes musculoskeletal dynamics and has 5 degrees of freedom and 22 muscles, performing goal-oriented reaching movements. Controller gains were optimized by minimizing a weighted sum of position errors, orientation errors, and muscle activations. After optimization, gain performance was evaluated on the basis of accuracy and efficiency of reaching movements, along with three other benchmark gain sets not optimized for our system, on a large set of dynamic reaching movements for which the controllers had not been optimized, to test ability to generalize. Robustness in the presence of weakened muscles was also tested. The two optimized gain sets were found to have very similar performance to each other on all metrics, and to exhibit significantly better accuracy, compared with the three standard gain sets. All gain sets investigated used physiologically acceptable amounts of muscular activation. It was concluded that optimization can yield significant improvements in controller performance while still maintaining muscular efficiency, and that optimization should be considered as a strategy for future neuroprosthesis controller design. Published by Elsevier Ltd.
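The controller architecture and objective described above can be sketched compactly. The code below shows a generic PD feedback law and a weighted-sum cost of the form the study minimizes; the gain values and weights are placeholders, not the optimized gain sets from the paper:

```python
import numpy as np

def pd_control(target, state, state_dot, kp, kd):
    """Proportional-derivative feedback: command proportional to the
    tracking error plus its derivative (target assumed constant, so
    the error derivative is -state_dot)."""
    error = target - state
    return kp * error + kd * (-state_dot)

def tracking_cost(position_errors, activations, w_pos=1.0, w_act=0.1):
    """Weighted sum of squared position errors and muscle activations,
    the general form of objective the simulated annealing optimizer
    minimizes (weights here are illustrative)."""
    return (w_pos * np.sum(np.square(position_errors))
            + w_act * np.sum(np.square(activations)))
```

In the study this kind of cost also includes orientation errors; simulated annealing then searches the gain space for the set minimizing the total.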

  10. CUTSETS - MINIMAL CUT SET CALCULATION FOR DIGRAPH AND FAULT TREE RELIABILITY MODELS

    NASA Technical Reports Server (NTRS)

    Iverson, D. L.

    1994-01-01

    Fault tree and digraph models are frequently used for system failure analysis. Both types of models represent a failure space view of the system using AND and OR nodes in a directed graph structure. Fault trees must have a tree structure and do not allow cycles or loops in the graph. Digraphs allow any pattern of interconnection, including loops, in the graph. A common operation performed on digraph and fault tree models is the calculation of minimal cut sets. A cut set is a set of basic failures that could cause a given target failure event to occur. A minimal cut set for a target event node in a fault tree or digraph is any cut set for the node with the property that if any one of the failures in the set is removed, the occurrence of the other failures in the set will not cause the target failure event. CUTSETS will identify all the minimal cut sets for a given node. The CUTSETS package contains programs that solve for minimal cut sets of fault trees and digraphs using object-oriented programming techniques. These cut set codes can be used to solve graph models for reliability analysis and identify potential single-point failures in a modeled system. The fault tree minimal cut set code reads in a fault tree model input file with each node listed in a text format. In the input file the user specifies a top node of the fault tree and a maximum cut set size to be calculated. CUTSETS will find minimal sets of basic events which would cause the failure at the output of a given fault tree gate. The program can find all the minimal cut sets of a node, or minimal cut sets up to a specified size. The algorithm performs a recursive top-down parse of the fault tree, starting at the specified top node, and combines the cut sets of each child node into sets of basic event failures that would cause the failure event at the output of that gate. Minimal cut set solutions can be found for all nodes in the fault tree or just for the top node.
The digraph cut set code uses the same techniques as the fault tree cut set code, except it includes all upstream digraph nodes in the cut sets for a given node and checks for cycles in the digraph during the solution process. CUTSETS solves for specified nodes and will not automatically solve for all upstream digraph nodes. The cut sets will be output as a text file. CUTSETS includes a utility program that will convert the popular COD format digraph model description files into text input files suitable for use with the CUTSETS programs. FEAT (MSC-21873) and FIRM (MSC-21860) available from COSMIC are examples of programs that produce COD format digraph model description files that may be converted for use with the CUTSETS programs. CUTSETS is written in C-language to be machine independent. It has been successfully implemented on a Sun running SunOS, a DECstation running ULTRIX, a Macintosh running System 7, and a DEC VAX running VMS. The RAM requirement varies with the size of the models. CUTSETS is available in UNIX tar format on a .25 inch streaming magnetic tape cartridge (standard distribution) or on a 3.5 inch diskette. It is also available on a 3.5 inch Macintosh format diskette or on a 9-track 1600 BPI magnetic tape in DEC VAX FILES-11 format. Sample input and sample output are provided on the distribution medium. An electronic copy of the documentation in Macintosh Microsoft Word format is included on the distribution medium.
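The recursive top-down parse described in this record can be sketched as follows. This is an illustrative reimplementation, not the CUTSETS C code: OR gates concatenate their children's cut sets, AND gates take cross products, and non-minimal sets are filtered out:

```python
def cut_sets(node, tree):
    """Minimal cut sets of a fault-tree node via a recursive top-down
    parse.  `tree` maps gate names to ('AND'|'OR', [children]); names
    absent from `tree` are basic events."""
    if node not in tree:                      # basic event: one singleton cut set
        return [frozenset([node])]
    gate, children = tree[node]
    child_sets = [cut_sets(c, tree) for c in children]
    if gate == 'OR':                          # any child failure suffices
        combined = [cs for sets in child_sets for cs in sets]
    else:                                     # 'AND': all children must fail
        combined = child_sets[0]
        for sets in child_sets[1:]:
            combined = [a | b for a in combined for b in sets]
    # Keep only minimal sets: drop any set strictly containing another.
    minimal = [s for s in combined if not any(t < s for t in combined)]
    return list(dict.fromkeys(minimal))       # dedupe, preserve order
```

For example, with TOP = AND(A, OR(B, C)) the minimal cut sets of TOP are {A, B} and {A, C}.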

  11. Relativistic well-tempered Gaussian basis sets for helium through mercury

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Okada, S.; Matsuoka, O.

    1989-10-01

    Exponent parameters of the nonrelativistically optimized well-tempered Gaussian basis sets of Huzinaga and Klobukowski have been employed for Dirac-Fock-Roothaan calculations without their reoptimization. For light atoms He (atomic number Z=2) through Rh (Z=45), the number of exponent parameters used has been the same as in the nonrelativistic basis sets, and for heavier atoms Pd (Z=46) through Hg (Z=80), two 2p (and three 3d) Gaussian basis functions have been augmented. The scheme of kinetic energy balance and the uniformly charged sphere model of atomic nuclei have been adopted. The qualities of the calculated basis sets are close to the Dirac-Fock limit.

  12. Climate Intervention as an Optimization Problem

    NASA Astrophysics Data System (ADS)

    Caldeira, Ken; Ban-Weiss, George A.

    2010-05-01

    Typically, climate model simulations of intentional intervention in the climate system have taken the approach of imposing a change (e.g., in solar flux, aerosol concentrations, or aerosol emissions) and then predicting how that imposed change might affect Earth's climate or chemistry. Computations proceed from cause to effect. However, humans often proceed from "What do I want?" to "How do I get it?" One approach to thinking about intentional intervention in the climate system ("geoengineering") is to ask "What kind of climate do we want?" and then ask "What pattern of radiative forcing would come closest to achieving that desired climate state?" This involves defining climate goals and a cost function that measures how closely those goals are attained. (An important next step is to ask "How would we go about producing these desired patterns of radiative forcing?" However, this question is beyond the scope of our present study.) We performed a variety of climate simulations in NCAR's CAM3.1 atmospheric general circulation model with a slab ocean model and thermodynamic sea ice model. We then evaluated, for a specific set of climate forcing basis functions (i.e., aerosol concentration distributions), the extent to which the climate response to a linear combination of those basis functions was similar to a linear combination of the climate response to each basis function taken individually. We then developed several cost functions (e.g., relative to the 1xCO2 climate, minimize the rms difference in zonal and annual mean land temperature, minimize the rms difference in zonal and annual mean runoff, or minimize the rms difference in a combination of these temperature and runoff indices) and then predicted optimal combinations of our basis functions that would minimize these cost functions. Lastly, we produced forward simulations of the predicted optimal radiative forcing patterns and compared these with our expected results.
Obviously, our climate model is much simpler than reality, and predictions from individual models do not provide a sound basis for action; nevertheless, our model results indicate that the general approach outlined here can lead to patterns of radiative forcing that make the zonal annual mean climate of a high-CO2 world markedly more similar to that of a low-CO2 world simultaneously for both temperature and hydrological indices, where the degree of similarity is measured using our explicit cost functions. We restricted ourselves to zonally uniform aerosol concentration distributions that can be defined in terms of a positive-definite quadratic equation on the sine of latitude. Under this constraint, applying an aerosol distribution in a 2xCO2 climate that minimized a combination of the rms differences in zonal and annual mean land temperature and runoff relative to the 1xCO2 climate, the rms difference in zonal and annual mean temperatures was reduced by ~90% and the rms difference in zonal and annual mean runoff was reduced by ~80%. This indicates that there may be potential for stratospheric aerosols to diminish simultaneously both the temperature and hydrological cycle changes caused by excess CO2 in the atmosphere. Clearly, our model does not include many factors (e.g., socio-political consequences, chemical consequences, ocean circulation changes, aerosol transport and microphysics), so we do not argue strongly for our specific climate model results; however, we do argue strongly in favor of our methodological approach. The proposed approach is general, in the sense that cost functions can be developed that represent different valuations. While the choice of appropriate cost functions is inherently a value judgment, evaluating those functions for a specific climate simulation is a quantitative exercise.
Thus, the use of explicit cost functions in evaluating model results for climate intervention scenarios is a clear way of separating value judgments from purely scientific and technical issues.
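If the climate responses to the forcing basis functions add linearly, as the record's test of linearity suggests, finding the rms-minimizing combination reduces to a least-squares problem. A sketch under that assumption (array shapes and names are illustrative, not from the study's code):

```python
import numpy as np

def optimal_forcing_weights(responses, target):
    """Find the linear combination of climate responses to individual
    forcing basis functions that best reproduces a target pattern in
    the least-squares (rms-minimizing) sense.
    responses: (n_points, n_basis) array of per-basis-function response
    patterns; target: (n_points,) desired change pattern."""
    weights, *_ = np.linalg.lstsq(responses, target, rcond=None)
    return weights
```

A composite cost mixing temperature and runoff indices, as in the record, corresponds to stacking the two sets of rows (with their weights) into `responses` and `target` before solving.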

  13. Performance assessment of density functional methods with Gaussian and Slater basis sets using 7σ orbital momentum distributions of N2O

    NASA Astrophysics Data System (ADS)

    Wang, Feng; Pang, Wenning; Duffy, Patrick

    2012-12-01

    Performance of a number of commonly used density functional methods in chemistry (B3LYP, BHandH, BP86, PW91, VWN, LB94, PBE0, SAOP, and X3LYP, as well as the Hartree-Fock (HF) method) has been assessed using orbital momentum distributions of the 7σ orbital of nitrous oxide (NNO), which models electron behaviour in a chemically significant region. The density functional methods are combined with a number of Gaussian basis sets (Pople's 6-31G*, 6-311G**, DGauss TZVP, and Dunning's aug-cc-pVTZ, as well as even-tempered Slater basis sets, namely et-DZPp, et-QZ3P, et-QZ+5P, and et-pVQZ). Orbital momentum distributions of the 7σ orbital in the ground electronic state of NNO, which are obtained from a Fourier transform into momentum space from single-point electronic calculations employing the above models, are compared with experimental measurements of the same orbital from electron momentum spectroscopy (EMS). The present study reveals information on the performance of (a) the density functional methods, (b) Gaussian and Slater basis sets, (c) combinations of the density functional methods and basis sets, that is, the models, (d) orbital momentum distributions, rather than a group of specific molecular properties, and (e) the entire region of chemical significance of the orbital. It is found that discrepancies between the measured and calculated momentum distributions of this orbital occur in the small momentum region (i.e. the large r region). In general, the Slater basis sets achieve better overall performance than the Gaussian basis sets. Performance of the Gaussian basis sets varies noticeably when combined with different Vxc functionals, but Dunning's aug-cc-pVTZ basis set achieves the best performance for the momentum distributions of this orbital. The overall performance of the B3LYP and BP86 models is similar to that of newer models such as X3LYP and SAOP.
The present study also demonstrates that the combinations of the density functional methods and the basis sets indeed make a difference in the quality of the calculated orbitals.

  14. A Comparison of the Behavior of Functional/Basis Set Combinations for Hydrogen-Bonding in the Water Dimer with Emphasis on Basis Set Superposition Error

    PubMed Central

    Plumley, Joshua A.; Dannenberg, J. J.

    2011-01-01

    We evaluate the performance of ten functionals (B3LYP, M05, M05-2X, M06, M06-2X, B2PLYP, B2PLYPD, X3LYP, B97D and MPWB1K) in combination with 16 basis sets ranging in complexity from 6-31G(d) to aug-cc-pV5Z for the calculation of the H-bonded water dimer with the goal of defining which combinations of functionals and basis sets provide a combination of economy and accuracy for H-bonded systems. We have compared the results to the best non-DFT molecular orbital calculations and to experimental results. Several of the smaller basis sets lead to qualitatively incorrect geometries when optimized on a normal potential energy surface (PES). This problem disappears when the optimization is performed on a counterpoise corrected PES. The calculated ΔE's with the largest basis sets vary from -4.42 (B97D) to -5.19 (B2PLYPD) kcal/mol for the different functionals. Small basis sets generally predict stronger interactions than the large ones. We found that, due to error compensation, the smaller basis sets gave the best results (in comparison to experimental and high level non-DFT MO calculations) when combined with a functional that predicts a weak interaction with the largest basis set. Since many applications are complex systems and require economical calculations, we suggest the following functional/basis set combinations in order of increasing complexity and cost: 1) D95(d,p) with B3LYP, B97D, M06 or MPWB1k; 2) 6-311G(d,p) with B3LYP; 3) D95++(d,p) with B3LYP, B97D or MPWB1K; 4) 6-311++G(d,p) with B3LYP or B97D; and 5) aug-cc-pVDZ with M05-2X, M06-2X or X3LYP. PMID:21328398

  15. A comparison of the behavior of functional/basis set combinations for hydrogen-bonding in the water dimer with emphasis on basis set superposition error.

    PubMed

    Plumley, Joshua A; Dannenberg, J J

    2011-06-01

    We evaluate the performance of ten functionals (B3LYP, M05, M05-2X, M06, M06-2X, B2PLYP, B2PLYPD, X3LYP, B97D, and MPWB1K) in combination with 16 basis sets ranging in complexity from 6-31G(d) to aug-cc-pV5Z for the calculation of the H-bonded water dimer with the goal of defining which combinations of functionals and basis sets provide a combination of economy and accuracy for H-bonded systems. We have compared the results to the best non-density functional theory (non-DFT) molecular orbital (MO) calculations and to experimental results. Several of the smaller basis sets lead to qualitatively incorrect geometries when optimized on a normal potential energy surface (PES). This problem disappears when the optimization is performed on a counterpoise (CP) corrected PES. The calculated interaction energies (ΔEs) with the largest basis sets vary from -4.42 (B97D) to -5.19 (B2PLYPD) kcal/mol for the different functionals. Small basis sets generally predict stronger interactions than the large ones. We found that, because of error compensation, the smaller basis sets gave the best results (in comparison to experimental and high-level non-DFT MO calculations) when combined with a functional that predicts a weak interaction with the largest basis set. As many applications are complex systems and require economical calculations, we suggest the following functional/basis set combinations in order of increasing complexity and cost: (1) D95(d,p) with B3LYP, B97D, M06, or MPWB1k; (2) 6-311G(d,p) with B3LYP; (3) D95++(d,p) with B3LYP, B97D, or MPWB1K; (4) 6-311++G(d,p) with B3LYP or B97D; and (5) aug-cc-pVDZ with M05-2X, M06-2X, or X3LYP. Copyright © 2011 Wiley Periodicals, Inc.
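For reference, the counterpoise (CP) correction used in records 14 and 15 is the standard Boys-Bernardi scheme: each monomer energy is recomputed in the full dimer basis, so the interaction energy of dimer AB becomes (superscripts denote the basis set, subscripts the system evaluated):

```latex
\Delta E^{\mathrm{CP}}_{\mathrm{int}} = E^{AB}_{AB} - E^{AB}_{A} - E^{AB}_{B}
```

Record 16 additionally includes terms accounting for the geometry change of the monomers upon dimer formation, which matter for strongly bound complexes such as the formic acid dimer.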

  16. On the validity of the basis set superposition error and complete basis set limit extrapolations for the binding energy of the formic acid dimer

    NASA Astrophysics Data System (ADS)

    Miliordos, Evangelos; Xantheas, Sotiris S.

    2015-03-01

    We report the variation of the binding energy of the Formic Acid Dimer with the size of the basis set at the Coupled Cluster with iterative Singles, Doubles and perturbatively connected Triple replacements [CCSD(T)] level of theory, estimate the Complete Basis Set (CBS) limit, and examine the validity of the Basis Set Superposition Error (BSSE) correction for this quantity, which was previously challenged by Kalescky, Kraka, and Cremer (KKC) [J. Chem. Phys. 140, 084315 (2014)]. Our results indicate that the BSSE correction, including terms that account for the substantial geometry change of the monomers due to the formation of two strong hydrogen bonds in the dimer, is indeed valid for obtaining accurate estimates for the binding energy of this system, as it exhibits the expected decrease with increasing basis set size. We attribute the discrepancy between our current results and those of KKC to their use of a valence basis set in conjunction with the correlation of all electrons (i.e., including the 1s of C and O). We further show that the use of a core-valence set in conjunction with all-electron correlation converges faster to the CBS limit, as the BSSE correction is less than half of that in the valence-electron/valence-basis-set case. The uncorrected and BSSE-corrected binding energies were found to produce the same (within 0.1 kcal/mol) CBS limits. We obtain CCSD(T)/CBS best estimates of De = -16.1 ± 0.1 kcal/mol and D0 = -14.3 ± 0.1 kcal/mol, the latter in excellent agreement with the experimental value of -14.22 ± 0.12 kcal/mol.
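
The inverse-cube CBS extrapolation used in studies like this one can be sketched as a two-point formula. The triple/quadruple-zeta energies below are placeholders, not the paper's formic acid values:

```python
def cbs_two_point(e_m, e_n, m, n):
    """Two-point CBS extrapolation assuming E(x) = E_cbs + A / x**3,
    the standard inverse-cube form; m and n are basis-set cardinal numbers
    (e.g. 3 and 4 for triple- and quadruple-zeta)."""
    return (n**3 * e_n - m**3 * e_m) / (n**3 - m**3)

# Hypothetical binding energies (kcal/mol) at cardinal numbers 3 and 4:
e_tz, e_qz = -15.4, -15.9
e_cbs = cbs_two_point(e_tz, e_qz, 3, 4)
# e_cbs lies below e_qz, continuing the monotone approach to the limit.
```

Applied to a series that exactly follows E(n) = E_cbs + A/n³, the formula recovers E_cbs to machine precision.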

  17. Characterizing and Understanding the Remarkably Slow Basis Set Convergence of Several Minnesota Density Functionals for Intermolecular Interaction Energies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mardirossian, Narbe; Head-Gordon, Martin

    2013-08-22

    For a set of eight equilibrium intermolecular complexes, it is discovered in this paper that the basis set limit (BSL) cannot be reached by aug-cc-pV5Z for three of the Minnesota density functionals: M06-L, M06-HF, and M11-L. In addition, the M06 and M11 functionals exhibit substantial, but less severe, difficulties in reaching the BSL. By using successively finer grids, it is demonstrated that this issue is not related to the numerical integration of the exchange-correlation functional. In addition, it is shown that the difficulty in reaching the BSL is not a direct consequence of the structure of the augmented functions in Dunning’s basis sets, since modified augmentation yields similar results. By using a very large custom basis set, the BSL appears to be reached for the HF dimer for all of the functionals. As a result, it is concluded that the difficulties faced by several of the Minnesota density functionals are related to an interplay between the form of these functionals and the structure of standard basis sets. It is speculated that the difficulty in reaching the basis set limit is related to the magnitude of the inhomogeneity correction factor (ICF) of the exchange functional. A simple modification of the M06-L exchange functional that systematically reduces the basis set superposition error (BSSE) for the HF dimer in the aug-cc-pVQZ basis set is presented, further supporting the speculation that the difficulty in reaching the BSL is caused by the magnitude of the exchange functional ICF. Finally, the BSSE is plotted with respect to the internuclear distance of the neon dimer for two of the examined functionals.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miliordos, Evangelos; Aprà, Edoardo; Xantheas, Sotiris S.

    We establish a new estimate for the binding energy between two benzene molecules in the parallel-displaced (PD) conformation by systematically converging (i) the intra- and intermolecular geometry at the minimum, (ii) the expansion of the orbital basis set, and (iii) the level of electron correlation. The calculations were performed at the second-order Møller–Plesset perturbation (MP2) and the coupled cluster including singles, doubles, and a perturbative estimate of triples replacement [CCSD(T)] levels of electronic structure theory. At both levels of theory, by including results corrected for basis set superposition error (BSSE), we have estimated the complete basis set (CBS) limit by employing the family of Dunning’s correlation-consistent polarized valence basis sets. The largest MP2 calculation was performed with the cc-pV6Z basis set (2772 basis functions), whereas the largest CCSD(T) calculation was with the cc-pV5Z basis set (1752 basis functions). The cluster geometries were optimized with basis sets up to quadruple-ζ quality, observing that both its intra- and intermolecular parts have practically converged with the triple-ζ quality sets. The use of converged geometries was found to play an important role in obtaining accurate estimates for the CBS limits. Our results demonstrate that the binding energies with the families of the plain (cc-pVnZ) and augmented (aug-cc-pVnZ) sets converge [within <0.01 kcal/mol for MP2 and <0.15 kcal/mol for CCSD(T)] to the same CBS limit. In addition, the average of the uncorrected and BSSE-corrected binding energies was found to converge to the same CBS limit much faster than either of the two constituents (uncorrected or BSSE-corrected binding energies). Because the family of augmented basis sets (especially the larger sets) causes serious linear-dependency problems, the plain basis sets (for which no linear dependencies were found) are deemed a more efficient and straightforward path to an accurate CBS limit. We considered extrapolations of the uncorrected (ΔE) and BSSE-corrected (ΔEcp) binding energies, their average value (ΔEave), as well as the average of the latter over the plain and augmented sets (ΔĒave), with the cardinal number of the basis set n. Our best estimate of the CCSD(T)/CBS limit for the π–π binding energy in the PD benzene dimer is De = -2.65 ± 0.02 kcal/mol. The best CCSD(T)/cc-pV5Z calculated value is -2.62 kcal/mol, just 0.03 kcal/mol away from the CBS limit. For comparison, the MP2/CBS limit estimate is -5.00 ± 0.01 kcal/mol, demonstrating a 90% overbinding with respect to CCSD(T). Finally, the spin-component-scaled (SCS) MP2 variant was found to closely reproduce the CCSD(T) results for each basis set, while scaled-opposite-spin (SOS) MP2 yielded results that are too low when compared to CCSD(T).
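
The observation that the average of uncorrected and BSSE-corrected binding energies converges faster can be checked in a few lines. The per-cardinal-number values below are invented, arranged only so the raw series approaches a limit from below and the CP series from above:

```python
def half_cp_average(de_raw, de_cp):
    """Average of uncorrected and CP-corrected binding energies; the record
    above observes this average approaches the CBS limit faster than either."""
    return 0.5 * (de_raw + de_cp)

# Hypothetical binding energies (kcal/mol) per cardinal number n:
raw = {2: -3.60, 3: -3.10, 4: -2.85}  # overbinds; approaches limit from below
cp = {2: -1.90, 3: -2.30, 4: -2.50}   # underbinds; approaches limit from above
avg = {n: half_cp_average(raw[n], cp[n]) for n in raw}
# The averaged sequence is much flatter than either constituent sequence.
```

Because the two errors have opposite sign here, their half-sum largely cancels, which is the intuition behind averaging before extrapolating.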

  19. Atomization Energies of SO and SO2; Basis Set Extrapolation Revisited

    NASA Technical Reports Server (NTRS)

    Bauschlicher, Charles W., Jr.; Ricca, Alessandra; Arnold, James (Technical Monitor)

    1998-01-01

    The addition of tight functions to sulphur and extrapolation to the complete basis set limit are required to obtain accurate atomization energies. Six different extrapolation procedures are tried. The best atomization energies come from the series of basis sets that yield the most consistent results for all extrapolation techniques. In the variable alpha approach, alpha values larger than 4.5 or smaller than 3 appear to suggest that the extrapolation may not be reliable. It does not appear possible to determine a reliable basis set series using only the triple and quadruple zeta based sets. The scalar relativistic effects reduce the atomization energies of SO and SO2 by 0.34 and 0.81 kcal/mol, respectively, and clearly must be accounted for if a highly accurate atomization energy is to be computed. The magnitude of the core-valence (CV) contribution to the atomization energy is affected by missing diffuse valence functions. The CV contribution is much more stable if basis set superposition errors are accounted for. A similar study of SF, SF(+), and SF6 shows that the best family of basis sets varies with the nature of the S bonding.
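
The variable-alpha extrapolation mentioned above can be sketched as a three-point fit of E(n) = E_CBS + A·n^(−α), solving for α by bisection; the abstract's reliability heuristic (α roughly between 3 and 4.5) then becomes a simple check. The energy series below is synthetic, not the SO/SO2 data, and the function names are illustrative:

```python
def solve_alpha(e3, e4, e5, lo=0.5, hi=12.0):
    """Solve for alpha in E(n) = E_cbs + A * n**(-alpha), given energies at
    n = 3, 4, 5, by bisecting on the ratio of successive differences
    (which is monotone increasing in alpha)."""
    target = (e3 - e4) / (e4 - e5)
    def ratio(a):
        return (3.0**-a - 4.0**-a) / (4.0**-a - 5.0**-a)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if ratio(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def variable_alpha_cbs(e3, e4, e5):
    """CBS limit from the fitted alpha; fitted alpha outside roughly
    [3, 4.5] hints that the extrapolation may be unreliable."""
    a = solve_alpha(e3, e4, e5)
    amp = (e4 - e5) / (4.0**-a - 5.0**-a)
    return e5 - amp * 5.0**-a, a

# Synthetic series with known E_cbs = -100.0 and alpha = 3.5 (not real data):
e3, e4, e5 = (-100.0 + 8.0 * n**-3.5 for n in (3, 4, 5))
e_cbs, alpha = variable_alpha_cbs(e3, e4, e5)
```

On exact model data the fit recovers both parameters; on real energies the recovered α serves as the diagnostic the abstract describes.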

  20. Conformational analysis of cellobiose by electronic structure theories.

    PubMed

    French, Alfred D; Johnson, Glenn P; Cramer, Christopher J; Csonka, Gábor I

    2012-03-01

    Adiabatic Φ/ψ maps for cellobiose were prepared with B3LYP density functional theory. A mixed basis set was used for minimization, followed by 6-31+G(d) single-point calculations, with and without SMD continuum solvation. Different arrangements of the exocyclic groups (38 starting geometries) were considered for each Φ/ψ point. The vacuum calculations agreed with earlier computational and experimental results on the preferred gas phase conformation (anti-Φ(H), syn-ψ(H)), and the results from the solvated calculations were consistent with the syn-Φ(H)/ψ(H) conformations found in condensed phases (crystals or solutions). Results from related studies were compared, and there is substantial dependence on the solvation model as well as arrangements of exocyclic groups. New stabilizing interactions were revealed by Atoms-In-Molecules theory. Published by Elsevier Ltd.

  1. Calculating Interaction Energies Using First Principle Theories: Consideration of Basis Set Superposition Error and Fragment Relaxation

    ERIC Educational Resources Information Center

    Bowen, J. Philip; Sorensen, Jennifer B.; Kirschner, Karl N.

    2007-01-01

    The analysis explains the basis set superposition error (BSSE) and the fragment relaxation involved in calculating interaction energies using various first-principles theories. Treating the fragments at a correlated level and increasing the size of the basis set can substantially reduce the BSSE.
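
The counterpoise-plus-relaxation bookkeeping this record refers to can be written out explicitly. The function and the energy values (in hartree) are hypothetical, chosen only so the two terms are visible:

```python
def cp_with_relaxation(e_ab, e_frag_dimer_basis, e_frag_frozen, e_frag_opt):
    """CP-corrected binding energy including fragment relaxation:
    the CP interaction term is evaluated at the frozen (in-complex) fragment
    geometries, then the deformation penalty of distorting each fragment away
    from its relaxed geometry is added. All arguments except e_ab are lists,
    one entry per fragment."""
    e_int_cp = e_ab - sum(e_frag_dimer_basis)              # CP interaction term
    deformation = sum(f - o for f, o in zip(e_frag_frozen, e_frag_opt))
    return e_int_cp + deformation

# Hypothetical two-fragment example (hartree):
de = cp_with_relaxation(
    e_ab=-100.000,
    e_frag_dimer_basis=[-49.996, -49.996],  # frozen geometry, full dimer basis
    e_frag_frozen=[-49.990, -49.990],       # frozen geometry, monomer basis
    e_frag_opt=[-49.992, -49.992],          # relaxed geometry, monomer basis
)
# Relaxation (deformation) is a positive penalty that weakens the net binding.
```
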

  2. Computing global minimizers to a constrained B-spline image registration problem from optimal l1 perturbations to block match data

    PubMed Central

    Castillo, Edward; Castillo, Richard; Fuentes, David; Guerrero, Thomas

    2014-01-01

    Purpose: Block matching is a well-known strategy for estimating corresponding voxel locations between a pair of images according to an image similarity metric. Though robust to issues such as image noise and large magnitude voxel displacements, the estimated point matches are not guaranteed to be spatially accurate. However, the underlying optimization problem solved by the block matching procedure is similar in structure to the class of optimization problem associated with B-spline based registration methods. By exploiting this relationship, the authors derive a numerical method for computing a global minimizer to a constrained B-spline registration problem that incorporates the robustness of block matching with the global smoothness properties inherent to B-spline parameterization. Methods: The method reformulates the traditional B-spline registration problem as a basis pursuit problem describing the minimal l1-perturbation to block match pairs required to produce a B-spline fitting error within a given tolerance. The sparsity pattern of the optimal perturbation then defines a voxel point cloud subset on which the B-spline fit is a global minimizer to a constrained variant of the B-spline registration problem. As opposed to traditional B-spline algorithms, the optimization step involving the actual image data is addressed by block matching. Results: The performance of the method is measured in terms of spatial accuracy using ten inhale/exhale thoracic CT image pairs (available for download at www.dir-lab.com) obtained from the COPDgene dataset and corresponding sets of expert-determined landmark point pairs. The results of the validation procedure demonstrate that the method can achieve a high spatial accuracy on a significantly complex image set. 
Conclusions: The proposed methodology is demonstrated to achieve a high spatial accuracy and is generalizable in that it can employ any displacement field parameterization described as a least squares fit to block match generated estimates. Thus, the framework allows for a wide range of image similarity block match metrics and physical modeling combinations. PMID:24694135
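
A heavily simplified 1-D sketch of the idea in this record, substituting a straight-line fit for the B-spline field and greedy residual trimming for the l1 basis-pursuit step: the "perturbation" is nonzero only at block matches the smooth fit cannot explain within tolerance, and the fit is then recomputed on the remaining point cloud. All names and points are invented:

```python
def fit_line(points):
    """Ordinary least-squares fit y ~ a*x + b (stand-in for the B-spline fit)."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return a, (sy - a * sx) / n

def trimmed_fit(points, tol):
    """Greedy sketch: drop the fewest block-match pairs needed so the smooth
    fit's residuals all fall within tol, then return the fit on the rest."""
    pts = list(points)
    while len(pts) > 2:
        a, b = fit_line(pts)
        worst, worst_pt = max((abs(y - (a * x + b)), (x, y)) for x, y in pts)
        if worst <= tol:
            break
        pts.remove(worst_pt)
    return fit_line(pts)

# Hypothetical 1-D block matches on the line y = 2x + 1, one gross outlier:
pts = [(float(x), 2.0 * x + 1.0) for x in range(8)]
pts[3] = (3.0, 30.0)
a, b = trimmed_fit(pts, tol=1.0)
```

The paper's actual formulation solves a convex l1 problem with a global-optimality guarantee; this greedy trim only illustrates the sparse-outlier intuition.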

  3. Identifying finite-time coherent sets from limited quantities of Lagrangian data.

    PubMed

    Williams, Matthew O; Rypina, Irina I; Rowley, Clarence W

    2015-08-01

    A data-driven procedure for identifying the dominant transport barriers in a time-varying flow from limited quantities of Lagrangian data is presented. Our approach partitions state space into coherent pairs, which are sets of initial conditions chosen to minimize the number of trajectories that "leak" from one set to the other under the influence of a stochastic flow field during a pre-specified interval in time. In practice, this partition is computed by solving an optimization problem to obtain a pair of functions whose signs determine set membership. From prior experience with synthetic, "data rich" test problems, and conceptually related methods based on approximations of the Perron-Frobenius operator, we observe that the functions of interest typically appear to be smooth. We exploit this property by using the basis sets associated with spectral or "mesh-free" methods, and as a result, our approach has the potential to more accurately approximate these functions given a fixed amount of data. In practice, this could enable better approximations of the coherent pairs in problems with relatively limited quantities of Lagrangian data, which is usually the case with experimental geophysical data. We apply this method to three examples of increasing complexity: The first is the double gyre, the second is the Bickley Jet, and the third is data from numerically simulated drifters in the Sulu Sea.

  4. Identifying finite-time coherent sets from limited quantities of Lagrangian data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, Matthew O.; Rypina, Irina I.; Rowley, Clarence W.

    A data-driven procedure for identifying the dominant transport barriers in a time-varying flow from limited quantities of Lagrangian data is presented. Our approach partitions state space into coherent pairs, which are sets of initial conditions chosen to minimize the number of trajectories that “leak” from one set to the other under the influence of a stochastic flow field during a pre-specified interval in time. In practice, this partition is computed by solving an optimization problem to obtain a pair of functions whose signs determine set membership. From prior experience with synthetic, “data rich” test problems, and conceptually related methods based on approximations of the Perron-Frobenius operator, we observe that the functions of interest typically appear to be smooth. We exploit this property by using the basis sets associated with spectral or “mesh-free” methods, and as a result, our approach has the potential to more accurately approximate these functions given a fixed amount of data. In practice, this could enable better approximations of the coherent pairs in problems with relatively limited quantities of Lagrangian data, which is usually the case with experimental geophysical data. We apply this method to three examples of increasing complexity: The first is the double gyre, the second is the Bickley Jet, and the third is data from numerically simulated drifters in the Sulu Sea.

  5. The effect of diffuse basis functions on valence bond structural weights

    NASA Astrophysics Data System (ADS)

    Galbraith, John Morrison; James, Andrew M.; Nemes, Coleen T.

    2014-03-01

    Structural weights and bond dissociation energies have been determined for H-F, H-X, and F-X molecules (-X = -OH, -NH2, and -CH3) at the valence bond self-consistent field (VBSCF) and breathing orbital valence bond (BOVB) levels of theory with the aug-cc-pVDZ and 6-31++G(d,p) basis sets. At the BOVB level, the aug-cc-pVDZ basis set yields a counterintuitive ordering of ionic structural weights when the initial heavy atom s-type basis functions are included. For H-F, H-OH, and F-X, the ordering follows chemical intuition when these basis functions are not included. These counterintuitive weights are shown to be a result of the diffuse polarisation function on one VB fragment being spatially located, in part, on the other VB fragment. Except in the case of F-CH3, this problem is corrected with the 6-31++G(d,p) basis set. The initial heavy atom s-type functions are shown to make an important contribution to the VB orbitals and bond dissociation energies and, therefore, should not be excluded. It is recommended to not use diffuse basis sets in valence bond calculations unless absolutely necessary. If diffuse basis sets are needed, the 6-31++G(d,p) basis set should be used with caution and the structural weights checked against VBSCF values which have been shown to follow the expected ordering in all cases.

  6. Determination of real machine-tool settings and minimization of real surface deviation by computerized inspection

    NASA Technical Reports Server (NTRS)

    Litvin, Faydor L.; Kuan, Chihping; Zhang, YI

    1991-01-01

    A numerical method is developed for the minimization of deviations of real tooth surfaces from the theoretical ones. The deviations are caused by errors of manufacturing, errors of installment of machine-tool settings and distortion of surfaces by heat-treatment. The deviations are determined by coordinate measurements of gear tooth surfaces. The minimization of deviations is based on the proper correction of initially applied machine-tool settings. The contents of the accomplished research project cover the following topics: (1) Descriptions of the principle of coordinate measurements of gear tooth surfaces; (2) Deviation of theoretical tooth surfaces (with examples of surfaces of hypoid gears and references for spiral bevel gears); (3) Determination of the reference point and the grid; (4) Determination of the deviations of real tooth surfaces at the points of the grid; and (5) Determination of required corrections of machine-tool settings for minimization of deviations. The procedure for minimization of deviations is based on numerical solution of an overdetermined system of n linear equations in m unknowns (m ≪ n), where n is the number of points of measurements and m is the number of parameters of applied machine-tool settings to be corrected. The developed approach is illustrated with numerical examples.
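
The final correction step, a least-squares solution of an overdetermined system, can be sketched for the smallest nontrivial case of m = 2 setting corrections and n = 4 measured deviations; the matrix rows and deviation values below are invented for illustration:

```python
def least_squares_2(rows):
    """Least-squares solution of an overdetermined system with rows (a, b, c)
    representing a*x1 + b*x2 ~ c, via the normal equations and Cramer's rule."""
    s_aa = sum(a * a for a, b, c in rows)
    s_ab = sum(a * b for a, b, c in rows)
    s_bb = sum(b * b for a, b, c in rows)
    s_ac = sum(a * c for a, b, c in rows)
    s_bc = sum(b * c for a, b, c in rows)
    det = s_aa * s_bb - s_ab * s_ab
    x1 = (s_ac * s_bb - s_bc * s_ab) / det
    x2 = (s_aa * s_bc - s_ab * s_ac) / det
    return x1, x2

# Hypothetical measured deviations at n = 4 grid points, m = 2 corrections:
rows = [(1.0, 0.0, 0.50), (0.0, 1.0, -0.25), (1.0, 1.0, 0.24), (1.0, -1.0, 0.76)]
dx1, dx2 = least_squares_2(rows)
```

For larger m one would use a QR or SVD solver rather than explicit normal equations, which become ill-conditioned.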

  7. Sparsest representations and approximations of an underdetermined linear system

    NASA Astrophysics Data System (ADS)

    Tardivel, Patrick J. C.; Servien, Rémi; Concordet, Didier

    2018-05-01

    In an underdetermined linear system of equations, constrained l1 minimization methods such as the basis pursuit or the lasso are often used to recover one of the sparsest representations or approximations of the system. The null space property is a sufficient and ‘almost’ necessary condition to recover a sparsest representation with the basis pursuit. Unfortunately, this property cannot be easily checked. On the other hand, the mutual coherence is an easily checkable sufficient condition ensuring that the basis pursuit recovers one of the sparsest representations. Because the mutual coherence condition is too strong, it is hardly met in practice. Even if one of these conditions holds, to our knowledge, there is no theoretical result ensuring that the lasso solution is one of the sparsest approximations. In this article, we study a novel constrained problem that gives, without any condition, one of the sparsest representations or approximations. To solve this problem, we provide a numerical method and we prove its convergence. Numerical experiments show that this approach gives better results than both the basis pursuit problem and the reweighted l1 minimization problem.
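
A minimal illustration of the l1 idea, using iterative soft-thresholding (ISTA) for the lasso with a small penalty as a stand-in for the article's own condition-free formulation: on a 2×3 underdetermined system whose sparsest representation is (0, 0, 1), the l1-driven iteration recovers that support. All values are toy numbers:

```python
def soft(v, t):
    """Soft-thresholding, the proximal operator of the l1 norm."""
    return max(v - t, 0.0) + min(v + t, 0.0)

def ista(A, b, lam=0.01, step=0.3, iters=5000):
    """Iterative soft-thresholding for min 0.5*||Ax - b||^2 + lam*||x||_1;
    with a small lam this approximates a basis-pursuit (min-l1) solution.
    step must be below 1 / lambda_max(A^T A) for convergence."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        x = [soft(x[j] - step * g[j], step * lam) for j in range(n)]
    return x

# Underdetermined 2x3 system whose sparsest solution is (0, 0, 1):
a = 0.7071
A = [[1.0, 0.0, a], [0.0, 1.0, a]]
b = [a, a]
x = ista(A, b)
# The dense interpolant (a, a, 0) has l1 norm ~1.414; the sparse one has 1.
```
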

  8. Post Hoc Analysis of Data from Two Clinical Trials Evaluating the Minimal Clinically Important Change in International Restless Legs Syndrome Sum Score in Patients with Restless Legs Syndrome (Willis-Ekbom Disease)

    PubMed Central

    Ondo, William G.; Grieger, Frank; Moran, Kimberly; Kohnen, Ralf; Roth, Thomas

    2016-01-01

    Study Objectives: Determine the minimal clinically important change (MCIC), a measure determining the minimum change in scale score perceived as clinically beneficial, for the international restless legs syndrome (IRLS) and restless legs syndrome 6-item questionnaire (RLS-6) in patients with moderate to severe restless legs syndrome (RLS/Willis-Ekbom disease) treated with the rotigotine transdermal system. Methods: This post hoc analysis analyzed data from two 6-mo randomized, double-blind, placebo-controlled studies (SP790 [NCT00136045]; SP792 [NCT00135993]) individually and as a pooled analysis in rotigotine-treated patients, with baseline and end of maintenance IRLS and Clinical Global Impressions of change (CGI Item 2) scores available for analysis. An anchor-based approach and receiver operating characteristic (ROC) curves were used to determine the MCIC for the IRLS and RLS-6. We specifically compared “much improved vs minimally improved,” “much improved/very much improved vs minimally improved or worse,” and “minimally improved or better vs no change or worse” on the CGI-2 using the full analysis set (data as observed). Results: The MCIC IRLS cut-off scores for SP790 and SP792 were similar. Using the pooled SP790+SP792 analysis, the MCIC total IRLS cut-off score (sensitivity, specificity) for “much improved vs minimally improved” was −9 (0.69, 0.66), for “much improved/very much improved vs minimally improved or worse” was −11 (0.81, 0.84), and for “minimally improved or better vs no change or worse” was −9 (0.79, 0.88). MCIC ROC cut-offs were also calculated for each RLS-6 item. Conclusions: In patients with RLS, the MCIC values derived in the current analysis provide a basis for defining meaningful clinical improvement based on changes in the IRLS and RLS-6 following treatment with rotigotine. Citation: Ondo WG, Grieger F, Moran K, Kohnen R, Roth T. Post hoc analysis of data from two clinical trials evaluating the minimal clinically important change in international restless legs syndrome sum score in patients with restless legs syndrome (Willis-Ekbom Disease). J Clin Sleep Med 2016;12(1):63–70. PMID:26446245
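
The anchor-based ROC procedure can be sketched in a few lines: pick the change-score cut-off that best separates anchor-defined responders from non-responders. The scores below are invented for illustration and are not the trial data; a full analysis would also trace the entire ROC curve rather than only maximizing Youden's J:

```python
def sens_spec(scores_resp, scores_nonresp, cutoff):
    """Sensitivity/specificity of classifying 'responder' when the change
    score is <= cutoff (improvement is a negative change)."""
    tp = sum(s <= cutoff for s in scores_resp)
    tn = sum(s > cutoff for s in scores_nonresp)
    return tp / len(scores_resp), tn / len(scores_nonresp)

def best_cutoff(scores_resp, scores_nonresp):
    """Cut-off maximizing Youden's J = sensitivity + specificity - 1."""
    candidates = sorted(set(scores_resp) | set(scores_nonresp))
    return max(candidates,
               key=lambda c: sum(sens_spec(scores_resp, scores_nonresp, c)))

# Hypothetical IRLS change scores (negative = improvement):
responders = [-15, -12, -11, -10, -9, -8]
non_responders = [-6, -5, -3, 0, 2]
cut = best_cutoff(responders, non_responders)
```
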

  9. Classification of high-resolution multi-swath hyperspectral data using Landsat 8 surface reflectance data as a calibration target and a novel histogram based unsupervised classification technique to determine natural classes from biophysically relevant fit parameters

    NASA Astrophysics Data System (ADS)

    McCann, C.; Repasky, K. S.; Morin, M.; Lawrence, R. L.; Powell, S. L.

    2016-12-01

    Compact, cost-effective, flight-based hyperspectral imaging systems can provide scientifically relevant data over large areas for a variety of applications such as ecosystem studies, precision agriculture, and land management. To fully realize this capability, unsupervised classification techniques based on radiometrically-calibrated data that cluster based on biophysical similarity rather than simply spectral similarity are needed. An automated technique to produce high-resolution, large-area, radiometrically-calibrated hyperspectral data sets based on the Landsat surface reflectance data product as a calibration target was developed and applied to three subsequent years of data covering approximately 1850 hectares. The radiometrically-calibrated data allows inter-comparison of the temporal series. Advantages of the radiometric calibration technique include the need for minimal site access, no ancillary instrumentation, and automated processing. Fitting the reflectance spectra of each pixel using a set of biophysically relevant basis functions reduces the data from 80 spectral bands to 9 parameters, providing noise reduction and data compression. Examination of histograms of these parameters allows for determination of natural splitting into biophysically similar clusters. This method creates clusters that are similar in terms of biophysical parameters, not simply spectral proximity. Furthermore, this method can be applied to other data sets, such as urban scenes, by developing other physically meaningful basis functions. The ability to use hyperspectral imaging for a variety of important applications requires the development of data processing techniques that can be automated. The radiometric calibration combined with the histogram-based unsupervised classification technique presented here provides one potential avenue for managing the big data associated with hyperspectral imaging.
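
The "natural splitting" step can be illustrated with a toy 1-D version: histogram one biophysical fit parameter and split at the deepest interior valley. The values, bin count, and helper names below are invented; the technique above works with nine fit parameters rather than one:

```python
def histogram(values, lo, hi, nbins):
    """Fixed-width histogram counts over [lo, hi]."""
    counts = [0] * nbins
    width = (hi - lo) / nbins
    for v in values:
        i = min(int((v - lo) / width), nbins - 1)
        counts[i] += 1
    return counts

def valley_split(values, lo, hi, nbins=10):
    """Split a 1-D fit-parameter distribution at the deepest interior
    histogram valley, yielding two 'natural' clusters."""
    counts = histogram(values, lo, hi, nbins)
    width = (hi - lo) / nbins
    valley = min(range(1, nbins - 1), key=lambda i: counts[i])
    threshold = lo + (valley + 0.5) * width
    return ([v for v in values if v <= threshold],
            [v for v in values if v > threshold])

# Hypothetical fit-parameter values forming two well-separated modes:
vals = [0.1, 0.12, 0.15, 0.2, 0.18, 0.8, 0.82, 0.85, 0.9]
low, high = valley_split(vals, 0.0, 1.0)
```

The split depends on biophysically meaningful parameters, not raw spectral distance, which is the point the abstract makes.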

  10. The Minkowski sum of a zonotope and the Voronoi polytope of the root lattice E₇

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grishukhin, Vyacheslav P

    2012-11-30

    We show that the Minkowski sum P_V(E₇)+Z(U) of the Voronoi polytope P_V(E₇) of the root lattice E₇ and the zonotope Z(U) is a 7-dimensional parallelohedron if and only if the set U consists of minimal vectors of the dual lattice E₇* up to scalar multiplication, and U does not contain forbidden sets. The minimal vectors of E₇ are the vectors r of the classical root system E₇. If the r²-norm of the roots is set equal to 2, then the scalar products of minimal vectors from the dual lattice only take the values ±1/2. A set of minimal vectors is referred to as forbidden if it consists of six vectors, and the directions of some of these vectors can be changed so as to obtain a set of six vectors with all the pairwise scalar products equal to 1/2. Bibliography: 11 titles.

  11. Field theory of hyperfluid

    NASA Astrophysics Data System (ADS)

    Ariki, Taketo

    2018-02-01

    A hyperfluid model is constructed on the basis of its action entirely free from external constraints, regarding the hyperfluid as a self-consistent classical field. Intrinsic hypermomentum is no longer a supplemental variable given by external constraints, but arises purely from the diffeomorphism covariance of the dynamical field. The field-theoretic approach allows natural classification of a hyperfluid on the basis of its symmetry group and corresponding homogeneous space; scalar, spinor, vector, and tensor fluids are introduced as simple examples. Apart from phenomenological constraints, the theory predicts the hypermomentum exchange of fluid via field-theoretic interactions of various classes; fluid–fluid interactions, minimal and non-minimal SU(n) gauge couplings, and coupling with metric-affine gravity are all successfully formulated within the classical regime.

  12. On the optimization of Gaussian basis sets

    NASA Astrophysics Data System (ADS)

    Petersson, George A.; Zhong, Shijun; Montgomery, John A.; Frisch, Michael J.

    2003-01-01

    A new procedure for the optimization of the exponents, α_j, of Gaussian basis functions, Y_lm(ϑ,φ) r^l exp(−α_j r²), is proposed and evaluated. The direct optimization of the exponents is hindered by the very strong coupling between these nonlinear variational parameters. However, expansion of the logarithms of the exponents in the orthonormal Legendre polynomials, P_k, of the index, j: ln α_j = Σ_{k=0}^{kmax} A_k P_k((2j−2)/(N_prim−1) − 1), yields a new set of well-conditioned parameters, A_k, and a complete sequence of well-conditioned exponent optimizations proceeding from the even-tempered basis set (kmax = 1) to a fully optimized basis set (kmax = N_prim − 1). The error relative to the exact numerical self-consistent field limit for a six-term expansion is consistently no more than 25% larger than the error for the completely optimized basis set. Thus, there is no need to optimize more than six well-conditioned variational parameters, even for the largest sets of Gaussian primitives.
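
The expansion can be sketched directly: given coefficients A_k, generate the exponent ladder and confirm that the kmax = 1 truncation is even-tempered (constant ratio between successive exponents). The coefficient values below are arbitrary illustrations, not optimized parameters:

```python
import math

def legendre(k, x):
    """Legendre polynomial P_k(x) via the Bonnet recurrence."""
    p0, p1 = 1.0, x
    if k == 0:
        return p0
    for n in range(1, k):
        p0, p1 = p1, ((2 * n + 1) * x * p1 - n * p0) / (n + 1)
    return p1

def exponents(coeffs, nprim):
    """Gaussian exponents from ln(alpha_j) = sum_k A_k * P_k(x_j),
    with x_j = (2j - 2)/(nprim - 1) - 1 for j = 1..nprim."""
    alphas = []
    for j in range(1, nprim + 1):
        x = (2 * j - 2) / (nprim - 1) - 1
        alphas.append(math.exp(sum(c * legendre(k, x)
                                   for k, c in enumerate(coeffs))))
    return alphas

# kmax = 1 (two coefficients) reproduces an even-tempered geometric ladder:
a = exponents([2.0, -3.0], 6)
ratios = [a[i + 1] / a[i] for i in range(5)]
```

Adding higher A_k terms then deforms the ladder smoothly away from the even-tempered starting point, which is the well-conditioned sequence the abstract describes.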

  13. Regeneration of near-wall turbulence structures

    NASA Technical Reports Server (NTRS)

    Hamilton, James M.; Kim, John J.; Waleffe, Fabian A.

    1993-01-01

    An examination of the regeneration mechanisms of near-wall turbulence and an attempt to investigate the critical Reynolds number conjecture of Waleffe & Kim are presented. The basis is an extension of the 'minimal channel' approach of Jimenez and Moin which emphasizes the near-wall region and further reduces the complexity of the turbulent flow. Reduction of the flow Reynolds number to the minimum value which will allow turbulence to be sustained has the effect of reducing the ratio of the largest scales to the smallest scales or, equivalently, of causing the near-wall region to fill more of the area between the channel walls. In addition, since each wall may have an active near-wall region, half of the channel is always somewhat redundant. If a plane Couette flow is instead chosen as the base flow, this redundancy is eliminated: the mean shear of a plane Couette flow has a single sign, and at low Reynolds numbers, the two wall regions share a single set of structures. A minimal flow with these modifications possesses, by construction, the strongest constraints which allow sustained turbulence, producing a greatly simplified flow in which the regeneration process can be examined.

  14. A supercritical airfoil experiment

    NASA Technical Reports Server (NTRS)

    Mateer, G. G.; Seegmiller, H. L.; Hand, L. A.; Szodruck, J.

    1994-01-01

    The purpose of this investigation is to provide a comprehensive data base for the validation of numerical simulations. The objective of the present paper is to provide a tabulation of the experimental data. The data were obtained in the two-dimensional, transonic flowfield surrounding a supercritical airfoil. A variety of flows were studied in which the boundary layer at the trailing edge of the model was either attached or separated. Unsteady flows were avoided by controlling the Mach number and angle of attack. Surface pressures were measured on both the model and wind tunnel walls, and the flowfield surrounding the model was documented using a laser Doppler velocimeter (LDV). Although wall interference could not be completely eliminated, its effect was minimized by employing the following techniques. Sidewall boundary layers were reduced by aspiration, and upper and lower walls were contoured to accommodate the flow around the model and the boundary-layer growth on the tunnel walls. A data base with minimal interference from a tunnel with solid walls provides an ideal basis for evaluating the development of codes for the transonic speed range because the codes can include the wall boundary conditions more precisely than interference corrections can be made to the data sets.

  15. Specialized minimal PDFs for optimized LHC calculations.

    PubMed

    Carrazza, Stefano; Forte, Stefano; Kassabov, Zahari; Rojo, Juan

    2016-01-01

    We present a methodology for the construction of parton distribution functions (PDFs) designed to provide an accurate representation of PDF uncertainties for specific processes or classes of processes with a minimal number of PDF error sets: specialized minimal PDF sets, or SM-PDFs. We construct these SM-PDFs in such a way that sets corresponding to different input processes can be combined without losing information, specifically as regards their correlations, and that they are robust upon smooth variations of the kinematic cuts. The proposed strategy never discards information, so that the SM-PDF sets can be enlarged by the addition of new processes, until the prior PDF set is eventually recovered for a large enough set of processes. We illustrate the method by producing SM-PDFs tailored to Higgs, top-quark pair, and electroweak gauge boson physics, and we determine that, when the PDF4LHC15 combined set is used as the prior, around 11, 4, and 11 Hessian eigenvectors, respectively, are enough to fully describe the corresponding processes.

  16. Accurate and balanced anisotropic Gaussian type orbital basis sets for atoms in strong magnetic fields.

    PubMed

    Zhu, Wuming; Trickey, S B

    2017-12-28

    In high magnetic field calculations, anisotropic Gaussian type orbital (AGTO) basis functions are capable of reconciling the competing demands of the spherically symmetric Coulombic interaction and cylindrical magnetic (B field) confinement. However, the best available a priori procedure for composing highly accurate AGTO sets for atoms in a strong B field [W. Zhu et al., Phys. Rev. A 90, 022504 (2014)] yields very large basis sets. Their size is problematic for use in any calculation with unfavorable computational cost scaling. Here we provide an alternative constructive procedure. It is based upon analysis of the underlying physics of atoms in B fields that allows identification of several principles for the construction of AGTO basis sets. Aided by numerical optimization and parameter fitting, followed by fine tuning of fitting parameters, we devise formulae for generating accurate AGTO basis sets in an arbitrary B field. For the hydrogen iso-electronic sequence, a set depends on B field strength, nuclear charge, and orbital quantum numbers. For multi-electron systems, the basis set formulae also include adjustment to account for orbital occupations. Tests of the new basis sets for atoms H through C (1 ≤ Z ≤ 6) and ions Li+, Be+, and B+, in a wide B field range (0 ≤ B ≤ 2000 a.u.), show an accuracy better than a few μhartree for single-electron systems and from a few hundredths of a millihartree to a few millihartree for multi-electron atoms. The relative errors are similar for different atoms and ions in a large B field range, from a few to a couple of tens of millionths, thereby confirming rather uniform accuracy across the nuclear charge Z and B field strength values. Residual basis set errors are two to three orders of magnitude smaller than the electronic correlation energies in multi-electron atoms, a signal of the usefulness of the new AGTO basis sets in correlated wavefunction or density functional calculations for atomic and molecular systems in an external strong B field.

  17. Accurate and balanced anisotropic Gaussian type orbital basis sets for atoms in strong magnetic fields

    NASA Astrophysics Data System (ADS)

    Zhu, Wuming; Trickey, S. B.

    2017-12-01

    In high magnetic field calculations, anisotropic Gaussian type orbital (AGTO) basis functions are capable of reconciling the competing demands of the spherically symmetric Coulombic interaction and cylindrical magnetic (B field) confinement. However, the best available a priori procedure for composing highly accurate AGTO sets for atoms in a strong B field [W. Zhu et al., Phys. Rev. A 90, 022504 (2014)] yields very large basis sets. Their size is problematic for use in any calculation with unfavorable computational cost scaling. Here we provide an alternative constructive procedure. It is based upon analysis of the underlying physics of atoms in B fields that allows identification of several principles for the construction of AGTO basis sets. Aided by numerical optimization and parameter fitting, followed by fine tuning of fitting parameters, we devise formulae for generating accurate AGTO basis sets in an arbitrary B field. For the hydrogen iso-electronic sequence, a set depends on B field strength, nuclear charge, and orbital quantum numbers. For multi-electron systems, the basis set formulae also include adjustment to account for orbital occupations. Tests of the new basis sets for atoms H through C (1 ≤ Z ≤ 6) and ions Li+, Be+, and B+, in a wide B field range (0 ≤ B ≤ 2000 a.u.), show an accuracy better than a few μhartree for single-electron systems and from a few hundredths of a millihartree to a few millihartree for multi-electron atoms. The relative errors are similar for different atoms and ions in a large B field range, from a few to a couple of tens of millionths, thereby confirming rather uniform accuracy across the nuclear charge Z and B field strength values. Residual basis set errors are two to three orders of magnitude smaller than the electronic correlation energies in multi-electron atoms, a signal of the usefulness of the new AGTO basis sets in correlated wavefunction or density functional calculations for atomic and molecular systems in an external strong B field.

  18. The application of midbond basis sets in efficient and accurate ab initio calculations on electron-deficient systems

    NASA Astrophysics Data System (ADS)

    Choi, Chu Hwan

    2002-09-01

    Ab initio chemistry has shown great promise in reproducing experimental results and in its predictive power. The many complicated computational models and methods seem impenetrable to an inexperienced scientist, and the reliability of the results is not easily interpreted. The application of midbond orbitals is used to determine a general method for calculating weak intermolecular interactions, especially those involving electron-deficient systems. Using the criteria of consistency, flexibility, accuracy, and efficiency, we propose a supermolecular method of calculation using the full counterpoise (CP) method of Boys and Bernardi, coupled with Møller-Plesset (MP) perturbation theory as an efficient electron-correlation method. We also advocate the use of the highly efficient and reliable correlation-consistent polarized valence basis sets of Dunning. To these basis sets we add a general set of midbond orbitals and demonstrate greatly enhanced efficiency in the calculation. The H2-H2 dimer is taken as a benchmark test case for our method, and details of the computation are elaborated. Our method reproduces with great accuracy the dissociation energies of previous theoretical studies. The efficiency of extending the basis sets by conventional means is compared with the performance of our midbond-extended basis sets; the improvement found with midbond functions is notably superior in every case tested. Finally, a novel application of midbond functions to the BH5 complex is presented. The system is an unusual van der Waals complex. The interaction potential curves are presented for several standard basis sets and midbond-enhanced basis sets, as well as for two popular alternative correlation methods. We report that MP theory appears to be superior to coupled-cluster (CC) theory in speed, while it is more stable than B3LYP, a widely used density functional theory (DFT) method. Application of our general method yields excellent results for the midbond basis sets. Again they prove superior to conventional extended basis sets. Based on these results, we recommend our general approach as a highly efficient, accurate method for calculating weakly interacting systems.
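
    The full counterpoise (CP) scheme of Boys and Bernardi mentioned above is plain arithmetic once the monomer and dimer energies are in hand. A minimal sketch follows; the function names and any energy values supplied to them are illustrative, not numbers from this work.

```python
def cp_interaction_energy(e_dimer, e_a_in_dimer_basis, e_b_in_dimer_basis):
    """Full counterpoise interaction energy (Boys-Bernardi): every term is
    evaluated in the complete dimer basis, so the spurious stabilization
    each monomer gains from its partner's basis functions cancels."""
    return e_dimer - e_a_in_dimer_basis - e_b_in_dimer_basis

def bsse_correction(e_a_in_dimer_basis, e_a_own_basis,
                    e_b_in_dimer_basis, e_b_own_basis):
    """Basis set superposition error (BSSE): the artificial lowering of
    each monomer energy caused by borrowing the partner's basis functions."""
    return ((e_a_in_dimer_basis - e_a_own_basis)
            + (e_b_in_dimer_basis - e_b_own_basis))
```

    The uncorrected interaction energy (monomers in their own bases) overshoots binding by exactly the BSSE relative to the CP value.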

  19. Rank-order-selective neurons form a temporal basis set for the generation of motor sequences.

    PubMed

    Salinas, Emilio

    2009-04-08

    Many behaviors are composed of a series of elementary motor actions that must occur in a specific order, but the neuronal mechanisms by which such motor sequences are generated are poorly understood. In particular, if a sequence consists of a few motor actions, a primate can learn to replicate it from memory after practicing it for just a few trials. How do the motor and premotor areas of the brain assemble motor sequences so fast? The network model presented here reveals part of the solution to this problem. The model is based on experiments showing that, during the performance of motor sequences, some cortical neurons are always activated at specific times, regardless of which motor action is being executed. In the model, a population of such rank-order-selective (ROS) cells drives a layer of downstream motor neurons so that these generate specific movements at different times in different sequences. A key ingredient of the model is that the amplitude of the ROS responses must be modulated by sequence identity. Because of this modulation, which is consistent with experimental reports, the network is able not only to produce multiple sequences accurately but also to learn a new sequence with minimal changes in connectivity. The ROS neurons modulated by sequence identity thus serve as a basis set for constructing arbitrary sequences of motor responses downstream. The underlying mechanism is analogous to the mechanism described in parietal areas for generating coordinate transformations in the spatial domain.
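
    The mechanism described above, a fixed temporal basis of rank-tuned responses whose amplitude is gated by sequence identity and read out by weighted sums downstream, can be sketched compactly. The Gaussian tuning curves and all numbers below are illustrative assumptions, not the paper's fitted model.

```python
import numpy as np

def ros_responses(t, rank_times, width, sequence_gain):
    """Rank-order-selective (ROS) cells: each fires around a fixed rank
    (time) in every sequence, while sequence_gain modulates the response
    amplitude according to sequence identity."""
    bumps = np.exp(-((t - rank_times[:, None]) ** 2) / (2 * width ** 2))
    return sequence_gain * bumps  # shape: (n_cells, n_timepoints)

def motor_output(readout_weights, responses):
    """Downstream motor neurons: weighted sums over the temporal basis,
    so different weight vectors yield different movement sequences."""
    return readout_weights @ responses
```

    Learning a new sequence then only requires adjusting the readout weights (and gains), not rewiring the temporal basis itself, which is why new sequences can be acquired with minimal connectivity changes.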

  20. RANK-ORDER-SELECTIVE NEURONS FORM A TEMPORAL BASIS SET FOR THE GENERATION OF MOTOR SEQUENCES

    PubMed Central

    Salinas, Emilio

    2009-01-01

    Many behaviors are composed of a series of elementary motor actions that must occur in a specific order, but the neuronal mechanisms by which such motor sequences are generated are poorly understood. In particular, if a sequence consists of a few motor actions, a primate can learn to replicate it from memory after practicing it for just a few trials. How do the motor and premotor areas of the brain assemble motor sequences so fast? The network model presented here reveals part of the solution to this problem. The model is based on experiments showing that, during the performance of motor sequences, some cortical neurons are always activated at specific times, regardless of which motor action is being executed. In the model, a population of such rank-order-selective (ROS) cells drives a layer of downstream motor neurons so that these generate specific movements at different times in different sequences. A key ingredient of the model is that the amplitude of the ROS responses must be modulated by sequence identity. Because of this modulation, which is consistent with experimental reports, the network is able not only to produce multiple sequences accurately but also to learn a new sequence with minimal changes in connectivity. The ROS neurons modulated by sequence identity thus serve as a basis set for constructing arbitrary sequences of motor responses downstream. The underlying mechanism is analogous to the mechanism described in parietal areas for generating coordinate transformations in the spatial domain. PMID:19357265

  1. Time-oriented hierarchical method for computation of principal components using subspace learning algorithm.

    PubMed

    Jankovic, Marko; Ogawa, Hidemitsu

    2004-10-01

    Principal Component Analysis (PCA) and Principal Subspace Analysis (PSA) are classic techniques in statistical data analysis, feature extraction and data compression. Given a set of multivariate measurements, PCA and PSA provide a smaller set of "basis vectors" with less redundancy, and a subspace spanned by them, respectively. Artificial neurons and neural networks have been shown to perform PSA and PCA when gradient ascent (descent) learning rules are used, which is related to the constrained maximization (minimization) of statistical objective functions. Due to their low complexity, such algorithms and their implementation in neural networks are potentially useful in cases of tracking slow changes of correlations in the input data or in updating eigenvectors with new samples. In this paper we propose a PCA learning algorithm that is fully homogeneous with respect to neurons. The algorithm is obtained by modifying one of the most famous PSA learning algorithms, the Subspace Learning Algorithm (SLA). The modification is based on the Time-Oriented Hierarchical Method (TOHM), which uses two distinct time scales. On the faster time scale, a PSA algorithm is responsible for the "behavior" of all output neurons. On the slower scale, output neurons compete for fulfillment of their "own interests": basis vectors in the principal subspace are rotated toward the principal eigenvectors. At the end of the paper we briefly analyze how (and why) the time-oriented hierarchical method can be used to transform any existing neural network PSA method into a PCA method.
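
    The SLA update at the core of the method is Oja's subspace rule. A minimal NumPy sketch of the fast time scale follows; the slower TOHM rotation toward individual eigenvectors is not reproduced here, and the demo data are invented.

```python
import numpy as np

def sla_step(W, x, eta):
    """One Subspace Learning Algorithm (SLA) update:
    y = W^T x;  W <- W + eta * (x - W y) y^T.
    The columns of W converge to an orthonormal basis of the principal
    subspace, but not, in general, to the individual eigenvectors -- that
    is the gap the paper's slower TOHM time scale closes."""
    y = W.T @ x
    return W + eta * np.outer(x - W @ y, y)

# Demo: 3D data whose principal subspace is (approximately) the x-y plane.
rng = np.random.default_rng(0)
W = np.linalg.qr(rng.standard_normal((3, 2)))[0]  # random orthonormal start
for _ in range(3000):
    x = rng.standard_normal(3) * np.array([1.0, 1.0, 0.1])
    W = sla_step(W, x, eta=0.05)
```

    After training, the low-variance third coordinate contributes little to the learned basis, i.e. the columns of W lie mostly in the dominant plane.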

  2. Development of a highly maneuverable unmanned underwater vehicle on the basis of quad-copter dynamics

    NASA Astrophysics Data System (ADS)

    Amin, Osman Md; Karim, Md. Arshadul; Saad, Abdullah His

    2017-12-01

    At present, research on unmanned underwater vehicles (UUVs) has become a significant and familiar topic for researchers from various engineering fields. UUVs are mainly of two types: AUVs (autonomous underwater vehicles) and ROVs (remotely operated vehicles). A significant number of research papers on UUVs have been published, but very few emphasize ease of maneuvering and control. Maneuvering is important for an underwater vehicle in avoiding obstacles, installing underwater piping systems, searching for undersea resources, underwater mine disposal operations, oceanographic surveys, etc. A team from the Dept. of Naval Architecture & Marine Engineering of MIST has undertaken a project to design a highly maneuverable unmanned underwater vehicle on the basis of quad-copter dynamics. The main objective of the research is to develop a control system for a UUV able to maneuver the vehicle in six degrees of freedom (DOF) with great ease. To this end, we focus not only on controllability but also on designing an efficient hull with minimal drag and an optimized propeller using CFD techniques. Motors were selected on the basis of the simulated thrust generated by the propellers in the ANSYS Fluent software module. Settings of the control parameters for carrying out different types of maneuvering, such as hovering, spiral motion, one-point rotation about the centroid, gliding, rolling, drifting, and zigzag motions, are briefly explained at the end.

  3. Basis set construction for molecular electronic structure theory: natural orbital and Gauss-Slater basis for smooth pseudopotentials.

    PubMed

    Petruzielo, F R; Toulouse, Julien; Umrigar, C J

    2011-02-14

    A simple yet general method for constructing basis sets for molecular electronic structure calculations is presented. These basis sets consist of atomic natural orbitals from a multiconfigurational self-consistent field calculation supplemented with primitive functions, chosen such that the asymptotics are appropriate for the potential of the system. Primitives are optimized for the homonuclear diatomic molecule to produce a balanced basis set. Two general features that facilitate this basis construction are demonstrated. First, weak coupling exists between the optimal exponents of primitives with different angular momenta. Second, the optimal primitive exponents for a chosen system depend weakly on the particular level of theory employed for optimization. The explicit case considered here is a basis set appropriate for the Burkatzki-Filippi-Dolg pseudopotentials. Since these pseudopotentials are finite at nuclei and have a Coulomb tail, the recently proposed Gauss-Slater functions are the appropriate primitives. Double- and triple-zeta bases are developed for elements hydrogen through argon. These new bases offer significant gains over the corresponding Burkatzki-Filippi-Dolg bases at various levels of theory. Using a Gaussian expansion of the basis functions, these bases can be employed in any electronic structure method. Quantum Monte Carlo provides an added benefit: expansions are unnecessary since the integrals are evaluated numerically.

  4. Auxiliary basis sets for density-fitting second-order Møller-Plesset perturbation theory: weighted core-valence correlation consistent basis sets for the 4d elements Y-Pd.

    PubMed

    Hill, J Grant

    2013-09-30

    Auxiliary basis sets (ABS) specifically matched to the cc-pwCVnZ-PP and aug-cc-pwCVnZ-PP orbital basis sets (OBS) have been developed and optimized for the 4d elements Y-Pd at the second-order Møller-Plesset perturbation theory level. Calculation of the core-valence electron correlation energies for small to medium sized transition metal complexes demonstrates that the error due to the use of these new sets in density fitting is three to four orders of magnitude smaller than that due to the OBS incompleteness, and hence is considered negligible. Utilizing the ABSs in the resolution-of-the-identity component of explicitly correlated calculations is also investigated, where it is shown that i-type functions are important to produce well-controlled errors in both integrals and correlation energy. Benchmarking at the explicitly correlated coupled cluster with single, double, and perturbative triple excitations level indicates impressive convergence with respect to basis set size for the spectroscopic constants of 4d monofluorides; explicitly correlated double-ζ calculations produce results close to conventional quadruple-ζ, and triple-ζ is within chemical accuracy of the complete basis set limit. Copyright © 2013 Wiley Periodicals, Inc.
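
    Convergence toward the complete basis set (CBS) limit of the kind benchmarked above is commonly quantified with a two-point X^-3 extrapolation of the correlation energy (Helgaker-style). That particular scheme is an assumption for illustration; the abstract does not name one.

```python
def cbs_two_point(e_large, x_large, e_small, x_small):
    """Two-point CBS extrapolation assuming E(X) = E_CBS + A / X**3,
    where X is the basis-set cardinal number (3 for triple-zeta,
    4 for quadruple-zeta, ...)."""
    x3, y3 = x_large ** 3, x_small ** 3
    return (x3 * e_large - y3 * e_small) / (x3 - y3)
```

    For energies that follow the assumed 1/X^3 form exactly, the formula recovers the CBS limit exactly; in practice it removes the leading basis-set incompleteness error of the correlation energy.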

  5. Adaptive local basis set for Kohn–Sham density functional theory in a discontinuous Galerkin framework II: Force, vibration, and molecular dynamics calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Gaigong; Lin, Lin, E-mail: linlin@math.berkeley.edu; Computational Research Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720

    Recently, we have proposed the adaptive local basis set for electronic structure calculations based on Kohn–Sham density functional theory in a pseudopotential framework. The adaptive local basis set is efficient and systematically improvable for total energy calculations. In this paper, we present the calculation of atomic forces, which can be used for a range of applications such as geometry optimization and molecular dynamics simulation. We demonstrate that, under mild assumptions, the computation of atomic forces can scale nearly linearly with the number of atoms in the system using the adaptive local basis set. We quantify the accuracy of the Hellmann–Feynman forces for a range of physical systems, benchmarked against converged planewave calculations, and find that the adaptive local basis set is efficient for both force and energy calculations, requiring at most a few tens of basis functions per atom to attain accuracies required in practice. Since the adaptive local basis set has implicit dependence on atomic positions, Pulay forces are in general nonzero. However, we find that the Pulay force is numerically small and systematically decreasing with increasing basis completeness, so that the Hellmann–Feynman force is sufficient for basis sizes of a few tens of basis functions per atom. We verify the accuracy of the computed forces in static calculations of quasi-1D and 3D disordered Si systems, vibration calculation of a quasi-1D Si system, and molecular dynamics calculations of H2 and liquid Al–Si alloy systems, where we show systematic convergence to benchmark planewave results and results from the literature.

  6. Adaptive local basis set for Kohn–Sham density functional theory in a discontinuous Galerkin framework II: Force, vibration, and molecular dynamics calculations

    DOE PAGES

    Zhang, Gaigong; Lin, Lin; Hu, Wei; ...

    2017-01-27

    Recently, we have proposed the adaptive local basis set for electronic structure calculations based on Kohn–Sham density functional theory in a pseudopotential framework. The adaptive local basis set is efficient and systematically improvable for total energy calculations. In this paper, we present the calculation of atomic forces, which can be used for a range of applications such as geometry optimization and molecular dynamics simulation. We demonstrate that, under mild assumptions, the computation of atomic forces can scale nearly linearly with the number of atoms in the system using the adaptive local basis set. We quantify the accuracy of the Hellmann–Feynman forces for a range of physical systems, benchmarked against converged planewave calculations, and find that the adaptive local basis set is efficient for both force and energy calculations, requiring at most a few tens of basis functions per atom to attain accuracies required in practice. Since the adaptive local basis set has implicit dependence on atomic positions, Pulay forces are in general nonzero. However, we find that the Pulay force is numerically small and systematically decreasing with increasing basis completeness, so that the Hellmann–Feynman force is sufficient for basis sizes of a few tens of basis functions per atom. We verify the accuracy of the computed forces in static calculations of quasi-1D and 3D disordered Si systems, vibration calculation of a quasi-1D Si system, and molecular dynamics calculations of H2 and liquid Al–Si alloy systems, where we show systematic convergence to benchmark planewave results and results from the literature.

  7. Adaptive local basis set for Kohn–Sham density functional theory in a discontinuous Galerkin framework II: Force, vibration, and molecular dynamics calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Gaigong; Lin, Lin; Hu, Wei

    Recently, we have proposed the adaptive local basis set for electronic structure calculations based on Kohn–Sham density functional theory in a pseudopotential framework. The adaptive local basis set is efficient and systematically improvable for total energy calculations. In this paper, we present the calculation of atomic forces, which can be used for a range of applications such as geometry optimization and molecular dynamics simulation. We demonstrate that, under mild assumptions, the computation of atomic forces can scale nearly linearly with the number of atoms in the system using the adaptive local basis set. We quantify the accuracy of the Hellmann–Feynman forces for a range of physical systems, benchmarked against converged planewave calculations, and find that the adaptive local basis set is efficient for both force and energy calculations, requiring at most a few tens of basis functions per atom to attain accuracies required in practice. Since the adaptive local basis set has implicit dependence on atomic positions, Pulay forces are in general nonzero. However, we find that the Pulay force is numerically small and systematically decreasing with increasing basis completeness, so that the Hellmann–Feynman force is sufficient for basis sizes of a few tens of basis functions per atom. We verify the accuracy of the computed forces in static calculations of quasi-1D and 3D disordered Si systems, vibration calculation of a quasi-1D Si system, and molecular dynamics calculations of H2 and liquid Al–Si alloy systems, where we show systematic convergence to benchmark planewave results and results from the literature.

  8. Adaptive local basis set for Kohn-Sham density functional theory in a discontinuous Galerkin framework II: Force, vibration, and molecular dynamics calculations

    NASA Astrophysics Data System (ADS)

    Zhang, Gaigong; Lin, Lin; Hu, Wei; Yang, Chao; Pask, John E.

    2017-04-01

    Recently, we have proposed the adaptive local basis set for electronic structure calculations based on Kohn-Sham density functional theory in a pseudopotential framework. The adaptive local basis set is efficient and systematically improvable for total energy calculations. In this paper, we present the calculation of atomic forces, which can be used for a range of applications such as geometry optimization and molecular dynamics simulation. We demonstrate that, under mild assumptions, the computation of atomic forces can scale nearly linearly with the number of atoms in the system using the adaptive local basis set. We quantify the accuracy of the Hellmann-Feynman forces for a range of physical systems, benchmarked against converged planewave calculations, and find that the adaptive local basis set is efficient for both force and energy calculations, requiring at most a few tens of basis functions per atom to attain accuracies required in practice. Since the adaptive local basis set has implicit dependence on atomic positions, Pulay forces are in general nonzero. However, we find that the Pulay force is numerically small and systematically decreasing with increasing basis completeness, so that the Hellmann-Feynman force is sufficient for basis sizes of a few tens of basis functions per atom. We verify the accuracy of the computed forces in static calculations of quasi-1D and 3D disordered Si systems, vibration calculation of a quasi-1D Si system, and molecular dynamics calculations of H2 and liquid Al-Si alloy systems, where we show systematic convergence to benchmark planewave results and results from the literature.

  9. Denjoy minimal sets and Birkhoff periodic orbits for non-exact monotone twist maps

    NASA Astrophysics Data System (ADS)

    Qin, Wen-Xin; Wang, Ya-Nan

    2018-06-01

    A non-exact monotone twist map φ̄_F is a composition of an exact monotone twist map φ̄ with a generating function H and a vertical translation V_F with V_F(x, y) = (x, y − F). We show in this paper that for each ω ∈ ℝ, there exists a critical value F_d(ω) ≥ 0 depending on H and ω such that for 0 ≤ F ≤ F_d(ω), the non-exact twist map φ̄_F has an invariant Denjoy minimal set with irrational rotation number ω lying on a Lipschitz graph, or Birkhoff (p, q)-periodic orbits for rational ω = p/q. As in Aubry-Mather theory, we also construct heteroclinic orbits connecting Birkhoff periodic orbits, and show that quasi-periodic orbits in these Denjoy minimal sets can be approximated by periodic orbits. In particular, we demonstrate that at the critical value F = F_d(ω), the Denjoy minimal set is not uniformly hyperbolic and can be approximated by smooth curves.
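
    For concreteness, a non-exact twist map can be iterated numerically by composing an exact monotone twist map with the vertical translation V_F. The sketch below uses the Chirikov standard map as a stand-in for a map generated by a general H; that choice, and all parameter values, are assumptions for illustration only.

```python
import math

def nonexact_twist_step(x, y, K, F):
    """phi_F = V_F o phi: one step of the Chirikov standard map
    (an exact monotone twist map) followed by V_F(x, y) = (x, y - F)."""
    y_new = y + K * math.sin(x) - F
    return x + y_new, y_new

def rotation_number(x0, y0, K, F, n=10000):
    """Average advance of x per iterate; on an invariant Denjoy minimal
    set or Birkhoff periodic orbit this approximates the rotation
    number omega."""
    x, y = x0, y0
    for _ in range(n):
        x, y = nonexact_twist_step(x, y, K, F)
    return (x - x0) / n
```

    For K = F = 0 the map is a pure shear, so the rotation number equals the initial y; for F > 0, orbits that stay bounded exist only up to a critical drift, in the spirit of the F_d(ω) threshold above.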

  10. Benchmark of Ab Initio Bethe-Salpeter Equation Approach with Numeric Atom-Centered Orbitals

    NASA Astrophysics Data System (ADS)

    Liu, Chi; Kloppenburg, Jan; Kanai, Yosuke; Blum, Volker

    The Bethe-Salpeter equation (BSE) approach based on the GW approximation has been shown to be successful for predicting the optical spectra of solids and, recently, also of small molecules. We here present an all-electron implementation of the BSE using numeric atom-centered orbital (NAO) basis sets. In this work, we benchmark the BSE implementation in FHI-aims on the low-lying excitation energies of a set of small organic molecules, the well-known Thiel set. The difference between our implementation (using an analytic continuation of the GW self-energy on the real axis) and results generated by a fully frequency-dependent GW treatment on the real axis is on the order of 0.07 eV for the benchmark molecular set. We study the convergence of the excitation spectra toward the complete basis set limit, using a group of valence correlation consistent NAO basis sets (NAO-VCC-nZ) as well as standard NAO basis sets for ground-state DFT with extended augmentation functions (NAO+aug). The BSE results and convergence behavior are compared to linear-response time-dependent DFT, where excellent numerical convergence is shown for NAO+aug basis sets.

  11. Kinetic balance and variational bounds failure in the solution of the Dirac equation in a finite Gaussian basis set

    NASA Technical Reports Server (NTRS)

    Dyall, Kenneth G.; Faegri, Knut, Jr.

    1990-01-01

    The paper investigates bounds failure in calculations using Gaussian basis sets for the solution of the one-electron Dirac equation for the 2p1/2 state of Hg(79+). It is shown that bounds failure indicates inadequacies in the basis set, both in terms of the exponent range and the number of functions. It is also shown that overrepresentation of the small component space may lead to unphysical results. It is concluded that it is important to use matched large and small component basis sets with an adequate size and exponent range.

  12. Ab Initio and Analytic Intermolecular Potentials for Ar-CF₄

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vayner, Grigoriy; Alexeev, Yuri; Wang, Jiangping

    2006-03-09

    Ab initio calculations at the CCSD(T) level of theory are performed to characterize the Ar + CF₄ intermolecular potential. Extensive calculations, with and without a correction for basis set superposition error (BSSE), are performed with the cc-pVTZ basis set. Additional calculations are performed with other correlation consistent (cc) basis sets to extrapolate the Ar-CF₄ potential energy minimum to the complete basis set (CBS) limit. Both the size of the basis set and BSSE have substantial effects on the Ar + CF₄ potential. Calculations with the cc-pVTZ basis set and without a BSSE correction appear to give a good representation of the potential at the CBS limit and with a BSSE correction. In addition, MP2 theory is found to give potential energies in very good agreement with those determined by the much higher level CCSD(T) theory. Two analytic potential energy functions were determined for Ar + CF₄ by fitting the cc-pVTZ calculations both with and without a BSSE correction. These analytic functions were written as a sum of two-body potentials, and excellent fits to the ab initio potentials were obtained by representing each two-body interaction as a Buckingham potential.
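
    The analytic form described above, a sum of two-body Buckingham terms over the Ar-C and Ar-F pairs, is easy to state directly. Any parameter values supplied to the sketch below are placeholders, not the published fit.

```python
import math

def buckingham(r, A, B, C):
    """Two-body Buckingham potential: exponential repulsion A*exp(-B*r)
    plus an attractive -C/r**6 dispersion term."""
    return A * math.exp(-B * r) - C / r ** 6

def pair_sum_potential(ar_pos, sites, params):
    """Ar-CF4 interaction as a sum of two-body Ar-site Buckingham terms.

    sites: list of (label, xyz) for the C and F atoms;
    params: label -> (A, B, C). Illustrative parametrization only."""
    total = 0.0
    for label, pos in sites:
        r = math.dist(ar_pos, pos)
        total += buckingham(r, *params[label])
    return total
```

    With one Ar-C and four Ar-F parameter triples, the whole surface reduces to five distances and ten fitted constants, which is what makes this form convenient for fitting ab initio points.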

  13. On the performance of large Gaussian basis sets for the computation of total atomization energies

    NASA Technical Reports Server (NTRS)

    Martin, J. M. L.

    1992-01-01

    The total atomization energies of a number of molecules have been computed using an augmented coupled-cluster method and (5s4p3d2f1g) and (4s3p2d1f) atomic natural orbital (ANO) basis sets, as well as the correlation-consistent polarized valence triple-zeta (cc-pVTZ) and quadruple-zeta (cc-pVQZ) basis sets. The performance of the ANO and correlation consistent basis sets is comparable throughout, although the latter can result in significant CPU time savings. Whereas the inclusion of g functions has significant effects on the computed ΣDe values, chemical accuracy is still not reached for molecules involving multiple bonds. A Gaussian-1 (G1) type correction lowers the error, but not much beyond the accuracy of the G1 model itself. Using separate corrections for sigma bonds, pi bonds, and valence pairs brings the mean absolute error down to less than 1 kcal/mol for the spdf basis sets, and about 0.5 kcal/mol for the spdfg basis sets. Some conclusions on the success of the Gaussian-1 and Gaussian-2 models are drawn.
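
    The separate sigma-bond, pi-bond, and valence-pair corrections amount to a least-squares fit of per-bond-type parameters to the residual atomization-energy errors. The sketch below is one plausible reading of such a scheme; the bond counts and coefficients are invented for illustration.

```python
import numpy as np

def fit_bond_corrections(counts, errors):
    """Least-squares fit of per-sigma-bond, per-pi-bond, and
    per-valence-pair corrections to atomization-energy errors.

    counts: (n_molecules x 3) array of (n_sigma, n_pi, n_pairs);
    errors: computed-minus-reference atomization energies.
    Returns the three fitted correction coefficients."""
    coeffs, *_ = np.linalg.lstsq(counts, errors, rcond=None)
    return coeffs
```

    The fitted coefficients are then added back per bond type, which is how a systematic per-bond error can be removed without improving the underlying basis set.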

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shirkov, Leonid; Makarewicz, Jan, E-mail: jama@amu.edu.pl

    An ab initio intermolecular potential energy surface (PES) has been constructed for the benzene-krypton (BKr) van der Waals (vdW) complex. The interaction energy has been calculated at the coupled cluster level of theory with single, double, and perturbatively included triple excitations using different basis sets. As a result, a few analytical PESs of the complex have been determined. They allowed a prediction of the complex structure and its vibrational vdW states. The vibrational energy level pattern exhibits a distinct polyad structure. Comparison of the equilibrium structure, the dipole moment, and vibrational levels of BKr with their experimental counterparts has allowed us to design an optimal basis set composed of a small Dunning’s basis set for the benzene monomer, a larger effective core potential adapted basis set for Kr and additional midbond functions. Such a basis set yields vibrational energy levels that agree very well with the experimental ones as well as with those calculated from the available empirical PES derived from the microwave spectra of the BKr complex. The basis proposed can be applied to larger complexes including Kr because of a reasonable computational cost and accurate results.

  15. Systems Biology Perspectives on Minimal and Simpler Cells

    PubMed Central

    Xavier, Joana C.; Patil, Kiran Raosaheb

    2014-01-01

    SUMMARY The concept of the minimal cell has fascinated scientists for a long time, from both fundamental and applied points of view. This broad concept encompasses extreme reductions of genomes, the last universal common ancestor (LUCA), the creation of semiartificial cells, and the design of protocells and chassis cells. Here we review these different areas of research and identify common and complementary aspects of each one. We focus on systems biology, a discipline that is greatly facilitating the classical top-down and bottom-up approaches toward minimal cells. In addition, we also review the so-called middle-out approach and its contributions to the field with mathematical and computational models. Owing to the advances in genomics technologies, much of the work in this area has been centered on minimal genomes, or rather minimal gene sets, required to sustain life. Nevertheless, a fundamental expansion has been taking place in the last few years wherein the minimal gene set is viewed as a backbone of a more complex system. Complementing genomics, progress is being made in understanding the system-wide properties at the levels of the transcriptome, proteome, and metabolome. Network modeling approaches are enabling the integration of these different omics data sets toward an understanding of the complex molecular pathways connecting genotype to phenotype. We review key concepts central to the mapping and modeling of this complexity, which is at the heart of research on minimal cells. Finally, we discuss the distinction between minimizing the number of cellular components and minimizing cellular complexity, toward an improved understanding and utilization of minimal and simpler cells. PMID:25184563

  16. Impact of cosmetic result on selection of surgical treatment in patients with localized prostate cancer.

    PubMed

    Rojo, María Alejandra Egui; Martinez-Salamanca, Juan Ignacio; Maestro, Mario Alvarez; Galarza, Ignacio Sola; Rodriguez, Joaquin Carballido

    2014-01-01

    To analyze the effect of cosmetic outcome as an isolated variable in patients undergoing surgical treatment based on the incision used in the 3 variants of radical prostatectomy: open (infraumbilical incision and Pfannenstiel incision) and laparoscopic or robotic (6 ports) surgery. 612 male patients 40 to 70 years of age with a negative history of prostate disease were invited to participate. Each patient was evaluated by questionnaire accompanied by a set of 6 photographs showing the cosmetic appearance of the 3 approaches, with and without undergarments. Participants ranked the approaches according to preference, on the basis of cosmesis. We also recorded demographic variables: age, body mass index, marital status, education level, and physical activity. Of the 577 patients who completed the questionnaires, the 6-port minimally invasive approach was the option preferred by 52% of the participants, followed by the Pfannenstiel incision (46%) and the infraumbilical incision (11%). The univariate and multivariate analyses did not show statistically significant differences when comparing the approach preferred by the patients and the sub-analyses for demographic variables, except that patients who exercised preferred the Pfannenstiel incision (58%) over the minimally invasive approach (42%), a statistically significant difference. The minimally invasive approach was the approach of choice for the majority of patients in the treatment of prostate cancer. The Pfannenstiel incision represents an acceptable alternative. More research and investment may be necessary to improve cosmetic outcomes.

  17. Kinetic energy classification and smoothing for compact B-spline basis sets in quantum Monte Carlo

    DOE PAGES

    Krogel, Jaron T.; Reboredo, Fernando A.

    2018-01-25

    Quantum Monte Carlo calculations of defect properties of transition metal oxides have become feasible in recent years due to increases in computing power. As the system size has grown, availability of on-node memory has become a limiting factor. Saving memory while minimizing computational cost is now a priority. The main growth in memory demand stems from the B-spline representation of the single particle orbitals, especially for heavier elements such as transition metals where semi-core states are present. Despite the associated memory costs, splines are computationally efficient. In this paper, we explore alternatives to reduce the memory usage of splined orbitals without significantly affecting numerical fidelity or computational efficiency. We make use of the kinetic energy operator to both classify and smooth the occupied set of orbitals prior to splining. By using a partitioning scheme based on the per-orbital kinetic energy distributions, we show that memory savings of about 50% is possible for select transition metal oxide systems. Finally, for production supercells of practical interest, our scheme incurs a performance penalty of less than 5%.

  18. Kinetic energy classification and smoothing for compact B-spline basis sets in quantum Monte Carlo

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krogel, Jaron T.; Reboredo, Fernando A.

    Quantum Monte Carlo calculations of defect properties of transition metal oxides have become feasible in recent years due to increases in computing power. As the system size has grown, availability of on-node memory has become a limiting factor. Saving memory while minimizing computational cost is now a priority. The main growth in memory demand stems from the B-spline representation of the single particle orbitals, especially for heavier elements such as transition metals where semi-core states are present. Despite the associated memory costs, splines are computationally efficient. In this paper, we explore alternatives to reduce the memory usage of splined orbitals without significantly affecting numerical fidelity or computational efficiency. We make use of the kinetic energy operator to both classify and smooth the occupied set of orbitals prior to splining. By using a partitioning scheme based on the per-orbital kinetic energy distributions, we show that memory savings of about 50% is possible for select transition metal oxide systems. Finally, for production supercells of practical interest, our scheme incurs a performance penalty of less than 5%.

  19. Match-bounded String Rewriting Systems

    NASA Technical Reports Server (NTRS)

    Geser, Alfons; Hofbauer, Dieter; Waldmann, Johannes

    2003-01-01

    We introduce a new class of automated proof methods for the termination of rewriting systems on strings. The basis of all these methods is to show that rewriting preserves regular languages. To this end, letters are annotated with natural numbers, called match heights. If the minimal height of all positions in a redex is h, then every position in the reduct gets height h+1. In a match-bounded system, match heights are globally bounded. Using recent results on deleting systems, we prove that rewriting by a match-bounded system preserves regular languages. Hence it is decidable whether a given rewriting system has a given match bound. We also provide a sufficient criterion for the absence of a match-bound. The problem of existence of a match-bound is still open. Match-boundedness for all strings can be used as an automated criterion for termination, for match-bounded systems are terminating. This criterion can be strengthened by requiring match-boundedness only for a restricted set of strings, for instance the set of right hand sides of forward closures.
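    The height-annotation rule above can be illustrated with a small Python sketch. This is a toy, not the authors' implementation: the helper names are hypothetical, and it only follows a single bounded-length leftmost derivation, whereas a true match bound quantifies over all derivations from all strings.

    ```python
    def rewrite_annotated(word, rules):
        """Apply the leftmost applicable rule once to an annotated word.

        word: list of (letter, height) pairs; rules: list of (lhs, rhs) strings.
        Every reduct letter gets height min(redex heights) + 1."""
        for i in range(len(word)):
            for lhs, rhs in rules:
                n = len(lhs)
                if [c for c, _ in word[i:i + n]] == list(lhs):
                    h = min(h for _, h in word[i:i + n]) + 1
                    return word[:i] + [(c, h) for c in rhs] + word[i + n:]
        return None  # no redex: the word is in normal form

    def heights_bounded(start, rules, bound, max_steps=1000):
        """Partial check that match heights stay <= bound along one
        leftmost derivation of at most max_steps rewrites."""
        word = [(c, 0) for c in start]
        for _ in range(max_steps):
            word = rewrite_annotated(word, rules)
            if word is None:
                return True
            if any(h > bound for _, h in word):
                return False
        return True
    ```

    For example, along the leftmost derivation from "aaaa", the one-rule system aa → a never produces a height above 1, but it does exceed a bound of 0.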

  20. Building a portable data and information interoperability infrastructure-framework for a standard Taiwan Electronic Medical Record Template.

    PubMed

    Jian, Wen-Shan; Hsu, Chien-Yeh; Hao, Te-Hui; Wen, Hsyien-Chia; Hsu, Min-Huei; Lee, Yen-Liang; Li, Yu-Chuan; Chang, Polun

    2007-11-01

    Traditional electronic health record (EHR) data are produced from various hospital information systems. Until the advent of XML technology, such data could not exist independently of the information system that produced them. The interoperability of a healthcare system can be divided into two dimensions: functional interoperability and semantic interoperability. Currently, no single EHR standard exists that provides complete EHR interoperability. In order to establish a national EHR standard, we developed a set of local EHR templates. The Taiwan Electronic Medical Record Template (TMT) is a standard that aims to achieve semantic interoperability in EHR exchanges nationally. The TMT architecture is basically composed of forms, components, sections, and elements. Data are stored in the elements, which can be referenced by the code set, data type, and narrative block. The TMT was established with the following requirements in mind: (1) transformable to international standards; (2) having a minimal impact on the existing healthcare system; (3) easy to implement and deploy; and (4) compliant with Taiwan's current laws and regulations. The TMT provides a basis for building a portable, interoperable information infrastructure for EHR exchange in Taiwan.

  1. Kinetic energy classification and smoothing for compact B-spline basis sets in quantum Monte Carlo

    NASA Astrophysics Data System (ADS)

    Krogel, Jaron T.; Reboredo, Fernando A.

    2018-01-01

    Quantum Monte Carlo calculations of defect properties of transition metal oxides have become feasible in recent years due to increases in computing power. As the system size has grown, availability of on-node memory has become a limiting factor. Saving memory while minimizing computational cost is now a priority. The main growth in memory demand stems from the B-spline representation of the single particle orbitals, especially for heavier elements such as transition metals where semi-core states are present. Despite the associated memory costs, splines are computationally efficient. In this work, we explore alternatives to reduce the memory usage of splined orbitals without significantly affecting numerical fidelity or computational efficiency. We make use of the kinetic energy operator to both classify and smooth the occupied set of orbitals prior to splining. By using a partitioning scheme based on the per-orbital kinetic energy distributions, we show that memory savings of about 50% is possible for select transition metal oxide systems. For production supercells of practical interest, our scheme incurs a performance penalty of less than 5%.

  2. Detection of CMOS bridging faults using minimal stuck-at fault test sets

    NASA Technical Reports Server (NTRS)

    Ijaz, Nabeel; Frenzel, James F.

    1993-01-01

    The performance of minimal stuck-at fault test sets at detecting bridging faults is evaluated. New functional models of circuit primitives are presented which allow accurate representation of bridging faults under switch-level simulation. The effectiveness of the patterns is evaluated using both voltage and current testing.

  3. 42 CFR 415.170 - Conditions for payment on a fee schedule basis for physician services in a teaching setting.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... physician services in a teaching setting. 415.170 Section 415.170 Public Health CENTERS FOR MEDICARE... BY PHYSICIANS IN PROVIDERS, SUPERVISING PHYSICIANS IN TEACHING SETTINGS, AND RESIDENTS IN CERTAIN SETTINGS Physician Services in Teaching Settings § 415.170 Conditions for payment on a fee schedule basis...

  4. Recurrence formulas for fully exponentially correlated four-body wave functions

    NASA Astrophysics Data System (ADS)

    Harris, Frank E.

    2009-03-01

    Formulas are presented for the recursive generation of four-body integrals in which the integrand consists of arbitrary integer powers (≥-1) of all the interparticle distances rij , multiplied by an exponential containing an arbitrary linear combination of all the rij . These integrals are generalizations of those encountered using Hylleraas basis functions and include all that are needed to make energy computations on the Li atom and other four-body systems with a fully exponentially correlated Slater-type basis of arbitrary quantum numbers. The only quantities needed to start the recursion are the basic four-body integral first evaluated by Fromm and Hill plus some easily evaluated three-body “boundary” integrals. The computational labor in constructing integral sets for practical computations is less than when the integrals are generated using explicit formulas obtained by differentiating the basic integral with respect to its parameters. Computations are facilitated by using a symbolic algebra program (MAPLE) to compute array index pointers and present syntactically correct FORTRAN source code as output; in this way it is possible to obtain error-free high-speed evaluations with minimal effort. The work can be checked by verifying sum rules the integrals must satisfy.

  5. Modulation of isochronous movements in a flexible environment: links between motion and auditory experience.

    PubMed

    Bravi, Riccardo; Del Tongo, Claudia; Cohen, Erez James; Dalle Mura, Gabriele; Tognetti, Alessandro; Minciacchi, Diego

    2014-06-01

    The ability to perform isochronous movements while listening to a rhythmic auditory stimulus requires a flexible process that integrates timing information with movement. Here, we explored how non-temporal and temporal characteristics of an auditory stimulus (presence, interval occupancy, and tempo) affect motor performance. These characteristics were chosen on the basis of their ability to modulate the precision and accuracy of synchronized movements. Subjects participated in sessions in which they performed sets of repeated isochronous wrist flexion-extensions under various conditions. The conditions were chosen on the basis of the defined characteristics. Kinematic parameters were evaluated during each session, and temporal parameters were analyzed. In order to study the effects of the auditory stimulus, we minimized all other sensory information that could interfere with its perception or affect the performance of repeated isochronous movements. The present study shows that the distinct characteristics of an auditory stimulus significantly influence isochronous movements by altering their duration. Results provide evidence for an adaptable control of timing in the audio-motor coupling for isochronous movements. This flexibility would make plausible the use of different encoding strategies to adapt audio-motor coupling for specific tasks.

  6. Paving the COWpath: data-driven design of pediatric order sets

    PubMed Central

    Zhang, Yiye; Padman, Rema; Levin, James E

    2014-01-01

    Objective Evidence indicates that users incur significant physical and cognitive costs in the use of order sets, a core feature of computerized provider order entry systems. This paper develops data-driven approaches for automating the construction of order sets that match closely with user preferences and workflow while minimizing physical and cognitive workload. Materials and methods We developed and tested optimization-based models embedded with clustering techniques using physical and cognitive click cost criteria. By judiciously learning from users’ actual actions, our methods identify items for constituting order sets that are relevant according to historical ordering data and grouped on the basis of order similarity and ordering time. We evaluated performance of the methods using 47 099 orders from the year 2011 for asthma, appendectomy and pneumonia management in a pediatric inpatient setting. Results In comparison with existing order sets, those developed using the new approach significantly reduce the physical and cognitive workload associated with usage by 14–52%. This approach is also capable of accommodating variations in clinical conditions that affect order set usage and development. Discussion There is a critical need to investigate the cognitive complexity imposed on users by complex clinical information systems, and to design their features according to ‘human factors’ best practices. Optimizing order set generation using cognitive cost criteria introduces a new approach that can potentially improve ordering efficiency, reduce unintended variations in order placement, and enhance patient safety. Conclusions We demonstrate that data-driven methods offer a promising approach for designing order sets that are generalizable, data-driven, condition-based, and up to date with current best practices. PMID:24674844

  7. A potential energy surface for the process H2 + H2O yielding H + H + H2O - Ab initio calculations and analytical representation

    NASA Technical Reports Server (NTRS)

    Schwenke, David W.; Walch, Stephen P.; Taylor, Peter R.

    1991-01-01

    Extensive ab initio calculations on the ground state potential energy surface of H2 + H2O were performed using a large contracted Gaussian basis set and a high level of correlation treatment. An analytical representation of the potential energy surface was then obtained which reproduces the calculated energies with an overall root-mean-square error of only 0.64 mEh. The analytic representation explicitly includes all nine internal degrees of freedom and is also well behaved as the H2 dissociates; it thus can be used to study collision-induced dissociation or recombination of H2. The strategy used to minimize the number of energy calculations is discussed, as well as other advantages of the present method for determining the analytical representation.

  8. Basis for calculating cross sections for nuclear magnetic resonance spin-modulated polarized neutron scattering.

    PubMed

    Kotlarchyk, Michael; Thurston, George M

    2016-12-28

    In this work we study the potential for utilizing the scattering of polarized neutrons from nuclei whose spin has been modulated using nuclear magnetic resonance (NMR). From first principles, we present an in-depth development of the differential scattering cross sections that would arise in such measurements from a hypothetical target system containing nuclei with non-zero spins. In particular, we investigate the modulation of the polarized scattering cross sections following the application of radio frequency pulses that impart initial transverse rotations to selected sets of spin-1/2 nuclei. The long-term aim is to provide a foundational treatment of the scattering cross section associated with enhancing scattering signals from selected nuclei using NMR techniques, thus employing minimal chemical or isotopic alterations, so as to advance the knowledge of macromolecular or liquid structure.

  9. Projected Hybrid Orbitals: A General QM/MM Method

    PubMed Central

    2015-01-01

    A projected hybrid orbital (PHO) method was described to model the covalent boundary in a hybrid quantum mechanical and molecular mechanical (QM/MM) system. The PHO approach can be used in ab initio wave function theory and in density functional theory with any basis set without introducing system-dependent parameters. In this method, a secondary basis set on the boundary atom is introduced to formulate a set of hybrid atomic orbitals. The primary basis set on the boundary atom used for the QM subsystem is projected onto the secondary basis to yield a representation that provides a good approximation to the electron-withdrawing power of the primary basis set to balance electronic interactions between QM and MM subsystems. The PHO method has been tested on a range of molecules and properties. Comparison with results obtained from QM calculations on the entire system shows that the present PHO method is a robust and balanced QM/MM scheme that preserves the structural and electronic properties of the QM region. PMID:25317748

  10. A novel Gaussian-Sinc mixed basis set for electronic structure calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jerke, Jonathan L.; Lee, Young; Tymczak, C. J.

    2015-08-14

    A Gaussian-Sinc basis set methodology is presented for the calculation of the electronic structure of atoms and molecules at the Hartree–Fock level of theory. This methodology has several advantages over previous methods. The all-electron electronic structure in a Gaussian-Sinc mixed basis spans both the “localized” and “delocalized” regions. A basis set for each region is combined to make a new basis methodology—a lattice of orthonormal sinc functions is used to represent the “delocalized” regions and the atom-centered Gaussian functions are used to represent the “localized” regions to any desired accuracy. For this mixed basis, all the Coulomb integrals are definable and can be computed in a dimensional separated methodology. Additionally, the Sinc basis is translationally invariant, which allows for the Coulomb singularity to be placed anywhere including on lattice sites. Finally, boundary conditions are always satisfied with this basis. To demonstrate the utility of this method, we calculated the ground state Hartree–Fock energies for atoms up to neon, the diatomic systems H2, O2, and N2, and the multi-atom system benzene. Together, it is shown that the Gaussian-Sinc mixed basis set is a flexible and accurate method for solving the electronic structure of atomic and molecular species.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hill, J. Grant, E-mail: grant.hill@sheffield.ac.uk, E-mail: kipeters@wsu.edu; Peterson, Kirk A., E-mail: grant.hill@sheffield.ac.uk, E-mail: kipeters@wsu.edu

    New correlation consistent basis sets, cc-pVnZ-PP-F12 (n = D, T, Q), for all the post-d main group elements Ga–Rn have been optimized for use in explicitly correlated F12 calculations. The new sets, which include not only orbital basis sets but also the matching auxiliary sets required for density fitting both conventional and F12 integrals, are designed for correlation of valence sp, as well as the outer-core d electrons. The basis sets are constructed for use with the previously published small-core relativistic pseudopotentials of the Stuttgart-Cologne variety. Benchmark explicitly correlated coupled-cluster singles and doubles with perturbative triples [CCSD(T)-F12b] calculations of the spectroscopic properties of numerous diatomic molecules involving 4p, 5p, and 6p elements have been carried out and compared to the analogous conventional CCSD(T) results. In general the F12 results obtained with an n-zeta F12 basis set were comparable to conventional aug-cc-pVxZ-PP or aug-cc-pwCVxZ-PP basis set calculations obtained with x = n + 1 or even x = n + 2. The new sets used in CCSD(T)-F12b calculations are particularly efficient at accurately recovering the large correlation effects of the outer-core d electrons.

  12. Nonlinear transient analysis via energy minimization

    NASA Technical Reports Server (NTRS)

    Kamat, M. P.; Knight, N. F., Jr.

    1978-01-01

    The formulation basis for nonlinear transient analysis of finite element models of structures using energy minimization is provided. Geometric and material nonlinearities are included. The development is restricted to simple one and two dimensional finite elements which are regarded as being the basic elements for modeling full aircraft-like structures under crash conditions. The results indicate the effectiveness of the technique as a viable tool for this purpose.

  13. Monitoring intracranial pressure based on F-P

    NASA Astrophysics Data System (ADS)

    Cai, Ting; Tong, Xinglin; Chen, Guangxi

    2013-09-01

    Intracranial pressure is an important monitoring indicator in neurosurgery. In this paper we adopt an all-fiber Fabry-Perot (F-P) fiber-optic sensor and a minimally invasive operation to monitor, in real time, the dynamic intracranial pressure of hemorrhagic rats and to observe the pattern of its changes. Preliminary results verify the feasibility and effectiveness of the application, providing a basis for minimally invasive intracranial pressure measurement in the human brain.

  14. Comparison of fMRI analysis methods for heterogeneous BOLD responses in block design studies

    PubMed Central

    Bernal-Casas, David; Fang, Zhongnan; Lee, Jin Hyung

    2017-01-01

    A large number of fMRI studies have shown that the temporal dynamics of evoked BOLD responses can be highly heterogeneous. Failing to model heterogeneous responses in statistical analysis can lead to significant errors in signal detection and characterization and alter the neurobiological interpretation. However, to date it is not clear which of the many available methods are robust against variability in the temporal dynamics of BOLD responses in block-design studies. Here, we used rodent optogenetic fMRI data with heterogeneous BOLD responses and simulations guided by experimental data as a means to investigate different analysis methods’ performance against heterogeneous BOLD responses. Evaluations are carried out within the general linear model (GLM) framework and consist of standard basis sets as well as independent component analysis (ICA). Analyses show that, in the presence of heterogeneous BOLD responses, the conventionally used GLM with a canonical basis set leads to considerable errors in the detection and characterization of BOLD responses. Our results suggest that the 3rd and 4th order gamma basis sets, the 7th to 9th order finite impulse response (FIR) basis sets, the 5th to 9th order B-spline basis sets, and the 2nd to 5th order Fourier basis sets are optimal for a good balance between detection and characterization, while the 1st order Fourier basis set (coherence analysis) used in our earlier studies shows good detection capability. ICA has mostly good detection and characterization capabilities, but detects a large volume of spurious activation with the control fMRI data. PMID:27993672
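    As a small illustration of one of the basis sets compared above, a finite impulse response (FIR) basis for a block-design GLM can be built as an indicator matrix with one column per post-stimulus time bin. The scan counts, onsets, and function name here are hypothetical and not taken from the study.

    ```python
    import numpy as np

    def fir_basis(n_scans, onset_scans, order):
        """Minimal sketch of an FIR basis set: column k is an indicator
        for the k-th post-stimulus time bin, so the GLM estimates one
        response amplitude per bin with no assumed hemodynamic shape."""
        X = np.zeros((n_scans, order))
        for t in onset_scans:
            for k in range(order):
                if t + k < n_scans:  # drop bins falling past the run end
                    X[t + k, k] = 1.0
        return X
    ```

    For a run of 20 scans with stimulus onsets at scans 0 and 10, `fir_basis(20, [0, 10], 4)` returns a 20×4 design matrix in which each column carries one entry per onset.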

  15. Chirality measures of α-amino acids.

    PubMed

    Jamróz, Michał H; Rode, Joanna E; Ostrowski, Sławomir; Lipiński, Piotr F J; Dobrowolski, Jan Cz

    2012-06-25

    To measure molecular chirality, the molecule is treated as a finite set of points in the Euclidean space R^3 supplemented by k properties, p_1^(i), p_2^(i), ..., p_k^(i), assigned to the ith atom, which constitute a point in the property space P^k. Chirality measures are described as the distance between a molecule and its mirror image minimized over all its arbitrary orientation-preserving isometries in the R^3 × P^k Cartesian product space. Following this formalism, different chirality measures can be estimated by taking into consideration different sets of atomic properties. Here, for α-amino acid zwitterionic structures taken from the Cambridge Structural Database and for all 1684 neutral conformers of 19 biogenic α-amino acid molecules, except glycine and cystine, found at the B3LYP/6-31G** level, chirality measures have been calculated by a CHIMEA program written in this project. It is demonstrated that there is a significant correlation between the measures determined for the α-amino acid zwitterions in crystals and the neutral forms in the gas phase. The performance of the studied chirality measures with changes of the basis set and computation method was also checked. An exemplary quantitative structure–activity relationship (QSAR) application of the chirality measures was presented by an introductory model for the benchmark Cramer data set of steroidal ligands of the sex-hormone binding globulin.
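    For the purely geometric case (coordinates only, no extra atomic properties, and a fixed atom correspondence), the "distance to the mirror image minimized over orientation-preserving isometries" idea can be sketched with the Kabsch algorithm. This is an illustrative sketch under those assumptions, not the CHIMEA program.

    ```python
    import numpy as np

    def chirality_measure(coords):
        """Minimal sketch: RMSD between a point set and its mirror image,
        minimized over rotations (Kabsch) after centering. Zero for achiral
        configurations, positive for chiral ones."""
        X = coords - coords.mean(axis=0)       # remove translation
        Y = X.copy()
        Y[:, 0] *= -1                          # mirror through the yz-plane
        H = Y.T @ X                            # 3x3 covariance matrix
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(U @ Vt))     # enforce a proper rotation
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        return float(np.sqrt(np.mean(np.sum((Y @ R.T - X) ** 2, axis=1))))
    ```

    A planar point set is achiral and yields a measure of zero, while a scalene tetrahedron (four points with all pairwise distances distinct) yields a positive value.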

  16. Conventional and Explicitly Correlated ab Initio Benchmark Study on Water Clusters: Revision of the BEGDB and WATER27 Data Sets.

    PubMed

    Manna, Debashree; Kesharwani, Manoj K; Sylvetsky, Nitai; Martin, Jan M L

    2017-07-11

    Benchmark ab initio energies for BEGDB and WATER27 data sets have been re-examined at the MP2 and CCSD(T) levels with both conventional and explicitly correlated (F12) approaches. The basis set convergence of both conventional and explicitly correlated methods has been investigated in detail, both with and without counterpoise corrections. For the MP2 and CCSD-MP2 contributions, the explicitly correlated methods show much more rapid basis set convergence than their conventional counterparts. However, conventional, orbital-based calculations are preferred for the calculation of the (T) term, since it does not benefit from F12. CCSD(F12*) converges somewhat faster with the basis set than CCSD-F12b for the CCSD-MP2 term. The performance of various DFT methods is also evaluated for the BEGDB data set, and results show that Head-Gordon's ωB97X-V and ωB97M-V functionals outperform all other DFT functionals. Counterpoise-corrected DSD-PBEP86 and raw DSD-PBEPBE-NL also perform well and are close to MP2 results. In the WATER27 data set, the anionic (deprotonated) water clusters exhibit unacceptably slow basis set convergence with the regular cc-pVnZ-F12 basis sets, which have only diffuse s and p functions. To overcome this, we have constructed modified basis sets, denoted aug-cc-pVnZ-F12 or aVnZ-F12, which have been augmented with diffuse functions on the higher angular momenta. The calculated final dissociation energies of BEGDB and WATER27 data sets are available in the Supporting Information. Our best calculated dissociation energies can be reproduced through n-body expansion, provided one pushes to the basis set and electron correlation limit for the two-body term; for the three-body term, post-MP2 contributions (particularly CCSD-MP2) are important for capturing the three-body dispersion effects. Terms beyond four-body can be adequately captured at the MP2-F12 level.

  17. On the Use of a Mixed Gaussian/Finite-Element Basis Set for the Calculation of Rydberg States

    NASA Technical Reports Server (NTRS)

    Thuemmel, Helmar T.; Langhoff, Stephen (Technical Monitor)

    1996-01-01

    Configuration-interaction studies are reported for the Rydberg states of the helium atom using mixed Gaussian/finite-element (GTO/FE) one-particle basis sets. Standard Gaussian valence basis sets are employed, like those used extensively in quantum chemistry calculations. It is shown that the term values for high-lying Rydberg states of the helium atom can be obtained accurately (within 1 cm^-1), even for a small GTO set, by augmenting the n-particle space with configurations in which orthonormalized interpolation polynomials are singly occupied.

  18. Systems biology perspectives on minimal and simpler cells.

    PubMed

    Xavier, Joana C; Patil, Kiran Raosaheb; Rocha, Isabel

    2014-09-01

    The concept of the minimal cell has fascinated scientists for a long time, from both fundamental and applied points of view. This broad concept encompasses extreme reductions of genomes, the last universal common ancestor (LUCA), the creation of semiartificial cells, and the design of protocells and chassis cells. Here we review these different areas of research and identify common and complementary aspects of each one. We focus on systems biology, a discipline that is greatly facilitating the classical top-down and bottom-up approaches toward minimal cells. In addition, we also review the so-called middle-out approach and its contributions to the field with mathematical and computational models. Owing to the advances in genomics technologies, much of the work in this area has been centered on minimal genomes, or rather minimal gene sets, required to sustain life. Nevertheless, a fundamental expansion has been taking place in the last few years wherein the minimal gene set is viewed as a backbone of a more complex system. Complementing genomics, progress is being made in understanding the system-wide properties at the levels of the transcriptome, proteome, and metabolome. Network modeling approaches are enabling the integration of these different omics data sets toward an understanding of the complex molecular pathways connecting genotype to phenotype. We review key concepts central to the mapping and modeling of this complexity, which is at the heart of research on minimal cells. Finally, we discuss the distinction between minimizing the number of cellular components and minimizing cellular complexity, toward an improved understanding and utilization of minimal and simpler cells. Copyright © 2014, American Society for Microbiology. All Rights Reserved.

  19. Perturbation corrections to Koopmans' theorem. V - A study with large basis sets

    NASA Technical Reports Server (NTRS)

    Chong, D. P.; Langhoff, S. R.

    1982-01-01

    The vertical ionization potentials of N2, F2 and H2O were calculated by perturbation corrections to Koopmans' theorem using six different basis sets. The largest set used includes several sets of polarization functions. Comparison is made with measured values and with results of computations using Green's functions.

  20. A new basis set for molecular bending degrees of freedom.

    PubMed

    Jutier, Laurent

    2010-07-21

    We present a new basis set as an alternative to Legendre polynomials for the variational treatment of bending vibrational degrees of freedom, in order to greatly reduce the number of basis functions. This basis set is inspired by the harmonic oscillator eigenfunctions but is defined for a bending angle theta in the range [0, pi]. The aim is to bring the basis functions closer to the nature of the final (ro)vibronic wave functions. Our methodology extends to complicated potential energy surfaces, such as quasilinear or multi-equilibrium geometries, through several free parameters in the basis functions. These parameters allow several density maxima, linear or not, around which the basis functions are mainly located. Divergences at linearity in integral computations are resolved as for generalized Legendre polynomials. All integral computations required for the evaluation of molecular Hamiltonian matrix elements are given for both the discrete variable representation and the finite basis representation. Convergence tests for the low-energy vibronic states of HCCH(++), HCCH(+), and HCCS are presented.

  1. Raman spectral post-processing for oral tissue discrimination – a step for an automatized diagnostic system

    PubMed Central

    Carvalho, Luis Felipe C. S.; Nogueira, Marcelo Saito; Neto, Lázaro P. M.; Bhattacharjee, Tanmoy T.; Martin, Airton A.

    2017-01-01

    Most oral injuries are diagnosed by histopathological analysis of a biopsy, which is an invasive procedure and does not give immediate results. On the other hand, Raman spectroscopy is a real-time and minimally invasive analytical tool with potential for the diagnosis of diseases. This diagnostic potential can be improved by data post-processing. Hence, this study aims to evaluate the performance of preprocessing steps and multivariate analysis methods for the classification of normal tissue and pathological oral lesion spectra. A total of 80 spectra acquired from normal and abnormal tissues using optical fiber Raman-based spectroscopy (OFRS) were subjected to PCA preprocessing of the z-scored data set, followed by the KNN (K-nearest neighbors), J48 (unpruned C4.5 decision tree), RBF (radial basis function), RF (random forest), and MLP (multilayer perceptron) classifiers in the WEKA software (Waikato environment for knowledge analysis), after area normalization or maximum intensity normalization. Our results suggest the best classification was achieved by using maximum intensity normalization followed by MLP. Based on these results, software for automated analysis can be generated and validated using larger data sets. This would aid quick comprehension of spectroscopic data and easy diagnosis by medical practitioners in clinical settings. PMID:29188115

  2. Raman spectral post-processing for oral tissue discrimination - a step for an automatized diagnostic system.

    PubMed

    Carvalho, Luis Felipe C S; Nogueira, Marcelo Saito; Neto, Lázaro P M; Bhattacharjee, Tanmoy T; Martin, Airton A

    2017-11-01

    Most oral injuries are diagnosed by histopathological analysis of a biopsy, which is an invasive procedure and does not give immediate results. On the other hand, Raman spectroscopy is a real-time and minimally invasive analytical tool with potential for the diagnosis of diseases. This diagnostic potential can be improved by data post-processing. Hence, this study aims to evaluate the performance of preprocessing steps and multivariate analysis methods for the classification of normal tissue and pathological oral lesion spectra. A total of 80 spectra acquired from normal and abnormal tissues using optical fiber Raman-based spectroscopy (OFRS) were subjected to PCA preprocessing of the z-scored data set, followed by the KNN (K-nearest neighbors), J48 (unpruned C4.5 decision tree), RBF (radial basis function), RF (random forest), and MLP (multilayer perceptron) classifiers in the WEKA software (Waikato environment for knowledge analysis), after area normalization or maximum intensity normalization. Our results suggest the best classification was achieved by using maximum intensity normalization followed by MLP. Based on these results, software for automated analysis can be generated and validated using larger data sets. This would aid quick comprehension of spectroscopic data and easy diagnosis by medical practitioners in clinical settings.
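
    The two normalization schemes compared in these studies can be sketched in a few lines; a minimal NumPy illustration (array shapes and the Raman-shift grid are assumptions, not taken from the study):

```python
import numpy as np

def normalize_max(spectra):
    """Scale each spectrum (one per row) so its maximum intensity equals 1."""
    return spectra / spectra.max(axis=1, keepdims=True)

def normalize_area(spectra, shift):
    """Scale each spectrum to unit area over the Raman-shift axis (trapezoid rule)."""
    dx = np.diff(shift)
    areas = np.sum((spectra[:, :-1] + spectra[:, 1:]) * dx / 2.0, axis=1)
    return spectra / areas[:, np.newaxis]
```

    Either function would be applied before z-scoring and PCA; the study found maximum intensity normalization, followed by the MLP classifier, to work best.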

  3. Extrapolating MP2 and CCSD explicitly correlated correlation energies to the complete basis set limit with first and second row correlation consistent basis sets

    NASA Astrophysics Data System (ADS)

    Hill, J. Grant; Peterson, Kirk A.; Knizia, Gerald; Werner, Hans-Joachim

    2009-11-01

    Accurate extrapolation to the complete basis set (CBS) limit of valence correlation energies calculated with the explicitly correlated MP2-F12 and CCSD(T)-F12b methods has been investigated using a Schwenke-style approach for molecules containing both first and second row atoms. Extrapolation coefficients that are optimal for molecular systems containing first row elements differ from those optimized for second row analogs; hence values optimized for a combined set of first and second row systems are also presented. The new coefficients are shown to produce excellent results in both Schwenke-style and equivalent power-law-based two-point CBS extrapolations, with the MP2-F12/cc-pV(D,T)Z-F12 extrapolations producing an average error of just 0.17 mEh and a maximum error of 0.49 mEh for a collection of 23 small molecules. The use of larger basis sets, i.e., cc-pV(T,Q)Z-F12 and aug-cc-pV(Q,5)Z, in extrapolations of the MP2-F12 correlation energy leads to average errors that are smaller than the degree of confidence in the reference data (˜0.1 mEh). The latter were obtained through the use of very large basis sets in MP2-F12 calculations on small molecules containing both first and second row elements. CBS limits obtained from optimized coefficients for conventional MP2 are only comparable to the accuracy of the MP2-F12/cc-pV(D,T)Z-F12 extrapolation when the aug-cc-pV(5+d)Z and aug-cc-pV(6+d)Z basis sets are used. The CCSD(T)-F12b correlation energy is extrapolated as two distinct parts: CCSD-F12b and (T). While the CCSD-F12b extrapolations with smaller basis sets are statistically less accurate than those of the MP2-F12 correlation energies, this is presumably due to the slower basis set convergence of the CCSD-F12b method compared to MP2-F12. The use of larger basis sets in the CCSD-F12b extrapolations produces correlation energies with accuracies exceeding the confidence in the reference data (also obtained in large basis set F12 calculations).
It is demonstrated that the use of the 3C(D) Ansatz is preferred for MP2-F12 CBS extrapolations. Optimal values of the geminal Slater exponent are presented for the diagonal, fixed amplitude Ansatz in MP2-F12 calculations, and these are also recommended for CCSD-F12b calculations.
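
    The two-point extrapolations discussed above have a simple closed form. As a hedged sketch (the generic inverse-power formula and the Schwenke-style linear form; the paper's optimized coefficients are not reproduced here, and the energies below are hypothetical):

```python
def cbs_power_law(e_small, e_large, x_small, x_large, power=3.0):
    """Two-point power-law CBS extrapolation.
    Assumes E(X) = E_CBS + A / X**power and solves for E_CBS."""
    f_s, f_l = x_small ** power, x_large ** power
    return (f_l * e_large - f_s * e_small) / (f_l - f_s)

def cbs_schwenke(e_small, e_large, coeff):
    """Schwenke-style extrapolation: E_CBS = (E_large - E_small) * coeff + E_small,
    where coeff is an empirically optimized coefficient."""
    return (e_large - e_small) * coeff + e_small
```

    The two forms coincide when coeff = X_l**p / (X_l**p - X_s**p), which is why Schwenke-style and power-law two-point extrapolations can be made equivalent.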

  4. An Alternate Set of Basis Functions for the Electromagnetic Solution of Arbitrarily-Shaped, Three-Dimensional, Closed, Conducting Bodies Using Method of Moments

    NASA Technical Reports Server (NTRS)

    Mackenzie, Anne I.; Baginski, Michael E.; Rao, Sadasiva M.

    2008-01-01

    In this work, we present an alternate set of basis functions, each defined over a pair of planar triangular patches, for the method of moments solution of electromagnetic scattering and radiation problems associated with arbitrarily-shaped, closed, conducting surfaces. The present basis functions are point-wise orthogonal to the pulse basis functions previously defined. The prime motivation to develop the present set of basis functions is to utilize them for the electromagnetic solution of dielectric bodies using a surface integral equation formulation which involves both electric and magnetic currents. However, in the present work, only the conducting body solution is presented and compared with other data.

  5. A minimization principle for the description of modes associated with finite-time instabilities

    PubMed Central

    Babaee, H.

    2016-01-01

    We introduce a minimization formulation for the determination of a finite-dimensional, time-dependent, orthonormal basis that captures directions of the phase space associated with transient instabilities. While these instabilities have finite lifetime, they can play a crucial role either by altering the system dynamics through the activation of other instabilities or by creating sudden nonlinear energy transfers that lead to extreme responses. However, their essentially transient character makes their description a particularly challenging task. We develop a minimization framework that focuses on the optimal approximation of the system dynamics in the neighbourhood of the system state. This minimization formulation results in differential equations that evolve a time-dependent basis so that it optimally approximates the most unstable directions. We demonstrate the capability of the method for two families of problems: (i) linear systems, including the advection–diffusion operator in a strongly non-normal regime as well as the Orr–Sommerfeld/Squire operator, and (ii) nonlinear problems, including a low-dimensional system with transient instabilities and the vertical jet in cross-flow. We demonstrate that the time-dependent subspace captures the strongly transient non-normal energy growth (in the short-time regime), while for longer times the modes capture the expected asymptotic behaviour. PMID:27118900

  6. Correction of energy-dependent systematic errors in dual-energy X-ray CT using a basis material coefficients transformation method

    NASA Astrophysics Data System (ADS)

    Goh, K. L.; Liew, S. C.; Hasegawa, B. H.

    1997-12-01

    Computer simulation results from our previous studies showed that energy-dependent systematic errors exist in the values of attenuation coefficient synthesized using the basis material decomposition technique with acrylic and aluminum as the basis materials, especially when a high atomic number element (e.g., iodine from radiographic contrast media) was present in the body. The errors were reduced when a basis set was chosen from materials mimicking those found in the phantom. In the present study, we employed a basis material coefficients transformation method to correct for the energy-dependent systematic errors. In this method, the basis material coefficients were first reconstructed using the conventional basis materials (acrylic and aluminum) as the calibration basis set. The coefficients were then numerically transformed to those for a more desirable set of materials. The transformation was done at the energies of the low and high energy windows of the X-ray spectrum. With this correction method, using acrylic and an iodine-water mixture as our desired basis set, computer simulation results showed that an accuracy of better than 2% could be achieved even when iodine was present in the body at a concentration as high as 10% by mass. Simulation work was also carried out on the more inhomogeneous 2D thorax phantom of the 3D MCAT phantom. Results on the accuracy of quantitation are presented here.
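
    The coefficients transformation amounts to a small linear solve: synthesize the total attenuation of the calibration pair at the two window energies, then re-express it in the desired basis. A hypothetical sketch (the mass-attenuation values below are illustrative, not from the study):

```python
import numpy as np

def transform_basis_coefficients(a_old, mu_old, mu_new):
    """Re-express basis-material coefficients in a new basis pair.
    a_old  : (2,) coefficients for the calibration materials
    mu_old : (2, 2) attenuation of old materials at the (low, high) window energies
    mu_new : (2, 2) attenuation of new materials at the same energies
    Matches the total attenuation at both energies and solves for the new pair."""
    attenuation = a_old @ mu_old            # total attenuation at the two energies
    return np.linalg.solve(mu_new.T, attenuation)
```

    By construction, the new coefficients reproduce the same attenuation at both window energies, so the synthesis step is unchanged downstream.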

  7. Comparison of fMRI analysis methods for heterogeneous BOLD responses in block design studies.

    PubMed

    Liu, Jia; Duffy, Ben A; Bernal-Casas, David; Fang, Zhongnan; Lee, Jin Hyung

    2017-02-15

    A large number of fMRI studies have shown that the temporal dynamics of evoked BOLD responses can be highly heterogeneous. Failing to model heterogeneous responses in statistical analysis can lead to significant errors in signal detection and characterization and alter the neurobiological interpretation. However, to date it has not been clear which methods, out of a large number of options, are robust against variability in the temporal dynamics of BOLD responses in block-design studies. Here, we used rodent optogenetic fMRI data with heterogeneous BOLD responses, together with simulations guided by experimental data, to investigate the performance of different analysis methods against heterogeneous BOLD responses. Evaluations are carried out within the general linear model (GLM) framework and consist of standard basis sets as well as independent component analysis (ICA). Analyses show that, in the presence of heterogeneous BOLD responses, the conventionally used GLM with a canonical basis set leads to considerable errors in the detection and characterization of BOLD responses. Our results suggest that the 3rd and 4th order gamma basis sets, the 7th to 9th order finite impulse response (FIR) basis sets, the 5th to 9th order B-spline basis sets, and the 2nd to 5th order Fourier basis sets are optimal for a good balance between detection and characterization, while the 1st order Fourier basis set (coherence analysis) used in our earlier studies shows good detection capability. ICA has mostly good detection and characterization capabilities, but detects a large volume of spurious activation with the control fMRI data. Copyright © 2016 Elsevier Inc. All rights reserved.
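
    An FIR basis, one of the sets evaluated above, assigns one regressor to each post-stimulus time bin, so the response shape is estimated rather than assumed. A minimal sketch (the onsets, scan count, and basis order below are hypothetical):

```python
import numpy as np

def fir_design_matrix(onsets, n_scans, order):
    """Build an FIR design matrix: entry (t, k) is 1 when scan t falls
    k bins after any stimulus onset."""
    X = np.zeros((n_scans, order))
    for onset in onsets:
        for k in range(order):
            if onset + k < n_scans:
                X[onset + k, k] = 1.0
    return X

def fit_glm(X, y):
    """Ordinary least-squares GLM fit; beta[k] estimates the response in bin k."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta
```

    Unlike a canonical basis, the estimated beta vector directly traces the (possibly heterogeneous) response shape, at the cost of more regressors.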

  8. Point Set Denoising Using Bootstrap-Based Radial Basis Function.

    PubMed

    Liew, Khang Jie; Ramli, Ahmad; Abd Majid, Ahmad

    2016-01-01

    This paper examines the application of a bootstrap test error estimation of radial basis functions, specifically thin-plate spline fitting, in surface smoothing. The presence of noisy data is a common issue of the point set model that is generated from 3D scanning devices, and hence, point set denoising is one of the main concerns in point set modelling. Bootstrap test error estimation, which is applied when searching for the smoothing parameters of radial basis functions, is revisited. The main contribution of this paper is a smoothing algorithm that relies on a bootstrap-based radial basis function. The proposed method incorporates a k-nearest neighbour search and then projects the point set to the approximated thin-plate spline surface. Therefore, the denoising process is achieved, and the features are well preserved. A comparison of the proposed method with other smoothing methods is also carried out in this study.

  9. 40 CFR 60.57b - Siting requirements.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... on ambient air quality, visibility, soils, and vegetation. (2) The analysis shall consider air pollution control alternatives that minimize, on a site-specific basis, to the maximum extent practicable...

  10. Computational study of the electronic spectra of the rare gas fluorohydrides HRgF (Rg = Ar, Kr, Xe, Rn)

    NASA Astrophysics Data System (ADS)

    van Hoeve, Miriam D.; Klobukowski, Mariusz

    2018-03-01

    Simulation of the electronic spectra of HRgF (Rg = Ar, Kr, Xe, Rn) was carried out using the time-dependent density functional method, with the CAM-B3LYP functional and several basis sets augmented with even-tempered diffuse functions. A full spectral assignment for the HRgF systems was made. The effect of the rare gas matrix on the HRgF (Rg = Ar and Kr) spectra was investigated, and it was found that the matrix blue-shifted the spectra. Scalar relativistic effects on the spectra were also studied; while the excitation energies of HArF and HKrF were insignificantly affected by relativistic effects, most of the excitation energies of HXeF and HRnF were red-shifted. Spin-orbit coupling was found to significantly affect excitation energies in HRnF. Analysis of the performance of the model core potential basis set relative to all-electron (AE) basis sets showed that the former increased computational efficiency and gave results similar to those obtained with the AE basis sets.

  11. Midbond basis functions for weakly bound complexes

    NASA Astrophysics Data System (ADS)

    Shaw, Robert A.; Hill, J. Grant

    2018-06-01

    Weakly bound systems present a difficult problem for conventional atom-centred basis sets due to large separations, necessitating the use of large, computationally expensive bases. This can be remedied by placing a small number of functions in the region between molecules in the complex. We present compact sets of optimised midbond functions for a range of complexes involving noble gases, alkali metals and small molecules for use in high-accuracy coupled-cluster calculations, along with a more robust procedure for their optimisation. It is shown that excellent results are possible with double-zeta quality orbital basis sets when a few midbond functions are added, improving both the interaction energy and the equilibrium bond lengths of a series of noble gas dimers by 47% and 8%, respectively. When used in conjunction with explicitly correlated methods, near complete basis set limit accuracy is readily achievable at a fraction of the cost that using a large basis would entail. General purpose auxiliary sets are developed to allow explicitly correlated midbond function studies to be carried out, making it feasible to perform very high accuracy calculations on weakly bound complexes.

  12. Experimental and TD-DFT study of optical absorption of six explosive molecules: RDX, HMX, PETN, TNT, TATP, and HMTD.

    PubMed

    Cooper, Jason K; Grant, Christian D; Zhang, Jin Z

    2013-07-25

    Time-dependent density functional theory (TD-DFT) has been utilized to calculate the excitation energies and oscillator strengths of six common explosives: RDX (1,3,5-trinitroperhydro-1,3,5-triazine), β-HMX (octahydro-1,3,5,7-tetranitro-1,3,5,7-tetrazocine), TATP (triacetone triperoxide), HMTD (hexamethylene triperoxide diamine), TNT (2,4,6-trinitrotoluene), and PETN (pentaerythritol tetranitrate). The results were compared to experimental UV-vis absorption spectra collected in acetonitrile. Four computational methods were tested: B3LYP, CAM-B3LYP, ωB97XD, and PBE0. PBE0 outperforms the other methods tested. Basis set effects on the electronic energies and oscillator strengths were evaluated with 6-31G(d), 6-31+G(d), 6-31+G(d,p), and 6-311+G(d,p). The minimal basis set required was 6-31+G(d); however, additional calculations were performed with 6-311+G(d,p). For each molecule studied, the natural transition orbitals (NTOs) were reported for the most prominent singlet excitations. The TD-DFT results have been combined with the IPv calculated by CBS-QB3 to construct energy level diagrams for the six compounds. The results suggest optimization approaches for fluorescence-based detection methods for these explosives by guiding materials selections for optimal band alignment between fluorescent probe and explosive analyte. Also, the role of TNT Meisenheimer complex formation, and the resulting electronic structure thereof, in the quenching mechanism of II-VI semiconductors is discussed.

  13. A DATA-DRIVEN MODEL FOR SPECTRA: FINDING DOUBLE REDSHIFTS IN THE SLOAN DIGITAL SKY SURVEY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tsalmantza, P.; Hogg, David W., E-mail: vivitsal@mpia.de

    2012-07-10

    We present a data-driven method, heteroscedastic matrix factorization (a kind of probabilistic factor analysis), for modeling or performing dimensionality reduction on observed spectra or other high-dimensional data with known but non-uniform observational uncertainties. The method uses an iterative inverse-variance-weighted least-squares minimization procedure to generate a best set of basis functions. The method is similar to principal components analysis (PCA), but with the substantial advantage that it uses measurement uncertainties in a responsible way and accounts naturally for poorly measured and missing data; it models the variance in the noise-deconvolved data space. A regularization can be applied, in the form of a smoothness prior (inspired by Gaussian processes) or a non-negative constraint, without making the method prohibitively slow. Because the method optimizes a justified scalar (related to the likelihood), the basis provides a better fit to the data in a probabilistic sense than any PCA basis. We test the method on Sloan Digital Sky Survey (SDSS) spectra, concentrating on spectra known to contain two redshift components: these are spectra of gravitational lens candidates and massive black hole binaries. We apply a hypothesis test to compare one-redshift and two-redshift models for these spectra, utilizing the data-driven model trained on a random subset of all SDSS spectra. This test confirms 129 of the 131 lens candidates in our sample and all of the known binary candidates, and turns up very few false positives.
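
    The iterative inverse-variance-weighted least-squares at the heart of the method can be sketched as an alternating factorization (a toy version; the smoothness prior and non-negative variants described above are omitted, and all names are illustrative):

```python
import numpy as np

def hmf(X, ivar, rank, n_iter=100, ridge=1e-9):
    """Alternating inverse-variance-weighted least squares for X ~ A @ G.
    ivar holds per-entry inverse variances; missing data simply get ivar = 0."""
    n, d = X.shape
    rng = np.random.default_rng(0)
    A = rng.normal(size=(n, rank))
    G = rng.normal(size=(rank, d))
    eye = ridge * np.eye(rank)
    for _ in range(n_iter):
        for i in range(n):                    # update coefficient rows
            Gw = G * ivar[i]                  # weight basis by this row's ivar
            A[i] = np.linalg.solve(Gw @ G.T + eye, Gw @ X[i])
        for j in range(d):                    # update basis columns
            Aw = A * ivar[:, [j]]
            G[:, j] = np.linalg.solve(Aw.T @ A + eye, Aw.T @ X[:, j])
    return A, G
```

    Each update is an exact weighted least-squares solve, so the weighted chi-squared is non-increasing, mirroring the inverse-variance-weighted minimization described above.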

  14. Møller-Plesset perturbation energies and distances for HeC(20) extrapolated to the complete basis set limit.

    PubMed

    Varandas, A J C

    2009-02-01

    The potential energy surface for the C(20)-He interaction is extrapolated for three representative cuts to the complete basis set limit using second-order Møller-Plesset perturbation calculations with correlation consistent basis sets up to the doubly augmented variety. The results both with and without counterpoise correction are consistent with each other, supporting extrapolation without such a correction as a reliable scheme for avoiding the basis-set superposition error. Converged attributes are obtained for the C(20)-He interaction, which are used to predict those of the fullerene dimer. Time requirements show that the method can be drastically more economical than the counterpoise procedure and even competitive with Kohn-Sham density functional theory for the title system.

  15. Exact exchange-correlation potentials of singlet two-electron systems

    NASA Astrophysics Data System (ADS)

    Ryabinkin, Ilya G.; Ospadov, Egor; Staroverov, Viktor N.

    2017-10-01

    We suggest a non-iterative analytic method for constructing the exchange-correlation potential, v_XC(r), of any singlet ground-state two-electron system. The method is based on a convenient formula for v_XC(r) in terms of quantities determined only by the system's electronic wave function, exact or approximate, and is essentially different from the Kohn-Sham inversion technique. When applied to Gaussian-basis-set wave functions, the method yields finite-basis-set approximations to the corresponding basis-set-limit v_XC(r), whereas the Kohn-Sham inversion produces physically inappropriate (oscillatory and divergent) potentials. The effectiveness of the procedure is demonstrated by computing accurate exchange-correlation potentials of several two-electron systems (the helium isoelectronic series, H2, H3+) using common ab initio methods and Gaussian basis sets.

  16. Comprehensive simulation-enhanced training curriculum for an advanced minimally invasive procedure: a randomized controlled trial.

    PubMed

    Zevin, Boris; Dedy, Nicolas J; Bonrath, Esther M; Grantcharov, Teodor P

    2017-05-01

    There is no comprehensive simulation-enhanced training curriculum to address cognitive, psychomotor, and nontechnical skills for an advanced minimally invasive procedure. The aims were: (1) to develop and provide evidence of validity for a comprehensive simulation-enhanced training (SET) curriculum for an advanced minimally invasive procedure; (2) to demonstrate transfer of acquired psychomotor skills from a simulation laboratory to a live porcine model; and (3) to compare training outcomes of the SET curriculum group and a chief resident group. University. This prospective, single-blinded, randomized, controlled trial allocated 20 intermediate-level surgery residents to receive either conventional training (control) or SET curriculum training (intervention). The SET curriculum consisted of cognitive, psychomotor, and nontechnical training modules. Psychomotor skills in a live anesthetized porcine model in the OR were the primary outcome. Knowledge of advanced minimally invasive and bariatric surgery and nontechnical skills in a simulated OR crisis scenario were the secondary outcomes. Residents in the SET curriculum group went on to perform a laparoscopic jejunojejunostomy in the OR. Cognitive, psychomotor, and nontechnical skills of the SET curriculum group were also compared to a group of 12 chief surgery residents. The SET curriculum group demonstrated superior psychomotor skills in a live porcine model (56 [47-62] versus 44 [38-53], P<.05) and superior nontechnical skills (41 [38-45] versus 31 [24-40], P<.01) compared with the conventional training group. The SET curriculum group and conventional training group demonstrated equivalent knowledge (14 [12-15] versus 13 [11-15], P = .47). The SET curriculum group demonstrated equivalent psychomotor skills in the live porcine model and in the OR in a human patient (56 [47-62] versus 63 [61-68]; P = .21). 
    The SET curriculum group demonstrated inferior knowledge (13 [11-15] versus 16 [14-16]; P<.05), equivalent psychomotor skill (63 [61-68] versus 68 [62-74]; P = .50), and superior nontechnical skills (41 [38-45] versus 34 [27-35], P<.01) compared with the chief resident group. Completion of the SET curriculum resulted in superior training outcomes compared with conventional surgery training. Implementation of the SET curriculum can standardize training for an advanced minimally invasive procedure and can ensure that comprehensive proficiency milestones are met before exposure to patient care. Copyright © 2017 American Society for Bariatric Surgery. Published by Elsevier Inc. All rights reserved.

  17. Multiobjective sampling design for parameter estimation and model discrimination in groundwater solute transport

    USGS Publications Warehouse

    Knopman, Debra S.; Voss, Clifford I.

    1989-01-01

    Sampling design for site characterization studies of solute transport in porous media is formulated as a multiobjective problem. Optimal design of a sampling network is a sequential process in which the next phase of sampling is designed on the basis of all available physical knowledge of the system. Three objectives are considered: model discrimination, parameter estimation, and cost minimization. For the first two objectives, physically based measures of the value of information obtained from a set of observations are specified. In model discrimination, value of information of an observation point is measured in terms of the difference in solute concentration predicted by hypothesized models of transport. Points of greatest difference in predictions can contribute the most information to the discriminatory power of a sampling design. Sensitivity of solute concentration to a change in a parameter contributes information on the relative variance of a parameter estimate. Inclusion of points in a sampling design with high sensitivities to parameters tends to reduce variance in parameter estimates. Cost minimization accounts for both the capital cost of well installation and the operating costs of collection and analysis of field samples. Sensitivities, discrimination information, and well installation and sampling costs are used to form coefficients in the multiobjective problem in which the decision variables are binary (zero/one), each corresponding to the selection of an observation point in time and space. The solution to the multiobjective problem is a noninferior set of designs. To gain insight into effective design strategies, a one-dimensional solute transport problem is hypothesized. Then, an approximation of the noninferior set is found by enumerating 120 designs and evaluating objective functions for each of the designs. Trade-offs between pairs of objectives are demonstrated among the models. 
The value of an objective function for a given design is shown to correspond to the ability of a design to actually meet an objective.
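
    The noninferior set obtained by enumerating candidate designs is a standard dominance filter over their objective values. A generic sketch (the objective values below are hypothetical, with all objectives to be minimized):

```python
import numpy as np

def noninferior_set(objectives):
    """Return indices of designs not dominated by any other design.
    A design is dominated if another is no worse in every objective
    and strictly better in at least one."""
    obj = np.asarray(objectives, dtype=float)
    keep = []
    for i, fi in enumerate(obj):
        dominated = any(
            np.all(fj <= fi) and np.any(fj < fi)
            for j, fj in enumerate(obj)
            if j != i
        )
        if not dominated:
            keep.append(i)
    return keep
```

    Applied to the enumerated designs, the surviving set traces out the trade-off curves between pairs of objectives described above.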

  18. SplitRacer - a new Semi-Automatic Tool to Quantify And Interpret Teleseismic Shear-Wave Splitting

    NASA Astrophysics Data System (ADS)

    Reiss, M. C.; Rumpker, G.

    2017-12-01

    We have developed a semi-automatic, MATLAB-based GUI that combines standard seismological tasks for the analysis and interpretation of teleseismic shear-wave splitting. Shear-wave splitting analysis is widely used to infer seismic anisotropy, which can be interpreted in terms of lattice-preferred orientation of mantle minerals, or shape-preferred orientation caused by fluid-filled cracks or alternating layers. Seismic anisotropy provides a unique link between directly observable surface structures and the more elusive dynamic processes in the mantle below. Thus, resolving the seismic anisotropy of the lithosphere/asthenosphere is of particular importance for geodynamic modeling and interpretation. The increasing number of seismic stations from temporary experiments and permanent installations creates a new basis for comprehensive studies of seismic anisotropy world-wide. However, the increasingly large data sets pose new challenges for the rapid and reliable analysis of teleseismic waveforms and for the interpretation of the measurements. Well-established routines and programs are available but are often impractical for analyzing large data sets from hundreds of stations. Additionally, shear-wave splitting results are seldom evaluated using the same well-defined quality criteria, which may complicate comparison with results from different studies. 
    SplitRacer has been designed to overcome these challenges by incorporating the following processing steps: (i) downloading of waveform data from multiple stations in mseed format using FDSNWS tools; (ii) automated initial screening and categorizing of XKS waveforms using a pre-set SNR threshold; (iii) particle-motion analysis of selected phases at longer periods to detect and correct for sensor misalignment; (iv) splitting analysis of selected phases based on transverse-energy minimization for multiple, randomly selected, relevant time windows; (v) one- and two-layer joint-splitting analysis for all phases at one station by simultaneously minimizing their transverse energy, including the analysis of null measurements; and (vi) comparison of results with theoretical splitting parameters determined for one, two, or continuously varying anisotropic layers. Examples of the application of SplitRacer will be presented.
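
    The transverse-energy minimization in step (iv) can be illustrated with a small grid search over fast-axis angle and delay (a self-contained toy, not SplitRacer's implementation; the synthetic wavelet and search grids are assumptions):

```python
import numpy as np

def rotate(n, e, angle):
    """Rotate N/E traces into a frame whose first axis lies `angle` rad from north."""
    c, s = np.cos(angle), np.sin(angle)
    return c * n + s * e, -s * n + c * e

def apply_splitting(n, e, phi, delay):
    """Split a two-component record: delay the slow axis (phi + 90 deg) by `delay` samples."""
    fast, slow = rotate(n, e, phi)
    slow = np.roll(slow, delay)
    c, s = np.cos(phi), np.sin(phi)
    return c * fast - s * slow, s * fast + c * slow   # rotate back to N/E

def grid_search_splitting(n, e, pol, phis, delays):
    """Find the (phi, delay) whose removal minimizes energy transverse to `pol`."""
    best = (None, None, np.inf)
    for phi in phis:
        for k in delays:
            nc, ec = apply_splitting(n, e, phi, -k)   # undo the candidate delay
            _, transverse = rotate(nc, ec, pol)
            energy = float(np.sum(transverse ** 2))
            if energy < best[2]:
                best = (phi, k, energy)
    return best
```

    At the true splitting parameters, the correction restores the original linear polarization, so the transverse energy collapses to zero; null measurements appear as flat energy surfaces along the polarization directions.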

  19. Human Subjects Protection and Technology in Prevention Science: Selected Opportunities and Challenges.

    PubMed

    Pisani, Anthony R; Wyman, Peter A; Mohr, David C; Perrino, Tatiana; Gallo, Carlos; Villamar, Juan; Kendziora, Kimberly; Howe, George W; Sloboda, Zili; Brown, C Hendricks

    2016-08-01

    Internet-connected devices are changing the way people live, work, and relate to one another. For prevention scientists, technological advances create opportunities to promote the welfare of human subjects and society. The challenge is to obtain the benefits while minimizing risks. In this article, we use the guiding principles for ethical human subjects research, and proposed changes to the Common Rule regulations, as a basis for discussing selected opportunities and challenges that new technologies present for prevention science. The benefits of conducting research with new populations, and at new levels of integration into participants' daily lives, are presented along with five challenges, together with technological and other solutions to strengthen the protections that we provide: (1) achieving adequate informed consent with procedures that are acceptable to participants in a digital age; (2) balancing opportunities for rapid development and broad reach with gaining adequate understanding of population needs; (3) integrating data collection and intervention into participants' lives while minimizing intrusiveness and fatigue; (4) setting appropriate expectations for responding to safety and suicide concerns; and (5) safeguarding newly available streams of sensitive data. Our goal is to promote collaboration between prevention scientists, institutional review boards, and community members to safely and ethically harness advancing technologies to strengthen the impact of prevention science.

  20. Preparation and handling of powdered infant formula: a commentary by the ESPGHAN Committee on Nutrition.

    PubMed

    Agostoni, Carlo; Axelsson, Irene; Goulet, Olivier; Koletzko, Berthold; Michaelsen, Kim F; Puntis, John W L; Rigo, Jacques; Shamir, Raanan; Szajewska, Hania; Turck, Dominique; Vandenplas, Yvan; Weaver, Lawrence T

    2004-10-01

Powdered infant formulae are not sterile and may contain pathogenic bacteria. In addition, milk products are excellent media for bacterial proliferation. Multiplication of Enterobacter sakazakii in prepared formula feeds can cause devastating sepsis, particularly in the first 2 months of life. In approximately 50 published case reports of severe infection, there are high rates of meningitis, brain abscesses and necrotizing enterocolitis, with an overall mortality of 33% to 80%. Breast feeding provides effective protection against infection, one of the many reasons why it deserves continued promotion and support. To minimize the risk of infection in infants not fully breastfed, recommendations are made for preparation and handling of powdered formulae for children younger than 2 months of age. In the home setting, powdered infant formulae should be freshly prepared for each feed. Any milk remaining should be discarded rather than used in the following feed. Infant feeds should never be kept warm in bottle heaters or thermoses. In hospitals and other institutions, written guidelines for preparation and handling of infant formulae should be established and their implementation monitored. If formula needs to be prepared in advance, it should be prepared on a daily basis and kept at 4 degrees C or below. Manufacturers of infant formulae should make every effort to minimize bacterial contamination of powdered products.

  1. Minimally invasive treatment of cholecysto-choledocal lithiasis: The point of view of the surgical endoscopist.

    PubMed

    De Palma, Giovanni D

    2013-06-27

The rate of choledocholithiasis in patients with symptomatic cholelithiasis is estimated to be approximately 10%-33%, depending on the patient's age. The development of endoscopic retrograde cholangiopancreatography and laparoscopic surgery, together with improved diagnostic procedures, has led to new approaches to the management of common bile duct stones in association with gallstones. The minimally invasive treatments of cholecysto-choledocal lithiasis currently available include single-stage laparoscopic treatment, perioperative endoscopic treatment and endoscopic treatment alone. Published data show that the combined endoscopic-laparoscopic approach requires a greater number of procedures per patient, while single-stage laparoscopic treatment is associated with a shorter hospital stay. However, current data do not suggest clear superiority of any one approach with regard to success, mortality, morbidity and cost-effectiveness. Considering the variety of therapeutic options available, critical appraisal and careful decision-making are required. Endoscopic retrograde cholangiopancreatography/EST should be adopted on a selective basis, i.e., in patients with acute obstructive suppurative cholangitis, severe biliary pancreatitis, ampullary stone impaction or severe comorbidity. In a setting where all facilities are available, the choice of therapeutic option depends on the patient, the number and size of the choledocholithiasis stones, the anatomy of the cystic duct and common bile duct, the patient's surgical history and local expertise.

  2. Prioritization of influenza pandemic vaccination to minimize years of life lost.

    PubMed

    Miller, Mark A; Viboud, Cecile; Olson, Donald R; Grais, Rebecca F; Rabaa, Maia A; Simonsen, Lone

    2008-08-01

How to allocate limited vaccine supplies in the event of an influenza pandemic is currently under debate. Conventional vaccination strategies focus on those at highest risk for severe outcomes, including seniors, but do not consider (1) the signature pandemic pattern in which mortality risk is shifted to younger ages, (2) the likely reduced vaccine response in seniors, and (3) differences in remaining years of life with age. We integrated these factors to project the age-specific years of life lost (YLL) and saved in a future pandemic, on the basis of mortality patterns from 3 historical pandemics, age-specific vaccine efficacy, and the 2000 US population structure. For a 1918-like scenario, the absolute mortality risk is highest in people <45 years old; in contrast, seniors (those ≥65 years old) have the highest mortality risk in the 1957 and 1968 scenarios. The greatest YLL savings would be achieved by targeting different age groups in each scenario: people <45 years old in the 1918 scenario, people 45-64 years old in the 1968 scenario, and people >45 years old in the 1957 scenario. Our findings shift the focus of pandemic vaccination strategies onto younger populations and illustrate the need for real-time surveillance of mortality patterns in a future pandemic. Flexible setting of vaccination priorities is essential to minimize mortality.
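The core accounting in this kind of analysis is simple: YLL saved per dose scales with age-specific mortality risk, vaccine efficacy, and remaining life expectancy. A minimal sketch follows; every number below is a hypothetical placeholder, not a value from the study.

```python
# Illustrative prioritization by years of life lost (YLL); all numbers are
# hypothetical placeholders, not values from the study.
remaining_life_years = {"<45": 45.0, "45-64": 25.0, "65+": 12.0}
mortality_risk = {"<45": 0.8, "45-64": 0.5, "65+": 0.3}    # deaths per 1000 unvaccinated
vaccine_efficacy = {"<45": 0.8, "45-64": 0.7, "65+": 0.5}  # reduced response in seniors

# YLL saved per 1000 doses = deaths averted * remaining years of life
yll_saved = {g: mortality_risk[g] * vaccine_efficacy[g] * remaining_life_years[g]
             for g in remaining_life_years}
priority_group = max(yll_saved, key=yll_saved.get)
```

With a 1918-like, younger-shifted mortality pattern as assumed above, the YLL criterion points to the youngest group even though seniors are conventionally prioritized, which is the abstract's central point.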

  3. Correlation consistent basis sets for actinides. I. The Th and U atoms.

    PubMed

    Peterson, Kirk A

    2015-02-21

New correlation consistent basis sets based on both pseudopotential (PP) and all-electron Douglas-Kroll-Hess (DKH) Hamiltonians have been developed from double- to quadruple-zeta quality for the actinide atoms thorium and uranium. Sets for valence electron correlation (5f6s6p6d), cc-pVnZ-PP and cc-pVnZ-DK3, as well as outer-core correlation (valence + 5s5p5d), cc-pwCVnZ-PP and cc-pwCVnZ-DK3, are reported (n = D, T, Q). The -PP sets are constructed in conjunction with small-core, 60-electron PPs, while the -DK3 sets utilize the third-order Douglas-Kroll-Hess scalar relativistic Hamiltonian. Both series of basis sets show systematic convergence towards the complete basis set limit, both at the Hartree-Fock and correlated levels of theory, making them amenable to standard basis set extrapolation techniques. To assess the utility of the new basis sets, extensive coupled cluster composite thermochemistry calculations of ThFn (n = 2-4), ThO2, and UFn (n = 4-6) have been carried out. After accurately accounting for valence and outer-core correlation, spin-orbit coupling, and even Lamb shift effects, the final 298 K atomization enthalpies of ThF4, ThF3, ThF2, and ThO2 are all within their experimental uncertainties. Bond dissociation energies of ThF4 and ThF3, as well as UF6 and UF5, were similarly accurate. The derived enthalpies of formation for these species also showed very satisfactory agreement with experiment, demonstrating that the new basis sets allow for the use of accurate composite schemes just as in molecular systems composed only of lighter atoms. The differences between the PP and DK3 approaches were found to increase with the change in formal oxidation state on the actinide atom, approaching 5-6 kcal/mol for the atomization enthalpies of ThF4 and ThO2. The DKH3 atomization energy of ThO2 was calculated to be smaller than the DKH2 value by ∼1 kcal/mol.
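The "standard basis set extrapolation techniques" the abstract refers to can be illustrated with the widely used two-point 1/n^3 formula for the correlation energy; this is one common scheme, and the paper's exact composite protocol may differ.

```python
def cbs_extrapolate(e_lo, e_hi, n_lo, n_hi):
    """Two-point complete-basis-set (CBS) extrapolation assuming the model
    E(n) = E_CBS + A / n**3, solved from energies at cardinal numbers
    n_lo and n_hi (e.g. 3 = triple-zeta, 4 = quadruple-zeta)."""
    return (e_hi * n_hi**3 - e_lo * n_lo**3) / (n_hi**3 - n_lo**3)

# Synthetic check: energies generated exactly from the model (hypothetical
# numbers, hartree) are recovered at the CBS limit.
e_cbs_true, A = -1.2345, 0.25
e_t = e_cbs_true + A / 3**3   # "cc-pVTZ-like" energy
e_q = e_cbs_true + A / 4**3   # "cc-pVQZ-like" energy
e_cbs = cbs_extrapolate(e_t, e_q, 3, 4)
```

The systematic convergence claimed for the new cc-pVnZ-PP/-DK3 series is precisely what makes such a two-parameter fit meaningful.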

  4. On the basis set convergence of electron–electron entanglement measures: helium-like systems

    PubMed Central

    Hofer, Thomas S.

    2013-01-01

A systematic investigation of three different electron–electron entanglement measures, namely the von Neumann, the linear and the occupation number entropy at full configuration interaction level has been performed for the four helium-like systems hydride, helium, Li+ and Be2+ using a large number of different basis sets. The convergence behavior of the resulting energies and entropies revealed that the latter do in general not show the expected strictly monotonic increase upon enlargement of the one-electron basis. Overall, the three different entanglement measures show good agreement among each other, the largest deviations being observed for small basis sets. The data clearly demonstrate that it is important to consider the nature of the chemical system when investigating entanglement phenomena in the framework of Gaussian-type basis sets: while in the case of hydride the use of augmentation functions is crucial, the application of core functions greatly improves the accuracy in the case of cationic systems such as Li+ and Be2+. In addition, numerical derivatives of the entanglement measures with respect to the nuclear charge have been determined, which proved to be a very sensitive probe of the convergence, leading to qualitatively wrong results (i.e., the wrong sign) if too small basis sets are used. PMID:24790952

  5. On the basis set convergence of electron-electron entanglement measures: helium-like systems.

    PubMed

    Hofer, Thomas S

    2013-01-01

A systematic investigation of three different electron-electron entanglement measures, namely the von Neumann, the linear and the occupation number entropy at full configuration interaction level has been performed for the four helium-like systems hydride, helium, Li(+) and Be(2+) using a large number of different basis sets. The convergence behavior of the resulting energies and entropies revealed that the latter do in general not show the expected strictly monotonic increase upon enlargement of the one-electron basis. Overall, the three different entanglement measures show good agreement among each other, the largest deviations being observed for small basis sets. The data clearly demonstrate that it is important to consider the nature of the chemical system when investigating entanglement phenomena in the framework of Gaussian-type basis sets: while in the case of hydride the use of augmentation functions is crucial, the application of core functions greatly improves the accuracy in the case of cationic systems such as Li(+) and Be(2+). In addition, numerical derivatives of the entanglement measures with respect to the nuclear charge have been determined, which proved to be a very sensitive probe of the convergence, leading to qualitatively wrong results (i.e., the wrong sign) if too small basis sets are used.
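Given the spectrum of a (normalized) one-electron reduced density matrix, the von Neumann and linear entropies used in these two records reduce to elementary sums. The sketch below shows those textbook definitions on toy two-level spectra; the occupation numbers are illustrative, not taken from the paper's full-CI calculations.

```python
import math

def von_neumann_entropy(p):
    """S_vN = -sum_i p_i ln p_i for eigenvalues p_i of the normalized
    one-electron reduced density matrix (0 ln 0 taken as 0)."""
    return -sum(x * math.log(x) for x in p if x > 0.0)

def linear_entropy(p):
    """S_lin = 1 - Tr(rho^2) = 1 - sum_i p_i**2."""
    return 1.0 - sum(x * x for x in p)

pure = [1.0, 0.0]     # single occupied natural orbital: no entanglement
mixed = [0.5, 0.5]    # maximally mixed two-level spectrum
```

Both measures vanish for the pure spectrum and grow with mixing, which is why they track each other qualitatively, as the abstract reports.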

  6. Orbital-Dependent Density Functionals for Chemical Catalysis

    DTIC Science & Technology

    2014-10-17

...noncollinear density functional theory to show that the low-spin state of Mn3 in a model of the oxygen-evolving complex of photosystem II avoids... DK, which denotes the cc-pV5Z-DK basis set for 3d metals and hydrogen and the ma-cc-pV5Z-DK basis set for oxygen) and to nonrelativistic all-... cc-pV5Z basis set for oxygen). As compared to NCBS-DK results, all ECP calculations perform worse than def2-TZVP all-electron relativistic...

  7. Electric dipole moment of diatomic molecules by configuration interaction. IV.

    NASA Technical Reports Server (NTRS)

    Green, S.

    1972-01-01

    The theory of basis set dependence in configuration interaction calculations is discussed, taking into account a perturbation model which is valid for small changes in the self-consistent field orbitals. It is found that basis set corrections are essentially additive through first order. It is shown that an error found in a previously published dipole moment calculation by Green (1972) for the metastable first excited state of CO was indeed due to an inadequate basis set as claimed.

  8. New Basis Functions for the Electromagnetic Solution of Arbitrarily-shaped, Three Dimensional Conducting Bodies Using Method of Moments

    NASA Technical Reports Server (NTRS)

    Mackenzie, Anne I.; Baginski, Michael E.; Rao, Sadasiva M.

    2007-01-01

In this work, we present a new set of basis functions, defined over a pair of planar triangular patches, for the solution of electromagnetic scattering and radiation problems associated with arbitrarily-shaped surfaces using the method of moments solution procedure. The basis functions are constant over the function subdomain and resemble pulse functions for one and two dimensional problems. Further, another set of basis functions, point-wise orthogonal to the first set, is also defined over the same function space. The primary objective of developing these basis functions is to utilize them for the electromagnetic solution involving conducting, dielectric, and composite bodies. However, in the present work, only the conducting body solution is presented and compared with other data.

  9. New Basis Functions for the Electromagnetic Solution of Arbitrarily-shaped, Three Dimensional Conducting Bodies using Method of Moments

    NASA Technical Reports Server (NTRS)

    Mackenzie, Anne I.; Baginski, Michael E.; Rao, Sadasiva M.

    2008-01-01

    In this work, we present a new set of basis functions, defined over a pair of planar triangular patches, for the solution of electromagnetic scattering and radiation problems associated with arbitrarily-shaped surfaces using the method of moments solution procedure. The basis functions are constant over the function subdomain and resemble pulse functions for one and two dimensional problems. Further, another set of basis functions, point-wise orthogonal to the first set, is also defined over the same function space. The primary objective of developing these basis functions is to utilize them for the electromagnetic solution involving conducting, dielectric, and composite bodies. However, in the present work, only the conducting body solution is presented and compared with other data.

  10. Limestone and Silica Powder Replacements for Cement: Early-Age Performance.

    PubMed

    Bentz, Dale P; Ferraris, Chiara F; Jones, Scott Z; Lootens, Didier; Zunino, Franco

    2017-04-01

Developing functional concrete mixtures with less ordinary portland cement (OPC) has been one of the key objectives of the 21st-century sustainability movement. While the supplies of many alternatives to OPC (such as fly ash or slag) may be limited, those of limestone and silica powders produced by crushing rocks seem virtually endless. The present study examines the chemical and physical influences of these powders on the rheology, hydration, and setting of cement-based materials via experiments and three-dimensional microstructural modeling. It is shown that both limestone and silica particle surfaces are active templates (sites) for the nucleation and growth of cement hydration products, while the limestone itself is also somewhat soluble, leading to the formation of carboaluminate hydration products. Because the filler particles are incorporated as active members of the percolated backbone that constitutes initial setting of a cement-based system, replacements of up to 50 % of the OPC by either of these powders on a volumetric basis have minimal impact on the initial setting time, and even a paste with only 5 % OPC and 95 % limestone powder by volume achieves initial set within 24 h. While their influence on setting is similar, the limestone and silica powders produce pastes with quite different rheological properties when substituted at the same volume level. When proceeding from setting to later age strength development, one must also consider the dilution of the system due to cement removal, along with the solubility/reactivity of the filler. However, for applications where controlled (prompt) setting is more critical than developing high strengths, such as mortar tile adhesives, grouts, and renderings, significant levels of these powder replacements for cement can serve as sustainable, functional alternatives to the oft-employed 100 % OPC products.

  11. Advanced Interactive Display Formats for Terminal Area Traffic Control

    NASA Technical Reports Server (NTRS)

    Grunwald, Arthur J.; Shaviv, G. E.

    1999-01-01

This research project deals with an on-line dynamic method for automated viewing parameter management in perspective displays. Perspective images are optimized such that a human observer will perceive relevant spatial geometrical features with minimal errors. In order to compute the errors at which observers reconstruct spatial features from perspective images, a visual spatial-perception model was formulated. The model was employed as the basis of an optimization scheme aimed at seeking the optimal projection parameter setting. These ideas are implemented in the context of an air traffic control (ATC) application. A concept, referred to as an active display system, was developed. This system uses heuristic rules to identify relevant geometrical features of the three-dimensional air traffic situation. Agile, on-line optimization was achieved by a specially developed and custom-tailored genetic algorithm (GA), designed to deal with the multi-modal characteristics of the objective function and exploit its time-evolving nature.
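A genetic algorithm's appeal for multi-modal objectives like the one described is that selection plus mutation can escape local optima that a gradient method would get stuck in. Below is a generic real-coded GA sketch (tournament selection, blend crossover, Gaussian mutation with decaying step size, elitism) applied to a deliberately multi-modal toy function; it is not the paper's custom-tailored GA, and all parameters are illustrative.

```python
import math
import random

def genetic_maximize(f, lo, hi, pop_size=60, gens=80, seed=1):
    """Minimal real-coded GA: tournament selection, blend crossover,
    Gaussian mutation with a shrinking step size, plus elitism."""
    rng = random.Random(seed)
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    best = max(pop, key=f)
    for g in range(gens):
        sigma = 2.0 * 0.96 ** g                 # broad search early, fine late
        new = []
        for _ in range(pop_size):
            a = max(rng.sample(pop, 3), key=f)  # tournament selection
            b = max(rng.sample(pop, 3), key=f)
            w = rng.random()
            child = w * a + (1.0 - w) * b       # blend crossover
            child += rng.gauss(0.0, sigma)      # Gaussian mutation
            new.append(min(max(child, lo), hi))
        new[0] = best                           # elitism: keep the incumbent
        pop = new
        best = max(pop, key=f)
    return best

def objective(x):
    # multi-modal toy objective: global maximum f(0) = 1.5 among local peaks
    return math.cos(3.0 * x) + 0.5 * math.cos(0.5 * x)

best = genetic_maximize(objective, -10.0, 10.0)
```

The decaying mutation width plays the role the abstract assigns to agility: wide exploration while the objective is still being mapped, fine exploitation near the incumbent later.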

  12. A comparative study of minimum norm inverse methods for MEG imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leahy, R.M.; Mosher, J.C.; Phillips, J.W.

    1996-07-01

The majority of MEG imaging techniques currently in use fall into the general class of (weighted) minimum norm methods. The minimization of a norm is used as the basis for choosing one from a generally infinite set of solutions that provide an equally good fit to the data. This ambiguity in the solution arises from the inherent non-uniqueness of the continuous inverse problem and is compounded by the imbalance between the relatively small number of measurements and the large number of source voxels. Here we present a unified view of the minimum norm methods and describe how we can use Tikhonov regularization to avoid instabilities in the solutions due to noise. We then compare the performance of regularized versions of three well known linear minimum norm methods with the non-linear iteratively reweighted minimum norm method and a Bayesian approach.
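The linear minimum norm estimate with Tikhonov regularization has a closed form, sketched below for a toy underdetermined "sensors vs. source voxels" system. The dimensions and data are illustrative, not an MEG forward model.

```python
import numpy as np

def tikhonov_min_norm(A, b, lam):
    """Tikhonov-regularized minimum-norm estimate for the underdetermined
    system A x = b:  x = A^T (A A^T + lam * I)^(-1) b.
    With lam = 0 this reduces to the classical minimum-norm solution."""
    m = A.shape[0]
    return A.T @ np.linalg.solve(A @ A.T + lam * np.eye(m), b)

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 50))       # 5 "sensors", 50 "source voxels"
x_true = np.zeros(50)
x_true[[3, 17]] = 1.0                  # two active sources
b = A @ x_true
x_mn  = tikhonov_min_norm(A, b, 0.0)   # fits the data exactly, noise-sensitive
x_reg = tikhonov_min_norm(A, b, 1e-2)  # damped: traded fit for stability
```

The regularized estimate has a smaller norm than the unregularized one, which is exactly the stabilizing trade-off the abstract describes for noisy data.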

  13. Resolving the paradigm crisis in intravenous iron and erythropoietin management.

    PubMed

    Besarab, A

    2006-05-01

    Despite the proven benefits of intravenous (i.v.) iron therapy in anemia management, it remains underutilized in the hemodialysis population. Although overall i.v. iron usage continues to increase slowly, monthly usage statistics compiled by the US Renal Data System suggest that clinicians are not implementing continued dosing regimens following repletion of iron stores. Continued therapy with i.v. iron represents a key opportunity to improve patient outcomes and increase the efficiency of anemia treatment. Regular administration of low doses of i.v. iron prevents the recurrence of iron deficiency, enhances response to recombinant human erythropoietin therapy, minimizes fluctuation of hemoglobin levels, hematocrit levels, and iron stores, and may reduce overall costs of care. This article reviews the importance of i.v. iron dosing on a regular basis in the hemodialysis patient with iron-deficiency anemia and explores reasons why some clinicians may still be reluctant to employ these protocols in the hemodialysis setting.

  14. Comparison Analysis of Recognition Algorithms of Forest-Cover Objects on Hyperspectral Air-Borne and Space-Borne Images

    NASA Astrophysics Data System (ADS)

    Kozoderov, V. V.; Kondranin, T. V.; Dmitriev, E. V.

    2017-12-01

The basic model for the recognition of natural and anthropogenic objects using their spectral and textural features is described in the problem of hyperspectral air-borne and space-borne imagery processing. The model is based on improvements of the Bayesian classifier that is a computational procedure of statistical decision making in machine-learning methods of pattern recognition. The principal component method is implemented to decompose the hyperspectral measurements on the basis of empirical orthogonal functions. Application examples are shown of various modifications of the Bayesian classifier and Support Vector Machine method. Examples are provided of comparing these classifiers and a metrical classifier that operates on finding the minimal Euclidean distance between different points and sets in the multidimensional feature space. A comparison is also carried out with the "K-weighted neighbors" method that is close to the nonparametric Bayesian classifier.
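The "metrical classifier" mentioned, assignment by minimal Euclidean distance in feature space, can be sketched in a few lines. The class names and feature vectors below are hypothetical toy values standing in for spectral/textural features.

```python
import math

def euclid(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def min_distance_classify(sample, class_means):
    """Metrical classifier: assign the class whose mean feature vector is
    nearest (in Euclidean distance) to the sample."""
    return min(class_means, key=lambda c: euclid(sample, class_means[c]))

# toy 2-D feature space (e.g. two principal components of hyperspectral data)
means = {"pine": [0.2, 0.8], "birch": [0.7, 0.3]}
label = min_distance_classify([0.25, 0.75], means)
```

Unlike the Bayesian classifier, this rule ignores class covariances entirely, which is why the paper treats it as a baseline for comparison.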

  15. Rapid radiofrequency field mapping in vivo using single-shot STEAM MRI.

    PubMed

    Helms, Gunther; Finsterbusch, Jürgen; Weiskopf, Nikolaus; Dechent, Peter

    2008-09-01

Higher field strengths entail less homogeneous RF fields. This may influence quantitative MRI and MRS. A method for rapidly mapping the RF field in the human head with minimal distortion was developed on the basis of a single-shot stimulated echo acquisition mode (STEAM) sequence. The flip angle of the second RF pulse in the STEAM preparation was set to 60 degrees and 100 degrees instead of 90 degrees, inducing a flip angle-dependent signal change. A quadratic approximation of this trigonometric signal dependence together with a calibration accounting for slice excitation-related bias allowed for directly determining the RF field from the two measurements only. RF maps down to the level of the medulla could be obtained in less than 1 min and registered to anatomical volumes by means of the T(2)-weighted STEAM images. Flip angles between 75% and 125% of the nominal value were measured in line with other methods.
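The principle of recovering the local RF scale from two preparations with different nominal flip angles can be sketched as follows. This simplified model assumes the stimulated-echo signal scales as the sine of the second pulse's flip angle and solves the resulting ratio equation by bisection; the paper instead uses a quadratic approximation plus a slice-excitation calibration, so this is an illustration of the idea only.

```python
import math

def ratio(f):
    """Signal ratio of the 60-degree to the 100-degree preparation when the
    true flip angles are f times nominal (toy sin-dependence model)."""
    return math.sin(math.radians(60.0 * f)) / math.sin(math.radians(100.0 * f))

def solve_scale(r, lo=0.5, hi=1.4, iters=60):
    """Bisection for the RF scale f with ratio(f) = r; on this interval the
    ratio is monotonically increasing, so the root is unique."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if ratio(mid) < r:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

f_true = 0.9                       # RF field at 90% of nominal
f_est = solve_scale(ratio(f_true)) # recover the scale from the two "signals"
```

Two measurements suffice because, under any monotonic signal model, their ratio pins down the single unknown scale factor.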

  16. The origin of cellular life.

    PubMed

    Ingber, D E

    2000-12-01

    This essay presents a scenario of the origin of life that is based on analysis of biological architecture and mechanical design at the microstructural level. My thesis is that the same architectural and energetic constraints that shape cells today also guided the evolution of the first cells and that the molecular scaffolds that support solid-phase biochemistry in modern cells represent living microfossils of past life forms. This concept emerged from the discovery that cells mechanically stabilize themselves using tensegrity architecture and that these same building rules guide hierarchical self-assembly at all size scales (Sci. Amer 278:48-57;1998). When combined with other fundamental design principles (e.g., energy minimization, topological constraints, structural hierarchies, autocatalytic sets, solid-state biochemistry), tensegrity provides a physical basis to explain how atomic and molecular elements progressively self-assembled to create hierarchical structures with increasingly complex functions, including living cells that can self-reproduce.

  17. The origin of cellular life

    NASA Technical Reports Server (NTRS)

    Ingber, D. E.

    2000-01-01

    This essay presents a scenario of the origin of life that is based on analysis of biological architecture and mechanical design at the microstructural level. My thesis is that the same architectural and energetic constraints that shape cells today also guided the evolution of the first cells and that the molecular scaffolds that support solid-phase biochemistry in modern cells represent living microfossils of past life forms. This concept emerged from the discovery that cells mechanically stabilize themselves using tensegrity architecture and that these same building rules guide hierarchical self-assembly at all size scales (Sci. Amer 278:48-57;1998). When combined with other fundamental design principles (e.g., energy minimization, topological constraints, structural hierarchies, autocatalytic sets, solid-state biochemistry), tensegrity provides a physical basis to explain how atomic and molecular elements progressively self-assembled to create hierarchical structures with increasingly complex functions, including living cells that can self-reproduce.

  18. Casemix Funding Optimisation: Working Together to Make the Most of Every Episode.

    PubMed

    Uzkuraitis, Carly; Hastings, Karen; Torney, Belinda

    2010-10-01

    Eastern Health, a large public Victorian Healthcare network, conducted a WIES optimisation audit across the casemix-funded sites for separations in the 2009/2010 financial year. The audit was conducted using existing staff resources and resulted in a significant increase in casemix funding at a minimal cost. The audit showcased the skill set of existing staff and resulted in enormous benefits to the coding and casemix team by demonstrating the value of the combination of skills that makes clinical coders unique. The development of an internal web-based application allowed accurate and timely reporting of the audit results, providing the basis for a restructure of the coding and casemix service, along with approval for additional staffing resources and inclusion of a regular auditing program to focus on the creation of high quality data for research, health services management and financial reimbursement.

  19. Computing Finite-Time Lyapunov Exponents with Optimally Time Dependent Reduction

    NASA Astrophysics Data System (ADS)

    Babaee, Hessam; Farazmand, Mohammad; Sapsis, Themis; Haller, George

    2016-11-01

We present a method to compute Finite-Time Lyapunov Exponents (FTLE) of a dynamical system using the Optimally Time-Dependent (OTD) reduction recently introduced by H. Babaee and T. P. Sapsis. The OTD modes are a set of finite-dimensional, time-dependent, orthonormal basis elements {u_i(x, t)}, i = 1, ..., N, that capture the directions associated with transient instabilities. The evolution equation of the OTD modes is derived from a minimization principle that optimally approximates the most unstable directions over finite times. To compute the FTLE, we evolve a single OTD mode along with the nonlinear dynamics. We approximate the FTLE from the reduced system obtained from projecting the instantaneous linearized dynamics onto the OTD mode. This results in a significant reduction in the computational cost compared to conventional methods for computing FTLE. We demonstrate the efficiency of our method for the double-gyre and ABC flows. ARO project 66710-EG-YIP.
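The "conventional method" the OTD approach is compared against computes the FTLE from a finite-difference flow-map Jacobian. A minimal sketch on a linear saddle flow (chosen because its FTLE is analytically 1) is given below; the double-gyre case works the same way with a different right-hand side.

```python
import math

def flow_map(x, y, T, dt=1e-3):
    """Integrate the linear saddle dx/dt = x, dy/dt = -y with explicit Euler."""
    for _ in range(int(T / dt)):
        x, y = x + dt * x, y - dt * y
    return x, y

def ftle(x, y, T, h=1e-4):
    """FTLE = (1/T) ln(largest singular value of the flow-map Jacobian),
    with the Jacobian estimated by central finite differences."""
    fxp = flow_map(x + h, y, T); fxm = flow_map(x - h, y, T)
    fyp = flow_map(x, y + h, T); fym = flow_map(x, y - h, T)
    J = [[(fxp[0] - fxm[0]) / (2 * h), (fyp[0] - fym[0]) / (2 * h)],
         [(fxp[1] - fxm[1]) / (2 * h), (fyp[1] - fym[1]) / (2 * h)]]
    # Cauchy-Green tensor C = J^T J; largest eigenvalue via the 2x2 formula
    a = J[0][0] ** 2 + J[1][0] ** 2
    b = J[0][0] * J[0][1] + J[1][0] * J[1][1]
    c = J[0][1] ** 2 + J[1][1] ** 2
    lam_max = 0.5 * (a + c + math.sqrt((a - c) ** 2 + 4.0 * b * b))
    return math.log(math.sqrt(lam_max)) / T

val = ftle(0.3, 0.4, T=2.0)
```

The cost of this route grows with the number of perturbed trajectories per grid point, which is precisely the expense the single-OTD-mode projection is designed to avoid.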

  20. [Dynamics of the dialogue on bioethics in a Spain in transition].

    PubMed

    Abel, F

    1990-01-01

    The bioethics dialogue began in Spain in 1975 in private institutions and developed in a society in transition toward democracy. Nostalgia for a nationalist Catholicism by some and the fervor of others to demonstrate that a break with the past had taken place have been important factors in bioethics legislation. Imitation of legislation considered progressive prevailed in the debate taking place in the country's bioethics centers, although in the case of assisted reproduction a commission of experts was set up to advise the government. The public has not participated in the debates, despite their coverage by the communications media. The medical schools have attempted to reform the deontological codes as a basis for formulating, promoting, and protecting the values of a pluralistic society. Results have been minimal, but the work of the bioethics centers is gradually being recognized and evaluated, and it is hoped that this ongoing bioethical dialogue will gradually mature.

  1. The structure of a thermophilic kinase shapes fitness upon random circular permutation

    PubMed Central

    Jones, Alicia M.; Mehta, Manan M.; Thomas, Emily E.; Atkinson, Joshua T.; Segall-Shapiro, Thomas H.; Liu, Shirley; Silberg, Jonathan J.

    2016-01-01

Proteins can be engineered for synthetic biology through circular permutation, a sequence rearrangement where native protein termini become linked and new termini are created elsewhere through backbone fission. However, it remains challenging to anticipate a protein’s functional tolerance to circular permutation. Here, we describe new transposons for creating libraries of randomly circularly permuted proteins that minimize peptide additions at their termini, and we use transposase mutagenesis to study the tolerance of a thermophilic adenylate kinase (AK) to circular permutation. We find that libraries expressing permuted AKs with either short or long peptides amended to their N-terminus yield distinct sets of active variants and present evidence that this trend arises because permuted protein expression varies across libraries. Mapping all sites that tolerate backbone cleavage onto AK structure reveals that the largest contiguous regions of sequence that lack cleavage sites are proximal to the phosphotransfer site. A comparison of our results with a range of structure-derived parameters further showed that retention of function correlates to the strongest extent with the distance to the phosphotransfer site, amino acid variability in an AK family sequence alignment, and residue-level deviations in superimposed AK structures. Our work illustrates how permuted protein libraries can be created with minimal peptide additions using transposase mutagenesis, and it reveals a challenge of maintaining consistent expression across permuted variants in a library that minimizes peptide additions. Furthermore, these findings provide a basis for interpreting responses of thermophilic phosphotransferases to circular permutation by calibrating how different structure-derived parameters relate to retention of function in a cellular selection. PMID:26976658

  2. The Structure of a Thermophilic Kinase Shapes Fitness upon Random Circular Permutation.

    PubMed

    Jones, Alicia M; Mehta, Manan M; Thomas, Emily E; Atkinson, Joshua T; Segall-Shapiro, Thomas H; Liu, Shirley; Silberg, Jonathan J

    2016-05-20

    Proteins can be engineered for synthetic biology through circular permutation, a sequence rearrangement in which native protein termini become linked and new termini are created elsewhere through backbone fission. However, it remains challenging to anticipate a protein's functional tolerance to circular permutation. Here, we describe new transposons for creating libraries of randomly circularly permuted proteins that minimize peptide additions at their termini, and we use transposase mutagenesis to study the tolerance of a thermophilic adenylate kinase (AK) to circular permutation. We find that libraries expressing permuted AKs with either short or long peptides amended to their N-terminus yield distinct sets of active variants and present evidence that this trend arises because permuted protein expression varies across libraries. Mapping all sites that tolerate backbone cleavage onto AK structure reveals that the largest contiguous regions of sequence that lack cleavage sites are proximal to the phosphotransfer site. A comparison of our results with a range of structure-derived parameters further showed that retention of function correlates to the strongest extent with the distance to the phosphotransfer site, amino acid variability in an AK family sequence alignment, and residue-level deviations in superimposed AK structures. Our work illustrates how permuted protein libraries can be created with minimal peptide additions using transposase mutagenesis, and it reveals a challenge of maintaining consistent expression across permuted variants in a library that minimizes peptide additions. Furthermore, these findings provide a basis for interpreting responses of thermophilic phosphotransferases to circular permutation by calibrating how different structure-derived parameters relate to retention of function in a cellular selection.
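At the sequence level, circular permutation as described in these two records is a simple rotation: the native termini are joined (optionally through a linker) and new termini are opened at each backbone position. A minimal sketch, with a hypothetical toy sequence:

```python
def circular_permutants(seq, linker=""):
    """All circular permutations of a protein sequence: join the native
    termini (optionally through a linker peptide) and open new termini at
    every backbone position of the joined sequence."""
    joined = seq + linker
    return [joined[i:] + joined[:i] for i in range(len(joined))]

perms = circular_permutants("MKVLAT")   # toy sequence, not an AK
```

A library built this way has exactly one variant per backbone position, which is what lets the authors map tolerated cleavage sites back onto the AK structure.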

  3. Systems biology definition of the core proteome of metabolism and expression is consistent with high-throughput data.

    PubMed

    Yang, Laurence; Tan, Justin; O'Brien, Edward J; Monk, Jonathan M; Kim, Donghyuk; Li, Howard J; Charusanti, Pep; Ebrahim, Ali; Lloyd, Colton J; Yurkovich, James T; Du, Bin; Dräger, Andreas; Thomas, Alex; Sun, Yuekai; Saunders, Michael A; Palsson, Bernhard O

    2015-08-25

    Finding the minimal set of gene functions needed to sustain life is of both fundamental and practical importance. Minimal gene lists have been proposed by using comparative genomics-based core proteome definitions. A definition of a core proteome that is supported by empirical data, is understood at the systems-level, and provides a basis for computing essential cell functions is lacking. Here, we use a systems biology-based genome-scale model of metabolism and expression to define a functional core proteome consisting of 356 gene products, accounting for 44% of the Escherichia coli proteome by mass based on proteomics data. This systems biology core proteome includes 212 genes not found in previous comparative genomics-based core proteome definitions, accounts for 65% of known essential genes in E. coli, and has 78% gene function overlap with minimal genomes (Buchnera aphidicola and Mycoplasma genitalium). Based on transcriptomics data across environmental and genetic backgrounds, the systems biology core proteome is significantly enriched in nondifferentially expressed genes and depleted in differentially expressed genes. Compared with the noncore, core gene expression levels are also similar across genetic backgrounds (two times higher Spearman rank correlation) and exhibit significantly more complex transcriptional and posttranscriptional regulatory features (40% more transcription start sites per gene, 22% longer 5'UTR). Thus, genome-scale systems biology approaches rigorously identify a functional core proteome needed to support growth. This framework, validated by using high-throughput datasets, facilitates a mechanistic understanding of systems-level core proteome function through in silico models; it de facto defines a paleome.

  4. Atomic Cholesky decompositions: a route to unbiased auxiliary basis sets for density fitting approximation with tunable accuracy and efficiency.

    PubMed

    Aquilante, Francesco; Gagliardi, Laura; Pedersen, Thomas Bondo; Lindh, Roland

    2009-04-21

    Cholesky decomposition of the atomic two-electron integral matrix has recently been proposed as a procedure for automated generation of auxiliary basis sets for the density fitting approximation [F. Aquilante et al., J. Chem. Phys. 127, 114107 (2007)]. In order to increase computational performance while maintaining accuracy, we propose here to reduce the number of primitive Gaussian functions of the contracted auxiliary basis functions by means of a second Cholesky decomposition. Test calculations show that this procedure is most beneficial in conjunction with highly contracted atomic orbital basis sets such as atomic natural orbitals, and that the error resulting from the second decomposition is negligible. We also demonstrate theoretically as well as computationally that the locality of the fitting coefficients can be controlled by means of the decomposition threshold even with the long-ranged Coulomb metric. Cholesky decomposition-based auxiliary basis sets are thus ideally suited for local density fitting approximations.
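
    The decomposition step described above can be illustrated generically. The sketch below is a threshold-pivoted incomplete Cholesky factorization in NumPy applied to an arbitrary symmetric positive semidefinite matrix, not the authors' integral code; the threshold `tau` plays the role of the decomposition threshold that trades accuracy against the number of retained columns.

```python
import numpy as np

def pivoted_cholesky(M, tau=1e-8):
    """Threshold-pivoted incomplete Cholesky: returns L with M ~= L @ L.T.

    Columns are generated in order of the largest remaining diagonal
    (residual) element; the loop stops once that element drops below tau,
    so tau directly controls the accuracy/size trade-off.
    """
    M = np.array(M, dtype=float)
    n = M.shape[0]
    L = np.zeros((n, n))
    d = np.diag(M).copy()   # residual diagonal
    k = 0
    while k < n:
        p = int(np.argmax(d))
        if d[p] < tau:      # remaining error below threshold: stop
            break
        L[:, k] = (M[:, p] - L[:, :k] @ L[p, :k]) / np.sqrt(d[p])
        d -= L[:, k] ** 2
        k += 1
    return L[:, :k]
```

    For a rank-deficient matrix the factor stops at the numerical rank, which is the property that keeps threshold-generated auxiliary sets compact.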

  5. Atomic Cholesky decompositions: A route to unbiased auxiliary basis sets for density fitting approximation with tunable accuracy and efficiency

    NASA Astrophysics Data System (ADS)

    Aquilante, Francesco; Gagliardi, Laura; Pedersen, Thomas Bondo; Lindh, Roland

    2009-04-01

    Cholesky decomposition of the atomic two-electron integral matrix has recently been proposed as a procedure for automated generation of auxiliary basis sets for the density fitting approximation [F. Aquilante et al., J. Chem. Phys. 127, 114107 (2007)]. In order to increase computational performance while maintaining accuracy, we propose here to reduce the number of primitive Gaussian functions of the contracted auxiliary basis functions by means of a second Cholesky decomposition. Test calculations show that this procedure is most beneficial in conjunction with highly contracted atomic orbital basis sets such as atomic natural orbitals, and that the error resulting from the second decomposition is negligible. We also demonstrate theoretically as well as computationally that the locality of the fitting coefficients can be controlled by means of the decomposition threshold even with the long-ranged Coulomb metric. Cholesky decomposition-based auxiliary basis sets are thus ideally suited for local density fitting approximations.

  6. The forecast for RAC extrapolation: mostly cloudy.

    PubMed

    Goldman, Elizabeth; Jacobs, Robert; Scott, Ellen; Scott, Bonnie

    2011-09-01

    The current statutory and regulatory guidance for recovery audit contractor (RAC) extrapolation leaves providers with minimal protection against the process and a limited ability to challenge overpayment demands. Providers not only should understand the statutory and regulatory basis for extrapolation, but also should be able to assess their extrapolation risk and their recourse through regulatory safeguards against contractor error. Providers also should aggressively appeal all incorrect RAC denials to minimize the potential impact of extrapolation.

  7. Systematically convergent basis sets for transition metals. I. All-electron correlation consistent basis sets for the 3d elements Sc-Zn

    NASA Astrophysics Data System (ADS)

    Balabanov, Nikolai B.; Peterson, Kirk A.

    2005-08-01

    Sequences of basis sets that systematically converge towards the complete basis set (CBS) limit have been developed for the first-row transition metal elements Sc-Zn. Two families of basis sets, nonrelativistic and Douglas-Kroll-Hess (-DK) relativistic, are presented that range in quality from triple-ζ to quintuple-ζ. Separate sets are developed for the description of valence (3d4s) electron correlation (cc-pVnZ and cc-pVnZ-DK; n = T, Q, 5) and valence plus outer-core (3s3p3d4s) correlation (cc-pwCVnZ and cc-pwCVnZ-DK; n = T, Q, 5), as well as these sets augmented by additional diffuse functions for the description of negative ions and weak interactions (aug-cc-pVnZ and aug-cc-pVnZ-DK). Extensive benchmark calculations at the coupled cluster level of theory are presented for atomic excitation energies, ionization potentials, and electron affinities, as well as molecular calculations on selected hydrides (TiH, MnH, CuH) and other diatomics (TiF, Cu2). In addition to observing systematic convergence towards the CBS limits, both 3s3p electron correlation and scalar relativity are calculated to strongly impact many of the atomic and molecular properties investigated for these first-row transition metal species.

  8. Gaussian basis sets for use in correlated molecular calculations. XI. Pseudopotential-based and all-electron relativistic basis sets for alkali metal (K-Fr) and alkaline earth (Ca-Ra) elements

    NASA Astrophysics Data System (ADS)

    Hill, J. Grant; Peterson, Kirk A.

    2017-12-01

    New correlation consistent basis sets based on pseudopotential (PP) Hamiltonians have been developed from double- to quintuple-zeta quality for the late alkali (K-Fr) and alkaline earth (Ca-Ra) metals. These are accompanied by new all-electron basis sets of double- to quadruple-zeta quality that have been contracted for use with both Douglas-Kroll-Hess (DKH) and eXact 2-Component (X2C) scalar relativistic Hamiltonians. Sets for valence correlation (ms), cc-pVnZ-PP and cc-pVnZ-(DK,DK3/X2C), in addition to outer-core correlation [valence + (m-1)sp], cc-p(w)CVnZ-PP and cc-pwCVnZ-(DK,DK3/X2C), are reported. The -PP sets have been developed for use with small-core PPs [I. S. Lim et al., J. Chem. Phys. 122, 104103 (2005) and I. S. Lim et al., J. Chem. Phys. 124, 034107 (2006)], while the all-electron sets utilized second-order DKH Hamiltonians for 4s and 5s elements and third-order DKH for 6s and 7s. The accuracy of the basis sets is assessed through benchmark calculations at the coupled-cluster level of theory for both atomic and molecular properties. Not surprisingly, it is found that outer-core correlation is vital for accurate calculation of the thermodynamic and spectroscopic properties of diatomic molecules containing these elements.

  9. Spatial chaos of Wang tiles with two symbols

    NASA Astrophysics Data System (ADS)

    Chen, Jin-Yu; Chen, Yu-Jie; Hu, Wen-Guei; Lin, Song-Sun

    2016-02-01

    This investigation completely classifies the spatial chaos problem in plane edge coloring (Wang tiles) with two symbols. For a set of Wang tiles B, spatial chaos occurs when the spatial entropy h(B) is positive. B is called a minimal cycle generator if P(B) ≠ ∅ and P(B′) = ∅ whenever B′ ⊊ B, where P(B) is the set of all periodic patterns on ℤ² generated by B. Given a set of Wang tiles B, write B = C1 ∪ C2 ∪ ⋯ ∪ Ck ∪ N, where the Cj, 1 ≤ j ≤ k, are minimal cycle generators and B contains no minimal cycle generator except those contained in C1 ∪ C2 ∪ ⋯ ∪ Ck. Then, the positivity of the spatial entropy h(B) is completely determined by C1 ∪ C2 ∪ ⋯ ∪ Ck. Furthermore, there are 39 equivalence classes of marginal positive-entropy (MPE) sets of Wang tiles and 18 equivalence classes of saturated zero-entropy (SZE) sets of Wang tiles. For a set of Wang tiles B, h(B) is positive if and only if B contains an MPE set, and h(B) is zero if and only if B is a subset of an SZE set.
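
    The object P(B) can be made concrete with a brute-force check. The sketch below is a hypothetical illustration, not code from the paper: each Wang tile is encoded as a tuple (top, right, bottom, left) of edge symbols, and the function tests whether the set admits a doubly periodic pattern of period m × n, i.e. a valid edge-matched tiling of an m-by-n torus, which witnesses that P(B) is nonempty.

```python
from itertools import product

def tiles_admit_torus(B, m, n):
    """Brute-force test: can tile set B tile an m-by-n torus?

    B is a list of tuples (top, right, bottom, left). A valid periodic
    pattern assigns a tile to every cell so that each right edge matches
    the left edge of the cell to its right, and each bottom edge matches
    the top edge of the cell below, with wraparound.
    """
    cells = [(i, j) for i in range(m) for j in range(n)]
    for assign in product(range(len(B)), repeat=m * n):
        grid = {c: B[a] for c, a in zip(cells, assign)}
        if all(
            grid[(i, j)][1] == grid[(i, (j + 1) % n)][3]      # right/left
            and grid[(i, j)][2] == grid[((i + 1) % m, j)][0]  # bottom/top
            for i, j in cells
        ):
            return True
    return False
```

    A single tile whose opposite edges match is the smallest possible minimal cycle generator; the brute force is exponential in m·n and is only meant for tiny examples.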

  10. Ab initio calculation of reaction energies. III. Basis set dependence of relative energies on the FH2 and H2CO potential energy surfaces

    NASA Astrophysics Data System (ADS)

    Frisch, Michael J.; Binkley, J. Stephen; Schaefer, Henry F., III

    1984-08-01

    The relative energies of the stationary points on the FH2 and H2CO nuclear potential energy surfaces relevant to the hydrogen atom abstraction, H2 elimination and 1,2-hydrogen shift reactions have been examined using fourth-order Møller-Plesset perturbation theory and a variety of basis sets. The theoretical absolute zero activation energy for the F+H2→FH+H reaction is in better agreement with experiment than previous theoretical studies, and part of the disagreement between earlier theoretical calculations and experiment is found to result from the use of assumed rather than calculated zero-point vibrational energies. The fourth-order reaction energy for the elimination of hydrogen from formaldehyde is within 2 kcal mol-1 of the experimental value using the largest basis set considered. The qualitative features of the H2CO surface are unchanged by expansion of the basis set beyond the polarized triple-zeta level, but diffuse functions and several sets of polarization functions are found to be necessary for quantitative accuracy in predicted reaction and activation energies. Basis sets and levels of perturbation theory which represent good compromises between computational efficiency and accuracy are recommended.

  11. Calculations of molecular multipole electric moments of a series of exo-unsaturated four-membered heterocycles, Y = CCH2CH2X

    NASA Astrophysics Data System (ADS)

    Romero, Angel H.

    2017-10-01

    The influence of the ring puckering angle on the multipole moments of sixteen four-membered heterocycles (1-16) was estimated theoretically using MP2 and different DFT functionals in combination with the 6-31+G(d,p) basis set. To obtain an accurate evaluation, calculations at the CCSD/cc-pVDZ level, and with the MP2 and PBE1PBE methods in combination with the aug-cc-pVDZ and aug-cc-pVTZ basis sets, were performed on the planar geometries of 1-16. In general, the DFT and MP2 approaches provided an identical dependence of the electrical properties on the puckering angle for 1-16. Quantitatively, the quality of the level of theory and basis set significantly affects the predictions of the multipole moments, in particular for the heterocycles containing C=O and C=S bonds. Within the MP2 and PBE1PBE approximations, basis set convergence is reached in the dipole moment calculations when the aug-cc-pVTZ basis set is used, while the quadrupole and octupole moment computations require a basis set larger than aug-cc-pVTZ. On the other hand, the multipole moments showed a strong dependence on the molecular geometry and the nature of the carbon-heteroatom bonds. Specifically, the C-X bond determines the behavior of the μ(ϕ), θ(ϕ) and Ω(ϕ) functions, while the C=Y bond plays an important role in the magnitude of the studied properties.
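
    For orientation, the quantities under study can be sketched with their classical point-charge definitions (the paper evaluates quantum-mechanical expectation values; the code below only illustrates the textbook formulas μ = Σᵢ qᵢrᵢ and the traceless quadrupole Θ_ab = ½ Σᵢ qᵢ(3 r_ia r_ib − rᵢ² δ_ab)):

```python
import numpy as np

def dipole(q, r):
    """Dipole moment mu = sum_i q_i r_i for point charges q at positions r."""
    q = np.asarray(q, dtype=float)
    r = np.asarray(r, dtype=float)
    return q @ r

def quadrupole(q, r):
    """Traceless quadrupole tensor
    Theta_ab = 0.5 * sum_i q_i * (3 r_ia r_ib - |r_i|^2 delta_ab)."""
    q = np.asarray(q, dtype=float)
    r = np.asarray(r, dtype=float)
    r2 = np.sum(r * r, axis=1)
    theta = 1.5 * np.einsum("i,ia,ib->ab", q, r, r)
    theta -= 0.5 * np.dot(q, r2) * np.eye(3)
    return theta
```

    For a linear quadrupole (+1, −2, +1 along z) the dipole vanishes while the quadrupole does not, which is the kind of distinction the higher moments capture.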

  12. Minimizing the Total Service Time of Discrete Dynamic Berth Allocation Problem by an Iterated Greedy Heuristic

    PubMed Central

    2014-01-01

    Berth allocation is the forefront operation performed when ships arrive at a port and is a critical task in container port optimization. Minimizing the time ships spend at berths constitutes an important objective of berth allocation problems. This study focuses on the discrete dynamic berth allocation problem (discrete DBAP), which aims to minimize total service time, and proposes an iterated greedy (IG) algorithm to solve it. The proposed IG algorithm is tested on three benchmark problem sets. Experimental results show that the proposed IG algorithm can obtain optimal solutions for all test instances of the first and second problem sets and outperforms the best-known solutions for 35 out of 90 test instances of the third problem set. PMID:25295295
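
    The structure of an iterated greedy heuristic for a toy discrete DBAP can be sketched as follows. This is a minimal illustration with invented data structures (ship arrival times, berth-independent handling times), not the authors' algorithm: greedy cheapest insertion builds an initial schedule, then the IG loop repeatedly removes d random ships, reinserts them greedily, and keeps the candidate whenever total service time improves.

```python
import random

def total_service_time(berths, arrive, handle):
    """Sum over ships of (finish - arrival); each berth serves its
    assigned ships in sequence order."""
    total = 0.0
    for seq in berths:
        t = 0.0
        for s in seq:
            t = max(t, arrive[s]) + handle[s]   # wait if berth is busy
            total += t - arrive[s]
    return total

def insert_greedily(berths, ships, arrive, handle):
    """Place each ship at the berth/position giving the least cost increase."""
    for s in ships:
        best = None
        for b, seq in enumerate(berths):
            for pos in range(len(seq) + 1):
                seq.insert(pos, s)
                c = total_service_time(berths, arrive, handle)
                seq.pop(pos)
                if best is None or c < best[0]:
                    best = (c, b, pos)
        berths[best[1]].insert(best[2], s)
    return berths

def iterated_greedy(arrive, handle, n_berths, d=2, iters=100, seed=0):
    """IG loop: destruct (remove d random ships), reconstruct greedily,
    accept the candidate only when total service time improves."""
    rng = random.Random(seed)
    order = sorted(range(len(arrive)), key=arrive.__getitem__)
    best = insert_greedily([[] for _ in range(n_berths)], order, arrive, handle)
    best_cost = total_service_time(best, arrive, handle)
    for _ in range(iters):
        cand = [list(seq) for seq in best]
        removed = rng.sample(range(len(arrive)), min(d, len(arrive)))
        for seq in cand:
            seq[:] = [s for s in seq if s not in removed]
        cand = insert_greedily(cand, removed, arrive, handle)
        cost = total_service_time(cand, arrive, handle)
        if cost < best_cost:
            best, best_cost = cand, cost
    return best, best_cost
```

    The destruction/reconstruction pair is the defining move of IG; accepting only improvements keeps the sketch simple, whereas published IG variants often allow occasional worsening moves to escape local optima.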

  13. Model-based setting of inspiratory pressure and respiratory rate in pressure-controlled ventilation.

    PubMed

    Schranz, C; Becher, T; Schädler, D; Weiler, N; Möller, K

    2014-03-01

    Mechanical ventilation carries the risk of ventilator-induced lung injury (VILI). To minimize the risk of VILI, ventilator settings should be adapted to the individual patient's properties. Mathematical models of respiratory mechanics are able to capture the individual physiological condition and can be used to derive personalized ventilator settings. This paper presents model-based calculations of inspiration pressure (pI) and inspiration and expiration times (tI, tE) in pressure-controlled ventilation (PCV), and a retrospective evaluation of its results in a group of mechanically ventilated patients. Incorporating the identified first-order model of respiratory mechanics into the basic equation of alveolar ventilation yielded a nonlinear relation between ventilation parameters during PCV. Given this patient-specific relation, optimized settings in terms of minimal pI and adequate tE can be obtained. We then retrospectively analyzed data from 16 ICU patients with mixed pathologies, whose ventilation had been previously optimized by ICU physicians with the goal of minimizing inspiration pressure, and compared the algorithm's 'optimized' settings to the settings that had been chosen by the physicians. The presented algorithm visualizes the patient-specific relations between inspiration pressure and inspiration time. The algorithm's calculated results correlate highly with the physicians' ventilation settings, with r = 0.975 for the inspiration pressure and r = 0.902 for the inspiration time. The nonlinear patient-specific relations of ventilation parameters become transparent and support the determination of individualized ventilator settings according to therapeutic goals. Thus, the algorithm is feasible for a variety of ventilated ICU patients and has the potential of improving lung-protective ventilation by minimizing inspiratory pressures and by helping to avoid the build-up of clinically significant intrinsic positive end-expiratory pressure.
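
    Under a first-order model assumption, the relation between settings can be sketched explicitly. In the hypothetical fragment below, the tidal volume delivered by a pressure step pI over inspiration time tI is taken as VT = C·pI·(1 − e^(−tI/RC)) for a single compartment with resistance R and compliance C; this is a generic textbook relation, not the paper's exact algorithm, and the three-time-constant expiration rule is only a common heuristic for avoiding intrinsic PEEP.

```python
import math

def min_inspiratory_pressure(vt_target, R, C, t_insp):
    """Pressure above PEEP needed to deliver vt_target (L) in t_insp (s),
    assuming VT = C * pI * (1 - exp(-tI / (R*C))) for a single-compartment
    model with resistance R and compliance C."""
    tau = R * C
    return vt_target / (C * (1.0 - math.exp(-t_insp / tau)))

def expiration_sufficient(t_exp, R, C, n_tau=3.0):
    """Common rule of thumb: allow at least ~3 expiratory time constants
    so the lungs empty and intrinsic PEEP does not build up."""
    return t_exp >= n_tau * R * C
```

    Shortening tI raises the required pI for the same tidal volume, which is exactly the trade-off the paper's algorithm visualizes.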

  14. Straightening the Hierarchical Staircase for Basis Set Extrapolations: A Low-Cost Approach to High-Accuracy Computational Chemistry

    NASA Astrophysics Data System (ADS)

    Varandas, António J. C.

    2018-04-01

    Because the one-electron basis set limit is difficult to reach in correlated post-Hartree-Fock ab initio calculations, the low-cost route of using methods that extrapolate to the estimated basis set limit attracts immediate interest. The situation is somewhat more satisfactory at the Hartree-Fock level because numerical calculation of the energy is often affordable at nearly converged basis set levels. Still, extrapolation schemes for the Hartree-Fock energy are addressed here, although the focus is on the more slowly convergent and computationally demanding correlation energy. Because they are frequently based on the gold-standard coupled-cluster theory with single, double, and perturbative triple excitations [CCSD(T)], correlated calculations are often affordable only with the smallest basis sets, and hence single-level extrapolations from one raw energy could attain maximum usefulness. This possibility is examined. Whenever possible, this review uses raw data from second-order Møller-Plesset perturbation theory, as well as CCSD, CCSD(T), and multireference configuration interaction methods. Inescapably, the emphasis is on work done by the author's research group. Certain issues in need of further research or review are pinpointed.
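
    A minimal worked example of the kind of extrapolation discussed here is the standard two-point inverse-power formula, E(X) = E_CBS + A·X^(−α) with α = 3 for the correlation energy. This generic scheme is shown only for illustration; it is not Varandas's specific protocol.

```python
def cbs_two_point(e_x, x, e_y, y, alpha=3.0):
    """Two-point extrapolation assuming E(X) = E_CBS + A / X**alpha;
    alpha = 3 is the customary choice for the correlation energy."""
    wx, wy = x ** alpha, y ** alpha
    return (wx * e_x - wy * e_y) / (wx - wy)
```

    Given triple- and quadruple-zeta correlation energies, the two unknowns E_CBS and A are eliminated analytically; the single-level extrapolations discussed in the review push this further by fixing the decay law so that one raw energy suffices.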

  15. Risk-adjusted predictive models of mortality after index arterial operations using a minimal data set.

    PubMed

    Prytherch, D R; Ridler, B M F; Ashley, S

    2005-06-01

    Reducing the data required for a national vascular database (NVD) without compromising the statistical basis of comparative audit is an important goal. This work attempted to model outcomes (mortality and morbidity) from a small and simple subset of the NVD data items, specifically urea, sodium, potassium, haemoglobin, white cell count, age and mode of admission. Logistic regression models of risk of adverse outcome were built from the 2001 submission to the NVD using all records that contained the complete data required by the models. These models were applied prospectively against the equivalent data from the 2002 submission to the NVD. As had previously been found using the P-POSSUM (Portsmouth POSSUM) approach, although elective abdominal aortic aneurysm (AAA) repair and infrainguinal bypass (IIB) operations could be described by the same model, separate models were required for carotid endarterectomy (CEA) and emergency AAA repair. For CEA there were insufficient adverse events recorded to allow prospective testing of the models. The overall mean predicted risk of death in 530 patients undergoing elective AAA repair or IIB operations was 5.6 per cent, predicting 30 deaths; there were 28 reported deaths (χ² = 2.75, 4 d.f., P = 0.600; no evidence of lack of fit). Similarly accurate predictions were obtained across a range of predicted risks, as well as for patients undergoing repair of ruptured AAA and for morbidity. A 'data economic' model for risk stratification of national data is feasible. The ability to use a minimal data set may facilitate the process of comparative audit within the NVD.
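
    The modelling approach can be sketched generically. The fragment below fits a logistic regression by Newton's method on hypothetical predictor data and sums the fitted probabilities to obtain the expected number of deaths, mirroring the paper's comparison of predicted with observed mortality (the variables and data here are invented, not the NVD items):

```python
import numpy as np

def fit_logistic(X, y, iters=50):
    """Maximum-likelihood logistic regression via Newton's method;
    a column of ones is prepended for the intercept."""
    A = np.hstack([np.ones((len(X), 1)), X])
    w = np.zeros(A.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-A @ w))
        grad = A.T @ (y - p)                         # score vector
        hess = (A * (p * (1.0 - p))[:, None]).T @ A  # information matrix
        w += np.linalg.solve(hess + 1e-9 * np.eye(len(w)), grad)
    return w

def expected_deaths(X, w):
    """Sum of predicted risks = model-expected number of deaths."""
    A = np.hstack([np.ones((len(X), 1)), X])
    return float(np.sum(1.0 / (1.0 + np.exp(-A @ w))))
```

    With an intercept in the model, the maximum-likelihood fit makes expected and observed event counts agree exactly on the training data, which is why calibration (predicted 30 vs. observed 28 deaths) must be judged on prospective data, as the authors do.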

  16. An Energy-Based Three-Dimensional Segmentation Approach for the Quantitative Interpretation of Electron Tomograms

    PubMed Central

    Bartesaghi, Alberto; Sapiro, Guillermo; Subramaniam, Sriram

    2006-01-01

    Electron tomography allows for the determination of the three-dimensional structures of cells and tissues at resolutions significantly higher than that which is possible with optical microscopy. Electron tomograms contain, in principle, vast amounts of information on the locations and architectures of large numbers of subcellular assemblies and organelles. The development of reliable quantitative approaches for the analysis of features in tomograms is an important problem, and a challenging prospect due to the low signal-to-noise ratios that are inherent to biological electron microscopic images. This is, in part, a consequence of the tremendous complexity of biological specimens. We report on a new method for the automated segmentation of HIV particles and selected cellular compartments in electron tomograms recorded from fixed, plastic-embedded sections derived from HIV-infected human macrophages. Individual features in the tomogram are segmented using a novel robust algorithm that finds their boundaries as global minimal surfaces in a metric space defined by image features. The optimization is carried out in a transformed spherical domain with the center an interior point of the particle of interest, providing a proper setting for the fast and accurate minimization of the segmentation energy. This method provides tools for the semi-automated detection and statistical evaluation of HIV particles at different stages of assembly in the cells and presents opportunities for correlation with biochemical markers of HIV infection. The segmentation algorithm developed here forms the basis of the automated analysis of electron tomograms and will be especially useful given the rapid increases in the rate of data acquisition. It could also enable studies of much larger data sets, such as those which might be obtained from the tomographic analysis of HIV-infected cells from studies of large populations. PMID:16190467

  17. How do we assign punishment? The impact of minimal and maximal standards on the evaluation of deviants.

    PubMed

    Kessler, Thomas; Neumann, Jörg; Mummendey, Amélie; Berthold, Anne; Schubert, Thomas; Waldzus, Sven

    2010-09-01

    To explain the determinants of negative behavior toward deviants (e.g., punishment), this article examines how people evaluate others on the basis of two types of standards: minimal and maximal. Minimal standards focus on an absolute cutoff point for appropriate behavior; accordingly, the evaluation of others varies dichotomously between acceptable or unacceptable. Maximal standards focus on the degree of deviation from that standard; accordingly, the evaluation of others varies gradually from positive to less positive. This framework leads to the prediction that violation of minimal standards should elicit punishment regardless of the degree of deviation, whereas punishment in response to violations of maximal standards should depend on the degree of deviation. Four studies assessed or manipulated the type of standard and degree of deviation displayed by a target. Results consistently showed the expected interaction between type of standard (minimal and maximal) and degree of deviation on punishment behavior.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    McKemmish, Laura K., E-mail: laura.mckemmish@gmail.com; Research School of Chemistry, Australian National University, Canberra

    Algorithms for the efficient calculation of two-electron integrals in the newly developed mixed ramp-Gaussian basis sets are presented, alongside a Fortran90 implementation of these algorithms, RAMPITUP. These new basis sets have significant potential to (1) give some speed-up (estimated at up to 20% for large molecules in fully optimised code) to general-purpose Hartree-Fock (HF) and density functional theory quantum chemistry calculations, replacing all-Gaussian basis sets, and (2) give very large speed-ups for calculations of core-dependent properties, such as electron density at the nucleus, NMR parameters, relativistic corrections, and total energies, replacing the current use of Slater basis functions or very large specialised all-Gaussian basis sets for these purposes. This initial implementation already demonstrates roughly 10% speed-ups in HF/R-31G calculations compared to HF/6-31G calculations for large linear molecules, demonstrating the promise of this methodology, particularly for the second application. As well as the reduction in the total primitive number in R-31G compared to 6-31G, this timing advantage can be attributed to the significant reduction in the number of mathematically complex intermediate integrals after modelling each ramp-Gaussian basis-function pair as a sum of ramps on a single atomic centre.

  19. Quantum Mechanical Calculations of Monoxides of Silicon Carbide Molecules

    DTIC Science & Technology

    2003-03-01

    [Tables in the original report: final energies (hartree), electron affinities (eV), and zero-point energies (hartree), with and without ZPE corrections, for CO and linear O-C-Si species at several charge/multiplicity combinations using the DZV basis set.]

  20. Relativistic well-tempered Gaussian basis sets for helium through mercury. Breit interaction included

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Okada, S.; Shinada, M.; Matsuoka, O.

    1990-10-01

    A systematic calculation of new relativistic Gaussian basis sets is reported. The new basis sets are similar to the previously reported ones [J. Chem. Phys. 91, 4193 (1989)], but in the present calculation the Breit interaction has been explicitly included alongside the Dirac-Coulomb Hamiltonian. The sets have been adopted for the calculation of the self-consistent-field effect on the Breit interaction energies and are expected to be useful for studies of higher-order effects such as electron correlation and other quantum electrodynamical effects.

  1. Parallel Douglas-Kroll Energy and Gradients in NWChem. Estimating Scalar Relativistic Effects Using Douglas-Kroll Contracted Basis Sets.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    De Jong, Wibe A.; Harrison, Robert J.; Dixon, David A.

    A parallel implementation of the spin-free one-electron Douglas-Kroll(-Hess) Hamiltonian (DKH) in NWChem is discussed. An efficient and accurate method to calculate DKH gradients is introduced. It is shown that the use of standard (non-relativistic) contracted basis sets can produce erroneous results for elements beyond the first row. The generation of DKH-contracted cc-pVXZ (X = D, T, Q, 5) basis sets for H, He, B-Ne, Al-Ar, and Ga-Br will be discussed.

  2. Simultaneous estimation of ramipril, acetylsalicylic acid and atorvastatin calcium by chemometrics assisted UV-spectrophotometric method in capsules.

    PubMed

    Sankar, A S Kamatchi; Vetrichelvan, Thangarasu; Venkappaya, Devashya

    2011-09-01

    In the present work, three different spectrophotometric methods for the simultaneous estimation of ramipril, aspirin and atorvastatin calcium in raw materials and in formulations are described. Overlapped spectral data were quantitatively resolved by using chemometric methods, viz. inverse least squares (ILS), principal component regression (PCR) and partial least squares (PLS). Calibrations were constructed using the absorption data matrix corresponding to the concentration data matrix. The linearity range was found to be 1-5, 10-50 and 2-10 μg mL-1 for ramipril, aspirin and atorvastatin calcium, respectively. The absorbance matrix was obtained by measuring the zero-order absorbance in the wavelength range between 210 and 320 nm. A training set of concentration data corresponding to the ramipril, aspirin and atorvastatin calcium mixtures was designed statistically to maximize the information content of the spectra and to minimize the error of the multivariate calibrations. By applying the respective algorithms for PLS-1, PCR and ILS to the measured spectra of the calibration set, a suitable model was obtained. This model was selected on the basis of RMSECV and RMSEP values, and was then applied to the prediction set and to the capsule formulation. Mean recoveries for the commercial formulation set, together with the figures of merit (calibration sensitivity, selectivity, limit of detection, limit of quantification and analytical sensitivity), were estimated. The validity of the proposed approaches was successfully assessed for analyses of the drugs in the various prepared physical mixtures and formulations.
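
    Of the three calibration methods, inverse least squares is the simplest to sketch: concentrations are regressed directly on the absorbance matrix. The example below uses synthetic spectra obeying Beer-Lambert additivity (all data invented for illustration; PLS and PCR would add a factor-extraction step before the regression):

```python
import numpy as np

def ils_calibrate(A_train, C_train):
    """Inverse least squares: regress concentrations directly on
    absorbances, solving C ~= A @ B in the least-squares sense."""
    B, *_ = np.linalg.lstsq(A_train, C_train, rcond=None)
    return B

def ils_predict(A, B):
    """Predict concentrations of new mixtures from their spectra."""
    return A @ B
```

    ILS needs at least as many calibration mixtures as analytes and is sensitive to collinearity, which is why the factor-based PCR and PLS methods are usually preferred for heavily overlapped spectra.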

  3. Image restoration by the method of convex projections: part 1 theory.

    PubMed

    Youla, D C; Webb, H

    1982-01-01

    A projection operator onto a closed convex set in Hilbert space is one of the few examples of a nonlinear map that can be defined in simple abstract terms. Moreover, it minimizes distance and is nonexpansive, and therefore shares two of the more important properties of ordinary linear orthogonal projections onto closed linear manifolds. In this paper, we exploit the properties of these operators to develop several iterative algorithms for image restoration from partial data which permit any number of nonlinear constraints of a certain type to be subsumed automatically. Their common conceptual basis is as follows. Every known property of an original image f is envisaged as restricting it to lie in a well-defined closed convex set. Thus, m such properties place f in the intersection E(0) = E(1) ∩ E(2) ∩ ⋯ ∩ E(m) of the corresponding closed convex sets E(1), E(2), ..., E(m). Given only the projection operators P(i) onto the individual E(i)'s, i = 1, ..., m, we restore f by recursive means. Clearly, in this approach, the realization of the P(i)'s in a Hilbert space setting is one of the major synthesis problems. Section I describes the geometrical significance of the three main theorems in considerable detail, and most of the underlying ideas are illustrated with the aid of simple diagrams. Section II presents rules for the numerical implementation of 11 specific projection operators which are found to occur frequently in many signal-processing applications, and the Appendix contains proofs of all the major results.
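
    The recursive restoration scheme reduces, in its simplest form, to alternating the projection operators. The sketch below is a toy instance with two convex sets in R³, a box constraint and a hyperplane (standing in for known signal bounds and a linear measurement); it illustrates the method of projections onto convex sets generically and is not one of the paper's 11 operators:

```python
import numpy as np

def project_box(x, lo, hi):
    """Projection onto the convex set {x : lo <= x <= hi} (clipping)."""
    return np.clip(x, lo, hi)

def project_hyperplane(x, a, b):
    """Projection onto the convex set {x : a @ x = b}."""
    return x - (a @ x - b) / (a @ a) * a

def pocs(x0, lo, hi, a, b, iters=500):
    """Alternate the two projections; when the intersection is nonempty,
    the iterates converge to a point satisfying both constraints."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = project_hyperplane(project_box(x, lo, hi), a, b)
    return x
```

    Because each projection is nonexpansive, the composite iteration cannot diverge; this is the property the paper's convergence theorems build on.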

  4. Setting and validating the pass/fail score for the NBDHE.

    PubMed

    Tsai, Tsung-Hsun; Dixon, Barbara Leatherman

    2013-04-01

    This report describes the overall process used for setting the pass/fail score for the National Board Dental Hygiene Examination (NBDHE). The Objective Standard Setting (OSS) method was used for setting the pass/fail score for the NBDHE. The OSS method requires a panel of experts to determine the criterion items and the proportion of these items that minimally competent candidates would answer correctly, the percentage of mastery and the confidence level of the error band. A panel of 11 experts was selected by the Joint Commission on National Dental Examinations (Joint Commission). Panel members represented geographic distribution across the U.S. and had the following characteristics: full-time dental hygiene practitioners with experience in areas of preventive, periodontal, geriatric and special needs care, and full-time dental hygiene educators with experience in areas of the scientific basis for dental hygiene practice, provision of clinical dental hygiene services and community health/research principles. Utilizing the expert panel's judgments, the pass/fail score was set and the score scale was then established using the Rasch measurement model. Statistical and psychometric analysis shows that the actual failure rate and the OSS failure rate are reasonably consistent (2.4% vs. 2.8%), and that the lowest error of measurement (an index of precision) and the highest reliability (0.97) are achieved at the pass/fail score point. The pass/fail score is a valid guide for making decisions about candidates for dental hygiene licensure. This new standard was reviewed and approved by the Joint Commission and was implemented beginning in 2011.
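
    The arithmetic core of such a criterion-referenced standard can be sketched in a few lines. The fragment below is a simplified reading of the OSS idea with hypothetical numbers, not the Joint Commission's actual procedure: the raw cut score is the number of criterion items times the expected mastery proportion, and a binomial standard error supplies the confidence band around it.

```python
import math

def oss_cut_score(n_criterion, p_mastery, z=1.96):
    """Raw cut score = criterion items x mastery proportion, with a
    binomial standard-error band at confidence multiplier z."""
    cut = n_criterion * p_mastery
    sem = math.sqrt(n_criterion * p_mastery * (1.0 - p_mastery))
    return cut, (cut - z * sem, cut + z * sem)
```

    In practice the cut point is then mapped onto the Rasch ability scale rather than used as a raw score, which is the step the report describes.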

  5. Amobarbital treatment of multiple personality. Use of structured video tape interviews as a basis for intensive psychotherapy.

    PubMed

    Hall, R C; LeCann, A F; Schoolar, J C

    1978-09-01

    The case of a 30-year-old woman with five distinct personalities is presented. The patient was treated using a system of structured, video-taped sodium amobarbital interviews, in which the areas to be explored were developed in psychotherapy. Tapes were played for the patient after each session, and the taped material was used as the basis for psychotherapeutic investigation. The patient evidenced many of the features previously reported in cases of multiple personality, specifically: being the product of an unwanted pregnancy in a repressively rigid family; emotional distancing by one parent; strong sibling rivalry with an adopted sib; a family history of mental illness; a traumatic first sexual experience (rape); a marriage to a maladjusted individual in an attempt to escape the parental home; a high internalized standard of performance; and an inability to display anger or negative feelings toward the parents. In the course of treatment, the patient's personalities fused and she was able to accept each component as part of herself. No further fragmentation has occurred during the year following discharge. The therapy technique minimized dependency and the possibility of addiction to amobarbital interviews, permitted more active patient involvement in therapy, and set clear-cut goals and expectations for improvement before further amobarbital interviews could be conducted.

  6. Development of an item bank for the assessment of depression in persons with mental illnesses and physical diseases using Rasch analysis.

    PubMed

    Forkmann, Thomas; Boecker, Maren; Norra, Christine; Eberle, Nicole; Kircher, Tilo; Schauerte, Patrick; Mischke, Karl; Westhofen, Martin; Gauggel, Siegfried; Wirtz, Markus

    2009-05-01

    The calibration of item banks provides the basis for computerized adaptive testing that ensures high diagnostic precision and minimizes participants' test burden. The present study aimed at developing a new item bank that allows for assessing depression both in persons with mental illnesses and in persons with somatic diseases. The sample consisted of 161 participants treated for a depressive syndrome and 206 participants with somatic illnesses (103 cardiologic, 103 otorhinolaryngologic; overall mean age = 44.1 years, SD = 14.0; 44.7% women), allowing for validation of the item bank in both groups. Participants answered a pool of 182 depression items on a 5-point Likert scale. Evaluation of Rasch model fit (infit < 1.3), differential item functioning, dimensionality, local independence, item spread, item and person separation (>2.0), and reliability (>.80) resulted in a bank of 79 items with good psychometric properties. The bank provides items with a wide range of content coverage and may serve as a sound basis for computerized adaptive testing applications. It might also be useful for researchers who wish to develop new fixed-length scales for the assessment of depression in specific rehabilitation settings.

  7. [Bacteriological quality of air in a ward for sterile pharmaceutical preparations].

    PubMed

    Caorsi P, Beatriz; Sakurada Z, Andrea; Ulloa F, M Teresa; Pezzani V, Marcela; Latorre O, Paz

    2011-02-01

    An extremely clean area is required for the preparation of sterile pharmaceutical compounds, in compliance with international standards, to minimize the probability of microbial contamination. The aims were to evaluate the bacteriological quality of the air in the Sterile Pharmaceutical Preparation Unit of the University of Chile's Clinical Hospital and to set up alert and action levels for bacterial growth. We studied eight representative sites of our Unit on a daily basis from January to February 2005 and twice a week from June 2005 to February 2006. We collected 839 samples of air by impaction onto Petri dishes. 474 (56.5%) samples were positive; 17 (3.5%) of these showed bacterial growth above acceptable limits (2% of total samples). The samples from sites 1 and 2 (big and small biosafety cabinets) were negative. The countertop and transfer area occasionally exceeded the bacterial growth limits. The most frequently isolated bacteria were coagulase-negative staphylococci, Micrococcus spp and Corynebacterium spp, from skin microbiota, and Bacillus spp, an environmental genus. From a microbiological perspective, the air quality in our sterile preparation unit complied with international standards. Setting institutional alert and action levels and appropriately identifying bacteria in sensitive areas permits quantification of the microbial load and application of preventive measures.

  8. Nanosecond Absorption Spectroscopy of Hemoglobin: Elementary Processes in Kinetic Cooperativity

    NASA Astrophysics Data System (ADS)

    Hofrichter, James; Sommer, Joseph H.; Henry, Eric R.; Eaton, William A.

    1983-04-01

    A nanosecond absorption spectrometer has been used to measure the optical spectra of hemoglobin between 3 ns and 100 ms after photolysis of the CO complex. The data from a single experiment comprise a surface, defined by the time-ordered set of 50-100 spectra. Singular value decomposition is used to represent the observed spectra in terms of a minimal set of basis spectra and the time course of their amplitudes. Both CO rebinding and conformational changes are found to be multiphasic. Prior to the quaternary structural change, two relaxations are observed that are assigned to geminate recombination followed by a tertiary structural change. These relaxations are interpreted in terms of a kinetic model that points out their potential role in kinetic cooperativity. The rapid escape of CO from the heme pocket compared with the rate of rebinding observed for both R and T quaternary states shows that the quaternary structure controls the overall dissociation rate by changing the rate at which the Fe--CO bond is broken. A comparable description of the control of the overall association rates must await a more complete experimental description of the kinetics of the quaternary T state.

  9. The illusion of client-centred practice.

    PubMed

    Gupta, Jyothi; Taff, Steven D

    2015-07-01

    A critical analysis of occupational therapy practice in the corporate health care culture in a free market economy was undertaken to demonstrate incongruence with the profession's philosophical basis and espoused commitment to client-centred practice. The current practice of occupational therapy in the reimbursement-driven practice arena in the United States is incongruent with the profession's espoused philosophy and values of client-centred practice. Occupational therapy differentiates itself from medicine's expert model aimed at curing disease and remediating impairment, by its claim to client-centred practice focused on restoring health through occupational enablement. Practice focused on impairment and function is at odds with the profession's core tenet, occupation, and minimizes the lasting impact of interventions on health and well-being. The profession cannot unleash the therapeutic power of human occupation in settings where body systems and body functions are not occupation-ready at the requisite levels for occupational participation. Client-centred practice is best embodied by occupation-focused interventions in the natural environment of everyday living. Providing services that are impairment-focused in unfamiliar settings is not a good fit for client-centred practice, which is the unique, authentic, and sustainable orientation for the profession.

  10. Understanding the fast pyrolysis of lignin.

    PubMed

    Patwardhan, Pushkaraj R; Brown, Robert C; Shanks, Brent H

    2011-11-18

    In the present study, pyrolysis of corn stover lignin was investigated by using a micro-pyrolyzer coupled with a GC-MS/FID (FID=flame ionization detector). The system has pyrolysis-vapor residence times of 15-20 ms, thus providing a regime of minimal secondary reactions. The primary pyrolysis product distribution obtained from lignin is reported. Over 84 % mass balance and almost complete closure on carbon balance is achieved. In another set of experiments, the pyrolysis vapors emerging from the micro-pyrolyzer are condensed to obtain lignin-derived bio-oil. The chemical composition of the bio-oil is analyzed by using GC-MS and gel permeation chromatography techniques. The comparison between results of two sets of experiments indicates that monomeric compounds are the primary pyrolysis products of lignin, which recombine after primary pyrolysis to produce oligomeric compounds. Further, the effect of minerals (NaCl, KCl, MgCl(2), and CaCl(2)) and temperature on the primary pyrolysis product distribution is investigated. The study provides insights into the fundamental mechanisms of lignin pyrolysis and a basis for developing more descriptive models of biomass pyrolysis. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  11. A combined MOIP-MCDA approach to building and screening atmospheric pollution control strategies in urban regions.

    PubMed

    Mavrotas, George; Ziomas, Ioannis C; Diakouaki, Danae

    2006-07-01

    This article presents a methodological approach for the formulation of control strategies capable of reducing atmospheric pollution to the standards set by European legislation. The approach was implemented in the greater area of Thessaloniki as part of a project aiming at compliance with air quality standards in five major cities in Greece. The methodological approach comprises two stages. In the first stage, the availability of several measures, each contributing to a certain extent to reducing atmospheric pollution, indicates a combinatorial problem and favors the use of Integer Programming. More specifically, Multiple Objective Integer Programming is used to generate alternative efficient combinations of the available policy measures on the basis of two conflicting objectives: public expenditure minimization and social acceptance maximization. In the second stage, these combinations of control measures (i.e., the control strategies) are comparatively evaluated with respect to a wider set of criteria, using tools from Multiple Criteria Decision Analysis, namely the well-known PROMETHEE method. The whole procedure is based on the active involvement of local and central authorities in order to incorporate their concerns and preferences, as well as to secure the adoption and implementation of the resulting solution.
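
The first (MOIP) stage described above can be sketched as a search for Pareto-efficient combinations of measures under the two stated objectives: minimize public expenditure, maximize social acceptance. The measures, costs, and acceptance scores below are invented for illustration; the study used a proper Multiple Objective Integer Programming formulation rather than brute-force enumeration.

```python
from itertools import combinations

# (name, cost in million EUR, social-acceptance score); hypothetical values.
measures = [
    ("bus_fleet_renewal", 12.0, 8),
    ("industrial_filters", 20.0, 5),
    ("traffic_restrictions", 2.0, -3),
    ("district_heating", 15.0, 6),
]

def objectives(combo):
    """Total cost (to be minimized) and total acceptance (to be maximized)."""
    return sum(m[1] for m in combo), sum(m[2] for m in combo)

# Candidate strategies: every combination of at least two measures.
candidates = [c for r in range(2, len(measures) + 1)
              for c in combinations(measures, r)]

def is_dominated(combo, pool):
    """True if some other combination is at least as good on both
    objectives and strictly better on one."""
    cost, acc = objectives(combo)
    for other in pool:
        if other is combo:
            continue
        oc, oa = objectives(other)
        if oc <= cost and oa >= acc and (oc < cost or oa > acc):
            return True
    return False

efficient = [c for c in candidates if not is_dominated(c, candidates)]
print(len(efficient))  # 3 efficient strategies on this toy instance
```

In the article, only the efficient strategies survive to the second (PROMETHEE) stage, where they are ranked against the wider criteria set.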

  12. Shallow infiltration processes at Yucca Mountain, Nevada : neutron logging data 1984-93

    USGS Publications Warehouse

    Flint, Lorraine E.; Flint, Alan L.

    1995-01-01

    To determine site suitability of Yucca Mountain, Nevada, as a potential high-level radioactive waste repository, a study was devised to characterize net infiltration. This study involves a detailed data set produced from 99 neutron boreholes that consisted of volumetric water-content readings with depth from 1984 through 1993 at Yucca Mountain. Boreholes were drilled with minimal disturbance to the surrounding soil or rock in order to best represent field conditions. Boreholes were located in topographic positions representing infiltration zones identified as ridgetops, sideslopes, terraces, and active channels. Carefully field-calibrated neutron moisture logs, collected monthly and representing most of the areal locations at Yucca Mountain, showed that the depth of penetration of seasonal moisture, which is important for escaping loss to evapotranspiration, was influenced by several factors. Penetration increased (1) where soil cover is thin, especially where thin soil is underlain by fractured bedrock; (2) on ridgetops; and (3) during the winter, when evapotranspiration is low and runoff is less frequent. This data set helps to provide a seasonal and areal distribution of changes in volumetric water content with which to assess hydrologic processes contributing to net infiltration.

  13. LOVD: easy creation of a locus-specific sequence variation database using an "LSDB-in-a-box" approach.

    PubMed

    Fokkema, Ivo F A C; den Dunnen, Johan T; Taschner, Peter E M

    2005-08-01

    The completion of the human genome project has initiated, as well as provided the basis for, the collection and study of all sequence variation between individuals. Direct access to up-to-date information on sequence variation is currently provided most efficiently through web-based, gene-centered, locus-specific databases (LSDBs). We have developed the Leiden Open (source) Variation Database (LOVD) software approaching the "LSDB-in-a-Box" idea for the easy creation and maintenance of a fully web-based gene sequence variation database. LOVD is platform-independent and uses PHP and MySQL open source software only. The basic gene-centered and modular design of the database follows the recommendations of the Human Genome Variation Society (HGVS) and focuses on the collection and display of DNA sequence variations. With minimal effort, the LOVD platform is extendable with clinical data. The open set-up should both facilitate and promote functional extension with scripts written by the community. The LOVD software is freely available from the Leiden Muscular Dystrophy pages (www.DMD.nl/LOVD/). To promote the use of LOVD, we currently offer curators the possibility to set up an LSDB on our Leiden server. (c) 2005 Wiley-Liss, Inc.

  14. A Combined MOIP-MCDA Approach to Building and Screening Atmospheric Pollution Control Strategies in Urban Regions

    NASA Astrophysics Data System (ADS)

    Mavrotas, George; Ziomas, Ioannis C.; Diakouaki, Danae

    2006-07-01

    This article presents a methodological approach for the formulation of control strategies capable of reducing atmospheric pollution to the standards set by European legislation. The approach was implemented in the greater area of Thessaloniki as part of a project aiming at compliance with air quality standards in five major cities in Greece. The methodological approach comprises two stages. In the first stage, the availability of several measures, each contributing to a certain extent to reducing atmospheric pollution, indicates a combinatorial problem and favors the use of Integer Programming. More specifically, Multiple Objective Integer Programming is used to generate alternative efficient combinations of the available policy measures on the basis of two conflicting objectives: public expenditure minimization and social acceptance maximization. In the second stage, these combinations of control measures (i.e., the control strategies) are comparatively evaluated with respect to a wider set of criteria, using tools from Multiple Criteria Decision Analysis, namely the well-known PROMETHEE method. The whole procedure is based on the active involvement of local and central authorities in order to incorporate their concerns and preferences, as well as to secure the adoption and implementation of the resulting solution.

  15. An efficient algorithm for automatic phase correction of NMR spectra based on entropy minimization

    NASA Astrophysics Data System (ADS)

    Chen, Li; Weng, Zhiqiang; Goh, LaiYoong; Garland, Marc

    2002-09-01

    A new algorithm for automatic phase correction of NMR spectra based on entropy minimization is proposed. The optimal zero-order and first-order phase corrections for an NMR spectrum are determined by minimizing entropy. The objective function is constructed using a Shannon-type information entropy measure, with the entropy computed from the normalized derivative of the NMR spectral data. The algorithm has been successfully applied to experimental 1H NMR spectra. The results of automatic phase correction are found to be comparable to, or perhaps better than, manual phase correction. The advantages of this automatic phase correction algorithm include its simple mathematical basis and the straightforward, reproducible, and efficient optimization procedure. The algorithm is implemented in the Matlab program ACME (Automated phase Correction based on Minimization of Entropy).

  16. Near Hartree-Fock quality GTO basis sets for the first- and third-row atoms

    NASA Technical Reports Server (NTRS)

    Partridge, Harry

    1989-01-01

    Energy-optimized Gaussian-type-orbital (GTO) basis sets of accuracy approaching that of numerical Hartree-Fock computations are compiled for the elements of the first and third rows of the periodic table. The methods employed in calculating the sets are explained; the applicability of the sets to electronic-structure calculations is discussed; and the results are presented in tables and briefly characterized.

  17. Computational tests of quantum chemical models for excited and ionized states of molecules with phosphorus and sulfur atoms.

    PubMed

    Hahn, David K; RaghuVeer, Krishans; Ortiz, J V

    2014-05-15

    Time-dependent density functional theory (TD-DFT) and electron propagator theory (EPT) are used to calculate the electronic transition energies and ionization energies, respectively, of species containing phosphorus or sulfur. The accuracy of TD-DFT and EPT, in conjunction with various basis sets, is assessed with data from gas-phase spectroscopy. TD-DFT is tested using 11 prominent exchange-correlation functionals on a set of 37 vertical and 19 adiabatic transitions. For vertical transitions, TD-CAM-B3LYP calculations performed with the MG3S basis set are lowest in overall error, having a mean absolute deviation from experiment of 0.22 eV, or 0.23 eV over valence transitions and 0.21 eV over Rydberg transitions. Using a larger basis set, aug-pc3, improves accuracy over the valence transitions via hybrid functionals, but improved accuracy over the Rydberg transitions is only obtained via the BMK functional. For adiabatic transitions, all hybrid functionals paired with the MG3S basis set perform well, and B98 is best, with a mean absolute deviation from experiment of 0.09 eV. The testing of EPT used the Outer Valence Green's Function (OVGF) approximation and the Partial Third Order (P3) approximation on 37 vertical first ionization energies. It is found that OVGF outperforms P3 when basis sets of at least triple-ζ quality in the polarization functions are used. The largest basis set used in this study, aug-pc3, yielded the best mean absolute errors for both methods: 0.08 eV for OVGF and 0.18 eV for P3. The OVGF/6-31+G(2df,p) level of theory is particularly cost-effective, yielding a mean absolute error of 0.11 eV.

  18. No need for external orthogonality in subsystem density-functional theory.

    PubMed

    Unsleber, Jan P; Neugebauer, Johannes; Jacob, Christoph R

    2016-08-03

    Recent reports on the necessity of using externally orthogonal orbitals in subsystem density-functional theory (SDFT) [Annu. Rep. Comput. Chem., 8, 2012, 53; J. Phys. Chem. A, 118, 2014, 9182] are re-investigated. We show that in the basis-set limit, supermolecular Kohn-Sham-DFT (KS-DFT) densities can exactly be represented as a sum of subsystem densities, even if the subsystem orbitals are not externally orthogonal. This is illustrated using both an analytical example and in basis-set free numerical calculations for an atomic test case. We further show that even with finite basis sets, SDFT calculations using accurate reconstructed potentials can closely approach the supermolecular KS-DFT density, and that the deviations between SDFT and KS-DFT decrease as the basis-set limit is approached. Our results demonstrate that formally, there is no need to enforce external orthogonality in SDFT, even though this might be a useful strategy when developing projection-based DFT embedding schemes.

  19. Open-ended recursive calculation of single residues of response functions for perturbation-dependent basis sets.

    PubMed

    Friese, Daniel H; Ringholm, Magnus; Gao, Bin; Ruud, Kenneth

    2015-10-13

    We present theory, implementation, and applications of a recursive scheme for the calculation of single residues of response functions that can treat perturbations that affect the basis set. This scheme enables the calculation of nonlinear light absorption properties to arbitrary order for other perturbations than an electric field. We apply this scheme for the first treatment of two-photon circular dichroism (TPCD) using London orbitals at the Hartree-Fock level of theory. In general, TPCD calculations suffer from the problem of origin dependence, which has so far been solved by using the velocity gauge for the electric dipole operator. This work now enables comparison of results from London orbital and velocity gauge based TPCD calculations. We find that the results from the two approaches both exhibit strong basis set dependence but that they are very similar with respect to their basis set convergence.

  20. Core-core and core-valence correlation

    NASA Technical Reports Server (NTRS)

    Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Taylor, Peter R.

    1988-01-01

    The effect of (1s) core correlation on properties and energy separations was analyzed using full configuration-interaction (FCI) calculations. The Be 1S-1P, C 3P-5S, and CH+ 1Sigma+-1Pi separations were studied, together with the CH+ spectroscopic constants, dipole moment, and 1Sigma+-1Pi transition dipole moment. The results of the FCI calculations are compared to those obtained using approximate methods. In addition, the generation of atomic natural orbital (ANO) basis sets, as a method for contracting a primitive basis set for both valence and core correlation, is discussed. When both core-core and core-valence correlation are included in the calculation, no suitable truncated CI approach consistently reproduces the FCI, and contraction of the basis set is very difficult. If the (nearly constant) core-core correlation is eliminated, and only the core-valence correlation is included, CASSCF/MRCI approaches reproduce the FCI results and basis set contraction is significantly easier.

  1. Numerical judgments by chimpanzees (Pan troglodytes) in a token economy.

    PubMed

    Beran, Michael J; Evans, Theodore A; Hoyle, Daniel

    2011-04-01

    We presented four chimpanzees with a series of tasks that involved comparing two token sets or comparing a token set to a quantity of food. Selected tokens could be exchanged for food items on a one-to-one basis. Chimpanzees successfully selected the larger numerical set for comparisons of 1 to 5 items when both sets were visible and when sets were presented through one-by-one addition of tokens into two opaque containers. Two of four chimpanzees used the number of tokens and food items to guide responding in all conditions, rather than relying on token color, size, total amount, or duration of set presentation. These results demonstrate that judgments of simultaneous and sequential sets of stimuli are made by some chimpanzees on the basis of the numerousness of sets rather than other non-numerical dimensions. The tokens were treated as equivalent to food items on the basis of their numerousness, and the chimpanzees maximized reward by choosing the larger number of items in all situations.

  2. Influence maximization in complex networks through optimal percolation

    NASA Astrophysics Data System (ADS)

    Morone, Flaviano; Makse, Hernan; CUNY Collaboration

    The whole frame of interconnections in complex networks hinges on a specific set of structural nodes, much smaller than the total size, which, if activated, would cause the spread of information to the whole network, or, if immunized, would prevent the diffusion of a large scale epidemic. Localizing this optimal, that is, minimal, set of structural nodes, called influencers, is one of the most important problems in network science. Here we map the problem onto optimal percolation in random networks to identify the minimal set of influencers, which arises by minimizing the energy of a many-body system, where the form of the interactions is fixed by the non-backtracking matrix of the network. Big data analyses reveal that the set of optimal influencers is much smaller than the one predicted by previous heuristic centralities. Remarkably, a large number of previously neglected weakly connected nodes emerges among the optimal influencers. Reference: F. Morone, H. A. Makse, Nature 524, 65-68 (2015)
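
A popular greedy proxy for the optimal-percolation problem summarized above is the Collective Influence score, CI_l(i) = (k_i - 1) * sum of (k_j - 1) over the nodes j on the frontier of the ball of radius l around i; nodes with the largest CI are removed first as candidate influencers. The sketch below is a simplified illustration on an invented toy graph, not the authors' optimized implementation.

```python
from collections import deque

def collective_influence(adj, node, radius=2):
    """CI at a given radius on an adjacency dict {node: set(neighbors)}."""
    dist = {node: 0}
    frontier_sum = 0
    queue = deque([node])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                if dist[v] < radius:
                    queue.append(v)       # keep expanding the ball
                elif dist[v] == radius:
                    frontier_sum += len(adj[v]) - 1  # frontier node
    return (len(adj[node]) - 1) * frontier_sum

# Toy graph: two hubs A and B, each with three leaves, joined by an edge.
adj = {
    "A": {"b1", "b2", "b3", "B"}, "B": {"A", "c1", "c2", "c3"},
    "b1": {"A"}, "b2": {"A"}, "b3": {"A"},
    "c1": {"B"}, "c2": {"B"}, "c3": {"B"},
}
scores = {n: collective_influence(adj, n, radius=1) for n in adj}
print(scores["A"], scores["b1"])  # 9 0: hubs dominate, leaves score zero
```

The score's characteristic behavior, weakly connected nodes can still rank highly when they bridge to well-connected regions, is what surfaces the "previously neglected" influencers mentioned in the abstract.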

  3. Correlation consistent basis sets for actinides. I. The Th and U atoms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peterson, Kirk A., E-mail: kipeters@wsu.edu

    New correlation consistent basis sets based on both pseudopotential (PP) and all-electron Douglas-Kroll-Hess (DKH) Hamiltonians have been developed from double- to quadruple-zeta quality for the actinide atoms thorium and uranium. Sets for valence electron correlation (5f6s6p6d), cc-pVnZ-PP and cc-pVnZ-DK3, as well as outer-core correlation (valence + 5s5p5d), cc-pwCVnZ-PP and cc-pwCVnZ-DK3, are reported (n = D, T, Q). The -PP sets are constructed in conjunction with small-core, 60-electron PPs, while the -DK3 sets utilized the 3rd-order Douglas-Kroll-Hess scalar relativistic Hamiltonian. Both series of basis sets show systematic convergence towards the complete basis set limit, both at the Hartree-Fock and correlated levels of theory, making them amenable to standard basis set extrapolation techniques. To assess the utility of the new basis sets, extensive coupled cluster composite thermochemistry calculations of ThF{sub n} (n = 2-4), ThO{sub 2}, and UF{sub n} (n = 4-6) have been carried out. After accurately accounting for valence and outer-core correlation, spin-orbit coupling, and even Lamb shift effects, the final 298 K atomization enthalpies of ThF{sub 4}, ThF{sub 3}, ThF{sub 2}, and ThO{sub 2} are all within their experimental uncertainties. Bond dissociation energies of ThF{sub 4} and ThF{sub 3}, as well as UF{sub 6} and UF{sub 5}, were similarly accurate. The derived enthalpies of formation for these species also showed a very satisfactory agreement with experiment, demonstrating that the new basis sets allow for the use of accurate composite schemes just as in molecular systems composed only of lighter atoms. The differences between the PP and DK3 approaches were found to increase with the change in formal oxidation state on the actinide atom, approaching 5-6 kcal/mol for the atomization enthalpies of ThF{sub 4} and ThO{sub 2}. The DKH3 atomization energy of ThO{sub 2} was calculated to be smaller than the DKH2 value by ∼1 kcal/mol.
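
The "standard basis set extrapolation techniques" mentioned above usually assume that the correlation energy converges as E(n) = E_CBS + A / n**3 with the cardinal number n, so two calculations fix both unknowns. The triple- and quadruple-zeta energies below are synthetic, generated from an assumed E_CBS, purely to demonstrate the arithmetic.

```python
def cbs_two_point(e_lo, n_lo, e_hi, n_hi):
    """Two-point 1/n**3 extrapolation to the complete-basis-set limit."""
    return (n_hi**3 * e_hi - n_lo**3 * e_lo) / (n_hi**3 - n_lo**3)

# Synthetic TZ (n = 3) and QZ (n = 4) correlation energies in hartree,
# constructed from an assumed E_CBS = -1.2 with A = 0.5; illustrative only.
e_tz = -1.2 + 0.5 / 3**3
e_qz = -1.2 + 0.5 / 4**3

print(round(cbs_two_point(e_tz, 3, e_qz, 4), 6))  # -1.2, the assumed limit
```

Because the model has exactly two parameters, the two-point formula recovers the assumed limit exactly on data that follow the 1/n**3 form.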

  4. Segmented all-electron Gaussian basis sets of double and triple zeta qualities for Fr, Ra, and Ac

    NASA Astrophysics Data System (ADS)

    Campos, C. T.; de Oliveira, A. Z.; Ferreira, I. B.; Jorge, F. E.; Martins, L. S. C.

    2017-05-01

    Segmented all-electron basis sets of valence double and triple zeta qualities plus polarization functions for the elements Fr, Ra, and Ac are generated using non-relativistic and Douglas-Kroll-Hess (DKH) Hamiltonians. The sets are augmented with diffuse functions in order to describe appropriately the electrons far from the nuclei. At the DKH-B3LYP level, first atomic ionization energies, as well as bond lengths, dissociation energies, and polarizabilities of a sample of diatomics, are calculated. Comparison with theoretical and experimental data available in the literature is carried out. It is verified that, despite their small sizes, the basis sets are nevertheless reliable.

  5. Basis set and electron correlation effects on the polarizability and second hyperpolarizability of model open-shell π-conjugated systems

    NASA Astrophysics Data System (ADS)

    Champagne, Benoît; Botek, Edith; Nakano, Masayoshi; Nitta, Tomoshige; Yamaguchi, Kizashi

    2005-03-01

    The basis set and electron correlation effects on the static polarizability (α) and second hyperpolarizability (γ) are investigated ab initio for two model open-shell π-conjugated systems, the C5H7 radical and the C6H8 radical cation in their doublet state. Basis set investigations evidence that the linear and nonlinear responses of the radical cation necessitate the use of a less extended basis set than its neutral analog. Indeed, double-zeta-type basis sets supplemented by a set of d polarization functions but no diffuse functions already provide accurate (hyper)polarizabilities for C6H8 whereas diffuse functions are compulsory for C5H7, in particular, p diffuse functions. In addition to the 6-31G*+pd basis set, basis sets resulting from removing not necessary diffuse functions from the augmented correlation consistent polarized valence double zeta basis set have been shown to provide (hyper)polarizability values of similar quality as more extended basis sets such as augmented correlation consistent polarized valence triple zeta and doubly augmented correlation consistent polarized valence double zeta. Using the selected atomic basis sets, the (hyper)polarizabilities of these two model compounds are calculated at different levels of approximation in order to assess the impact of including electron correlation. As a function of the method of calculation antiparallel and parallel variations have been demonstrated for α and γ of the two model compounds, respectively. 
For the polarizability, the unrestricted Hartree-Fock and unrestricted second-order Møller-Plesset methods bracket the reference value obtained at the unrestricted coupled cluster singles and doubles level with a perturbative inclusion of the triples, whereas the projected unrestricted second-order Møller-Plesset results are in much closer agreement with the coupled cluster values than the projected unrestricted Hartree-Fock results. Moreover, the differences between the restricted open-shell Hartree-Fock and restricted open-shell second-order Møller-Plesset methods are small. Concerning the second hyperpolarizability, the unrestricted Hartree-Fock and unrestricted second-order Møller-Plesset values remain of similar quality, while using spin-projected schemes fails for the charged system but performs nicely for the neutral one. The restricted open-shell schemes, and especially the restricted open-shell second-order Møller-Plesset method, provide for both compounds γ values close to the results obtained at the unrestricted coupled cluster level including singles and doubles with a perturbative inclusion of the triples. Thus, to obtain well-converged α and γ values at low-order electron correlation levels, the removal of spin contamination is a necessary but not a sufficient condition. Density-functional theory calculations of α and γ have also been carried out using several exchange-correlation functionals. Those employing hybrid exchange-correlation functionals have been shown to reproduce fairly well the reference coupled cluster polarizability and second hyperpolarizability values. In addition, inclusion of Hartree-Fock exchange is of major importance for determining accurate polarizabilities, whereas for the second hyperpolarizability the gradient corrections are large.

  6. Rational Density Functional Selection Using Game Theory.

    PubMed

    McAnanama-Brereton, Suzanne; Waller, Mark P

    2018-01-22

    Theoretical chemistry has a paradox of choice due to the availability of a myriad of density functionals and basis sets. Traditionally, a particular density functional is chosen on the basis of the level of user expertise (i.e., subjective experiences). Herein we circumvent the user-centric selection procedure by describing a novel approach for objectively selecting a particular functional for a given application. We achieve this by employing game theory to identify optimal functional/basis set combinations. A three-player (accuracy, complexity, and similarity) game is devised, through which Nash equilibrium solutions can be obtained. This approach has the advantage that results can be systematically improved by enlarging the underlying knowledge base, and the deterministic selection procedure mathematically justifies the density functional and basis set selections.
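
The selection idea described above, with Nash equilibria picking out functional/basis set combinations, can be illustrated in miniature. The paper devises a three-player game (accuracy, complexity, similarity); the sketch below reduces it to a two-player toy, one player choosing the functional to maximize an accuracy score and the other choosing the basis set to maximize an economy score. The candidate methods and all payoff numbers are invented for illustration.

```python
functionals = ["B3LYP", "PBE0"]
bases = ["def2-SVP", "def2-TZVP"]

# Hypothetical utilities for each (functional, basis) combination.
accuracy = {("B3LYP", "def2-SVP"): 2, ("B3LYP", "def2-TZVP"): 5,
            ("PBE0", "def2-SVP"): 3, ("PBE0", "def2-TZVP"): 4}
economy  = {("B3LYP", "def2-SVP"): 4, ("B3LYP", "def2-TZVP"): 1,
            ("PBE0", "def2-SVP"): 3, ("PBE0", "def2-TZVP"): 2}

def pure_nash(u1, u2, s1, s2):
    """All (f, b) where neither player gains by deviating unilaterally."""
    return [(f, b) for f in s1 for b in s2
            if all(u1[(f, b)] >= u1[(g, b)] for g in s1)
            and all(u2[(f, b)] >= u2[(f, c)] for c in s2)]

print(pure_nash(accuracy, economy, functionals, bases))
# [('PBE0', 'def2-SVP')]: the equilibrium combination on these payoffs
```

The deterministic character claimed in the abstract is visible even here: given the payoff tables, the equilibrium set follows mechanically, with no user judgment involved.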

  7. Assessment of multireference approaches to explicitly correlated full configuration interaction quantum Monte Carlo

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kersten, J. A. F., E-mail: jennifer.kersten@cantab.net; Alavi, Ali, E-mail: a.alavi@fkf.mpg.de; Max Planck Institute for Solid State Research, Heisenbergstraße 1, 70569 Stuttgart

    2016-08-07

    The Full Configuration Interaction Quantum Monte Carlo (FCIQMC) method has proved able to provide near-exact solutions to the electronic Schrödinger equation within a finite orbital basis set, without relying on an expansion about a reference state. However, a drawback to the approach is that, being based on an expansion of Slater determinants, the FCIQMC method suffers from a basis set incompleteness error that decays very slowly with the size of the employed single particle basis. The FCIQMC results obtained in a small basis set can be improved significantly with explicitly correlated techniques. Here, we present a study that assesses and compares two contrasting "universal" explicitly correlated approaches that fit into the FCIQMC framework: the [2]{sub R12} method of Kong and Valeev [J. Chem. Phys. 135, 214105 (2011)] and the explicitly correlated canonical transcorrelation approach of Yanai and Shiozaki [J. Chem. Phys. 136, 084107 (2012)]. The former is an a posteriori internally contracted perturbative approach, while the latter transforms the Hamiltonian prior to the FCIQMC simulation. These comparisons are made across the 55 molecules of the G1 standard set. We found that both methods consistently reduce the basis set incompleteness error, yielding accurate atomization energies in small basis sets and reducing the error from 28 mE{sub h} to 3-4 mE{sub h}. While many of the conclusions hold in general for any combination of multireference approaches with these methodologies, we also consider FCIQMC-specific advantages of each approach.

  8. Detailed Wave Function Analysis for Multireference Methods: Implementation in the Molcas Program Package and Applications to Tetracene.

    PubMed

    Plasser, Felix; Mewes, Stefanie A; Dreuw, Andreas; González, Leticia

    2017-11-14

    High-level multireference computations on electronically excited and charged states of tetracene are performed, and the results are analyzed using an extensive wave function analysis toolbox that has been newly implemented in the Molcas program package. Aside from verifying the strong effect of dynamic correlation, this study reveals an unexpected critical influence of the atomic orbital basis set. It is shown that different polarized double-ζ basis sets produce significantly different results for energies, densities, and overall wave functions, with the best performance obtained for the atomic natural orbital (ANO) basis set by Pierloot et al. Strikingly, the ANO basis set not only reproduces the energies but also performs exceptionally well in terms of describing the diffuseness of the different states and of their attachment/detachment densities. This study, thus, not only underlines the fact that diffuse basis functions are needed for an accurate description of the electronic wave functions but also shows that, at least for the present example, it is enough to include them implicitly in the contraction scheme.

  9. Application of multi-dimensional discrimination diagrams and probability calculations to Paleoproterozoic acid rocks from Brazilian cratons and provinces to infer tectonic settings

    NASA Astrophysics Data System (ADS)

    Verma, Sanjeet K.; Oliveira, Elson P.

    2013-08-01

    In the present work, we applied two sets of new multi-dimensional geochemical diagrams (Verma et al., 2013), obtained from linear discriminant analysis (LDA) of natural-logarithm-transformed ratios of major elements and immobile major and trace elements in acid magmas, to decipher plate tectonic settings and corresponding probability estimates for Paleoproterozoic rocks from the Amazonian craton, São Francisco craton, São Luís craton, and Borborema province of Brazil. The robustness of LDA minimizes the effects of petrogenetic processes and maximizes the separation among the different tectonic groups. The probability-based boundaries further provide an objective statistical alternative to the commonly used subjective method of determining the boundaries by eye. The use of major element data readjusted to 100% on an anhydrous basis with the SINCLAS computer program also helps to minimize the effects of post-emplacement compositional changes and analytical errors on these tectonic discrimination diagrams. Fifteen case studies of acid suites highlight the application of these diagrams and probability calculations. The first case study, on the Jamon and Musa granites, Carajás area (Central Amazonian Province, Amazonian craton), shows a collision setting (previously thought anorogenic). A collision setting was also clearly inferred for the Bom Jardim granite, Xingú area (Central Amazonian Province, Amazonian craton). The third case study, on the Older São Jorge, Younger São Jorge, and Maloquinha granites, Tapajós area (Ventuari-Tapajós Province, Amazonian craton), indicated a within-plate setting (previously transitional between volcanic arc and within-plate). We also recognized a within-plate setting for the next three case studies, on the Aripuanã and Teles Pires granites (SW Amazonian craton) and the Pitinga area granites (Mapuera Suite, NW Amazonian craton), all previously suggested to have been emplaced in post-collision to within-plate settings.
The seventh case study, on the Cassiterita-Tabuões, Ritápolis, and São Tiago-Rezende Costa granites (south of São Francisco craton, Minas Gerais), showed a collision setting, which agrees reasonably with the syn-collision tectonic setting indicated in the literature. A within-plate setting is suggested for the Serrinha magmatic suite, Mineiro belt (south of São Francisco craton, Minas Gerais), contrasting markedly with the arc setting suggested in the literature. The ninth case study, on the Rio Itapicuru granites and Rio Capim dacites (north of São Francisco craton, Serrinha block, Bahia), showed a continental arc setting. The tenth case study indicated a within-plate setting for the Rio dos Remédios volcanic rocks (São Francisco craton, Bahia), which is compatible with these rocks being the initial, rift-related igneous activity associated with the Chapada Diamantina cratonic cover. The eleventh, twelfth, and thirteenth case studies, on the Bom Jesus-Areal granites, Rio Diamante-Rosilha dacite-rhyolite, and Timbozal-Cantão granites (São Luís craton), showed continental arc, within-plate, and collision settings, respectively. Finally, the last two case studies, the fourteenth and fifteenth, showed a collision setting for the Caicó Complex and a continental arc setting for Algodões (Borborema province).
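
The discriminant-function machinery behind such diagrams can be sketched in a few lines. Everything below is illustrative only: the oxide values, coefficients, and intercept are hypothetical placeholders, not the published Verma et al. (2013) LDA coefficients.

```python
import math

def ln_ratio_transform(oxides, denom="SiO2"):
    """Natural-log ratios of major-element oxides relative to a chosen
    denominator oxide (the transform applied before LDA in such diagrams)."""
    d = oxides[denom]
    return {k: math.log(v / d) for k, v in oxides.items() if k != denom}

def discriminant_score(ln_ratios, coeffs, intercept=0.0):
    """Linear discriminant function: DF = intercept + sum(coef * ln_ratio)."""
    return intercept + sum(coeffs[k] * ln_ratios[k] for k in coeffs)

# Hypothetical acid-rock analysis (wt%) and hypothetical coefficients.
sample = {"SiO2": 72.0, "TiO2": 0.3, "Al2O3": 13.5, "MgO": 0.4}
coeffs = {"TiO2": -1.2, "Al2O3": 4.1, "MgO": 0.5}
score = discriminant_score(ln_ratio_transform(sample), coeffs, intercept=-3.0)
```

In the real diagrams, a sample's scores on two such functions place it in one of the tectonic fields, and distances to the probability-based boundaries give the group membership estimates.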

  10. USER'S GUIDE: Strategic Waste Minimization Initiative (SWAMI) Version 2.0 - A Software Tool to Aid in Process Analysis for Pollution Prevention

    EPA Science Inventory

    The Strategic WAste Minimization Initiative (SWAMI) Software, Version 2.0 is a tool for using process analysis for identifying waste minimization opportunities within an industrial setting. The software requires user-supplied information for process definition, as well as materia...

  11. Determination of the Core of a Minimal Bacterial Gene Set†

    PubMed Central

    Gil, Rosario; Silva, Francisco J.; Peretó, Juli; Moya, Andrés

    2004-01-01

    The availability of a large number of complete genome sequences raises the question of how many genes are essential for cellular life. Trying to reconstruct the core of the protein-coding gene set for a hypothetical minimal bacterial cell, we have performed a computational comparative analysis of eight bacterial genomes. Six of the analyzed genomes are very small due to a dramatic genome size reduction process, while the other two, corresponding to free-living relatives, are larger. The available data from several systematic experimental approaches to define all the essential genes in some completely sequenced bacterial genomes were also considered, and a reconstruction of a minimal metabolic machinery necessary to sustain life was carried out. The proposed minimal genome contains 206 protein-coding genes with all the genetic information necessary for self-maintenance and reproduction in the presence of a full complement of essential nutrients and in the absence of environmental stress. The main features of such a minimal gene set, as well as the metabolic functions that must be present in the hypothetical minimal cell, are discussed. PMID:15353568
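
At its simplest, the comparative step described above reduces to set operations over per-genome gene inventories. A minimal sketch with invented toy gene families (not the actual eight genomes of the study):

```python
# The "core" is the set of gene families present in every genome analyzed;
# genes are assumed to be pre-clustered into named orthologous families.
genomes = {
    "reduced_genome_1": {"dnaA", "rpoB", "ftsZ", "trpA"},
    "reduced_genome_2": {"dnaA", "rpoB", "ftsZ"},
    "free_living_rel":  {"dnaA", "rpoB", "ftsZ", "trpA", "lacZ"},
}

core = set.intersection(*genomes.values())  # shared by all genomes
pan = set.union(*genomes.values())          # found in at least one genome
```

The study then refines such a computational core with experimental essentiality data, since genes lost in some lineages can still be functionally necessary.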

  12. Awake craniotomy for microsurgical obliteration of mycotic aneurysms: technical report of three cases.

    PubMed

    Lüders, Jürgen C; Steinmetz, Michael P; Mayberg, Marc R

    2005-01-01

    Infectious (mycotic) aneurysms that do not resolve with medical treatment require surgical obliteration, usually requiring sacrifice of the parent artery. In addition, patients with mycotic aneurysms frequently need subsequent cardiac valve repair, which often necessitates anticoagulation. Three cases of awake craniotomy for microsurgical clipping of mycotic aneurysms are presented. Awake minimally invasive craniotomy using frameless stereotactic guidance on the basis of computed tomographic angiography enables temporary occlusion of the parent artery with neurological assessment before obliteration of the aneurysm. A 56-year-old woman presented with progressively worsening mitral valve disease and a history of subacute bacterial endocarditis and subarachnoid hemorrhage 30 years previously. A cerebral angiogram revealed a 4-mm left middle cerebral artery (MCA) angular branch aneurysm, which required obliteration before mitral valve replacement. The second patient, a 64-year-old woman with a history of rheumatic fever, had an 8-mm right distal MCA aneurysm diagnosed in the setting of pulmonary abscess and worsening cardiac function as a result of mitral valve disease. The third patient, a 57-year-old man with a history of fevers, night sweats, and progressive mitral valve disease, had an enlarging left MCA angular branch aneurysm despite the administration of antibiotics. Because of their location on distal MCA branches, none of the aneurysms were amenable to preoperative test balloon occlusion. After undergoing stereotactic computed tomographic angiography with fiducial markers, the patients underwent a minimally invasive awake craniotomy with frameless stereotactic navigation. In all cases, the results of the neurological examination were unchanged during temporary parent artery occlusion and the aneurysms were successfully obliterated. 
Awake minimally invasive craniotomy for an infectious aneurysm located in eloquent brain enables awake testing before permanent clipping or vessel sacrifice. Combining frameless stereotactic navigation with computed tomographic angiography allowed us to perform the operation quickly through a small craniotomy with minimal exploration.

  13. Rule extraction from minimal neural networks for credit card screening.

    PubMed

    Setiono, Rudy; Baesens, Bart; Mues, Christophe

    2011-08-01

    While feedforward neural networks have been widely accepted as effective tools for solving classification problems, the issue of finding the best network architecture remains unresolved, particularly so in real-world problem settings. We address this issue in the context of credit card screening, where it is important not only to find a neural network with good predictive performance but also one that facilitates a clear explanation of how it produces its predictions. We show that minimal neural networks with as few as one hidden unit provide good predictive accuracy, while having the added advantage of making it easier to generate concise and comprehensible classification rules for the user. To further reduce model size, a novel approach is suggested in which network connections from the input units to this hidden unit are removed by a straightforward pruning procedure. In terms of predictive accuracy, both the minimized neural networks and the rule sets generated from them are shown to compare favorably with other neural-network-based classifiers. The rules generated from the minimized neural networks are concise and thus easier to validate in a real-life setting.
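
Connection removal of this kind can be illustrated with a simple magnitude criterion; this is a hedged stand-in for the paper's actual pruning procedure, with arbitrary weights and threshold.

```python
import numpy as np

def prune_by_magnitude(w, threshold):
    """Zero out input->hidden connections whose |weight| falls below the
    threshold; returns the pruned weights and a boolean mask of kept ones.
    (A simplified illustration, not the paper's exact criterion.)"""
    mask = np.abs(w) >= threshold
    return w * mask, mask

rng = np.random.default_rng(0)
w = rng.normal(size=(8, 1))   # 8 input units -> 1 hidden unit
pruned, kept = prune_by_magnitude(w, 0.5)
```

After pruning, only the surviving connections enter the extracted classification rules, which is what keeps the rule sets concise.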

  14. Free time minimizers for the three-body problem

    NASA Astrophysics Data System (ADS)

    Moeckel, Richard; Montgomery, Richard; Sánchez Morgado, Héctor

    2018-03-01

    Free time minimizers of the action (called "semi-static" solutions by Mañé in International congress on dynamical systems in Montevideo (a tribute to Ricardo Mañé), vol 362, pp 120-131, 1996) play a central role in the theory of weak KAM solutions to the Hamilton-Jacobi equation (Fathi in Weak KAM Theorem in Lagrangian Dynamics, Preliminary Version Number 10, 2017). We prove that any solution to Newton's three-body problem which is asymptotic to Lagrange's parabolic homothetic solution is eventually a free time minimizer. Conversely, we prove that every free time minimizer tends to Lagrange's solution, provided the mass ratios lie in a certain large open set. We were inspired by the work of Da Luz and Maderna (Math Proc Camb Philos Soc 156:209-227, 2014), which showed that every free time minimizer for the N-body problem is parabolic and therefore must be asymptotic to the set of central configurations. We exclude being asymptotic to Euler's central configurations by a second variation argument. Central configurations correspond to rest points for the McGehee blown-up dynamics. The large open set of mass ratios consists of those for which the linearized dynamics at each Euler rest point has a complex eigenvalue.

  15. Inferring the Minimal Genome of Mesoplasma florum by Comparative Genomics and Transposon Mutagenesis.

    PubMed

    Baby, Vincent; Lachance, Jean-Christophe; Gagnon, Jules; Lucier, Jean-François; Matteau, Dominick; Knight, Tom; Rodrigue, Sébastien

    2018-01-01

    The creation and comparison of minimal genomes will help better define the most fundamental mechanisms supporting life. Mesoplasma florum is a near-minimal, fast-growing, nonpathogenic bacterium potentially amenable to genome reduction efforts. In a comparative genomic study of 13 M. florum strains, including 11 newly sequenced genomes, we have identified the core genome and open pangenome of this species. Our results show that all of the strains have approximately 80% of their gene content in common. Of the remaining 20%, 17% of the genes were found in multiple strains and 3% were unique to individual strains. On the basis of random transposon mutagenesis, we also estimated that ~290 out of 720 genes are essential for M. florum L1 in rich medium. We next evaluated different genome reduction scenarios for M. florum L1 by using gene conservation and essentiality data, as well as comparisons with the first working approximation of a minimal organism, Mycoplasma mycoides JCVI-syn3.0. Our results suggest that 409 of the 473 M. mycoides JCVI-syn3.0 genes have orthologs in M. florum L1. Conversely, 57 putatively essential M. florum L1 genes have no homolog in M. mycoides JCVI-syn3.0. This suggests differences in minimal genome compositions, even for these evolutionarily closely related bacteria. IMPORTANCE Recent years have witnessed the development of whole-genome cloning and transplantation methods and the complete synthesis of entire chromosomes. Recently, the first minimal cell, Mycoplasma mycoides JCVI-syn3.0, was created. Despite these milestone achievements, several questions remain to be answered. For example, is the composition of minimal genomes virtually identical in phylogenetically related species? On the basis of comparative genomics and transposon mutagenesis, we investigated this question by using an alternative model, Mesoplasma florum, that is also amenable to genome reduction efforts.
Our results suggest that the creation of additional minimal genomes could help reveal different gene compositions and strategies that can support life, even within closely related species.

  16. Evaluation of a rapid single multiplex microsatellite-based assay for use in forensic genetic investigations in dogs.

    PubMed

    Clark, Leigh Anne; Famula, Thomas R; Murphy, Keith E

    2004-10-01

    To develop a set of microsatellite markers, composed of a minimal number of these markers, suitable for use in forensic genetic investigations in dogs. Blood, tissue, or buccal epithelial cells from 364 dogs of 85 breeds and mixed breeds and 19 animals from related species in the family Canidae. 61 tetranucleotide microsatellite markers were characterized on the basis of number and size of alleles, ease of genotyping, chromosomal location, and ability to be coamplified. The range in allele size, number of alleles, total heterozygosity, and fixation index for each marker were determined by use of genotype data from 383 dogs and related species. Polymorphism information content was calculated for several breeds of dogs. 7 microsatellite markers could be coamplified. These markers were labeled with fluorescent dyes, multiplexed into a single reaction, and optimized for resolution in a commercial genetic analyzer. The multiplex set was used to identify sires for 2 mixed litters. The test was not species specific; genotype information collected for wolves, coyotes, jackals, New Guinea singing dogs, and an African wild dog could not distinguish between these species. This set of 7 microsatellite markers is useful in forensic applications (ie, identification of dogs and determination of parentage) in closely related animals and is applicable to a wide range of species belonging to the family Canidae.
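
The parentage determination rests on simple Mendelian exclusion at each locus: an offspring must share at least one allele with a true parent at every marker. A sketch under assumed data; the locus names and allele sizes below are invented, not the study's seven multiplexed markers.

```python
def compatible_parent(offspring, candidate):
    """Return True if, at every genotyped locus, at least one offspring
    allele also occurs in the candidate parent (Mendelian compatibility).
    Genotypes are {locus: (allele1, allele2)} with allele sizes in bp."""
    return all(
        any(a in candidate[locus] for a in alleles)
        for locus, alleles in offspring.items()
    )

# Hypothetical three-animal example.
pup = {"locusA": (105, 109), "locusB": (121, 125)}
sire = {"locusA": (105, 113), "locusB": (125, 129)}
dam = {"locusA": (101, 103), "locusB": (121, 131)}
```

A candidate failing the check at any locus is excluded; candidates passing at all loci remain possible parents, with likelihood ratios resolving among them in practice.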

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Graesser, Michael L.

    Here, a discovery of neutrinoless double-β decay would be profound, providing the first direct experimental evidence of ΔL = 2 lepton-number-violating processes. While a natural explanation is provided by an effective Majorana neutrino mass, other new physics interpretations should be carefully evaluated. At low energies such new physics could manifest itself in the form of color and SU(2)_L × U(1)_Y invariant higher-dimension operators. Here we determine a complete set of electroweak-invariant dimension-9 operators, and our analysis supersedes those that only impose U(1)_em invariance. Imposing electroweak invariance implies: 1) a significantly reduced set of leading-order operators compared to only imposing U(1)_em invariance; and 2) other collider signatures. Prior to imposing electroweak invariance we find a minimal basis of 24 dimension-9 operators, which is reduced to 11 electroweak-invariant operators at leading order in the expansion in the Higgs vacuum expectation value. We set up a systematic analysis of the hadronic realization of the 4-quark operators using chiral perturbation theory, and apply it to determine which of these operators have long-distance pion enhancements at leading order in the chiral expansion. We also find at dimension-11 and dimension-13 the electroweak-invariant operators that, after electroweak symmetry breaking, produce the remaining ΔL = 2 operators that would appear at dimension-9 if only U(1)_em is imposed.

  18. Public health information in crisis-affected populations: a review of methods and their use for advocacy and action.

    PubMed

    Checchi, Francesco; Warsame, Abdihamid; Treacy-Wong, Victoria; Polonsky, Jonathan; van Ommeren, Mark; Prudhon, Claudine

    2017-11-18

    Valid and timely information about various domains of public health underpins the effectiveness of humanitarian public health interventions in crises. However, obstacles including insecurity, insufficient resources and skills for data collection and analysis, and the absence of validated methods combine to hamper the quantity and quality of public health information available to humanitarian responders. This paper, the second in a Series of four papers, reviews available methods to collect public health data pertaining to different domains of health and health services in crisis settings, including population size and composition, exposure to armed attacks, sexual and gender-based violence, food security and feeding practices, nutritional status, physical and mental health outcomes, public health service availability, coverage and effectiveness, and mortality. The paper also quantifies the availability of a minimal essential set of information in large armed conflict and natural disaster crises since 2010: we show that information was available and timely only in a small minority of cases. On the basis of this observation, we propose an agenda for methodological research and the steps required to improve on the current use of available methods, including setting up a dedicated interagency service for public health information and epidemiology in crises. Copyright © 2017 World Health Organization. Published by Elsevier Ltd. All rights reserved.

  19. An algorithm for designing minimal microbial communities with desired metabolic capacities

    PubMed Central

    Eng, Alexander; Borenstein, Elhanan

    2016-01-01

    Motivation: Recent efforts to manipulate various microbial communities, such as fecal microbiota transplant and bioreactor systems’ optimization, suggest a promising route for microbial community engineering with numerous medical, environmental and industrial applications. However, such applications are currently restricted in scale and often rely on mimicking or enhancing natural communities, calling for the development of tools for designing synthetic communities with specific, tailored, desired metabolic capacities. Results: Here, we present a first step toward this goal, introducing a novel algorithm for identifying minimal sets of microbial species that collectively provide the enzymatic capacity required to synthesize a set of desired target product metabolites from a predefined set of available substrates. Our method integrates a graph theoretic representation of network flow with the set cover problem in an integer linear programming (ILP) framework to simultaneously identify possible metabolic paths from substrates to products while minimizing the number of species required to catalyze these metabolic reactions. We apply our algorithm to successfully identify minimal communities both in a set of simple toy problems and in more complex, realistic settings, and to investigate metabolic capacities in the gut microbiome. Our framework adds to the growing toolset for supporting informed microbial community engineering and for ultimately realizing the full potential of such engineering efforts. Availability and implementation: The algorithm source code, compilation, usage instructions and examples are available under a non-commercial research use only license at https://github.com/borenstein-lab/CoMiDA. Contact: elbo@uw.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27153571
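
The core computational idea (pick the fewest species whose combined enzymatic repertoires cover the required reactions) is an instance of set cover. The paper solves it exactly inside an ILP together with network flow; the sketch below substitutes the classic greedy approximation, on invented toy data, just to make the selection step concrete.

```python
def greedy_min_community(species_enzymes, required):
    """Greedy set cover: repeatedly pick the species covering the most
    still-missing reactions. An approximation; the paper's ILP is exact."""
    missing, chosen = set(required), []
    while missing:
        best = max(species_enzymes, key=lambda s: len(species_enzymes[s] & missing))
        gain = species_enzymes[best] & missing
        if not gain:
            raise ValueError("required reactions cannot be covered")
        chosen.append(best)
        missing -= gain
    return chosen

# Hypothetical species, each annotated with the reactions it can catalyze.
species = {
    "A": {"r1", "r2"},
    "B": {"r3"},
    "C": {"r2", "r3", "r4"},
}
community = greedy_min_community(species, {"r1", "r2", "r3", "r4"})
```

Note that greedy can return suboptimal communities on adversarial inputs, which is precisely why the authors embed the exact set cover objective in an ILP framework.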

  20. Improving the performance of minimizers and winnowing schemes

    PubMed Central

    Marçais, Guillaume; Pellow, David; Bork, Daniel; Orenstein, Yaron; Shamir, Ron; Kingsford, Carl

    2017-01-01

    Abstract Motivation: The minimizers scheme is a method for selecting k-mers from sequences. It is used in many bioinformatics software tools to bin comparable sequences or to sample a sequence in a deterministic fashion at approximately regular intervals, in order to reduce memory consumption and processing time. Although very useful, the minimizers selection procedure has undesirable behaviors (e.g. too many k-mers are selected when processing certain sequences). Some of these problems were already known to the authors of the minimizers technique, and the natural lexicographic ordering of k-mers used by minimizers was recognized as their origin. Many software tools using minimizers employ ad hoc variations of the lexicographic order to alleviate those issues. Results: We provide an in-depth analysis of the effect of k-mer ordering on the performance of the minimizers technique. By using small universal hitting sets (a recently defined concept), we show how to significantly improve the performance of minimizers and avoid some of its worst behaviors. Based on these results, we encourage bioinformatics software developers to use an ordering based on a universal hitting set or, if not possible, a randomized ordering, rather than the lexicographic order. This analysis also settles, in the negative, a conjecture by Schleimer et al. on the expected density of minimizers in a random sequence. Availability and Implementation: The software used for this analysis is available on GitHub: https://github.com/gmarcais/minimizers.git. Contact: gmarcais@cs.cmu.edu or carlk@cs.cmu.edu PMID:28881970
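
The minimizers scheme itself is compact enough to sketch. This is a generic illustration, not the authors' code: by default it uses the lexicographic order the paper criticizes, and passing a different `order` function (e.g. a random hash) changes the selection, which is the knob the analysis studies.

```python
def minimizers(seq, k, w, order=None):
    """Select the minimizer (smallest k-mer under `order`, ties broken by
    leftmost position) of every window of w consecutive k-mers; returns
    the set of (position, k-mer) picks."""
    order = order or (lambda kmer: kmer)  # default: lexicographic order
    kmers = [seq[i:i + k] for i in range(len(seq) - k + 1)]
    picked = set()
    for start in range(len(kmers) - w + 1):
        window = range(start, start + w)
        best = min(window, key=lambda i: (order(kmers[i]), i))
        picked.add((best, kmers[best]))
    return picked

sel = minimizers("ACGTACGTGA", k=3, w=4)   # lexicographic ordering
```

The density of the selection (picked k-mers per position) is the performance measure the paper analyzes; orderings derived from universal hitting sets lower it toward the theoretical minimum.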

  1. Nonlinear transient analysis by energy minimization: A theoretical basis for the ACTION computer code. [predicting the response of a lightweight aircraft during a crash

    NASA Technical Reports Server (NTRS)

    Kamat, M. P.

    1980-01-01

    The formulation basis for establishing the static or dynamic equilibrium configurations of finite element models of structures which may behave in the nonlinear range is provided. With both geometric and time-independent material nonlinearities included, the development is restricted to simple one- and two-dimensional finite elements, which are regarded as the basic elements for modeling full aircraft-like structures under crash conditions. Representations of a rigid link and an impenetrable contact plane are added to the deformation model so that any number of nodes of the finite element model may be connected by a rigid link or may contact the plane. Equilibrium configurations are derived as the stationary conditions of a potential function of the generalized nodal variables of the model. Minimization of the nonlinear potential function is achieved by using the best current variable metric update formula for unconstrained minimization. Powell's conjugate gradient algorithm, which offers very low storage requirements at a slight increase in the total number of calculations, is provided as an alternative for extremely large-scale problems.
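
As a toy analogue of the solution strategy (equilibrium as the minimizer of a potential in the generalized nodal variables, found by a variable metric method), the sketch below applies SciPy's BFGS quasi-Newton minimizer to a hypothetical two-variable nonlinear potential. It is not the ACTION code's formulation, only the same class of algorithm on invented data.

```python
import numpy as np
from scipy.optimize import minimize

def potential(x):
    # Hypothetical nonlinear potential in two generalized coordinates;
    # its unique minimum (the "equilibrium configuration") is at (1, 1).
    return (x[0] - 1.0) ** 2 + 10.0 * (x[1] - x[0] ** 2) ** 2

# Variable metric (BFGS) minimization from an undeformed initial guess.
res = minimize(potential, x0=np.zeros(2), method="BFGS")
```

For very large models, a conjugate gradient method (`method="CG"` in SciPy) trades some extra function evaluations for the low storage the abstract mentions.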

  2. Intranets: virtual procedure manuals for the pathology lab.

    PubMed

    Ruby, S G; Krempel, G

    1998-08-01

    A novel system exists for replacing standard written operation manuals using a computerized PC-based peer-to-peer network. The system design is based on commonly available hardware and software and utilizes existing equipment to minimize implementation expense. The system is relatively easy to implement and maintain, involves minimal training, and should quickly become a financial asset. In addition, such a system can improve access to laboratory procedure manuals so that resources can be better used on a daily basis.

  3. Convergence of third order correlation energy in atoms and molecules.

    PubMed

    Kahn, Kalju; Granovsky, Alex A; Noga, Jozef

    2007-01-30

    We have investigated the convergence of the third order correlation energy within the hierarchies of correlation consistent basis sets for helium, neon, and water, and for three stationary points of hydrogen peroxide. This analysis confirms that singlet pair energies converge much more slowly than triplet pair energies. In addition, singlet pair energies with the (aug)-cc-pVDZ and (aug)-cc-pVTZ basis sets do not follow a converging trend, and energies with three basis sets larger than aug-cc-pVTZ are generally required for reliable extrapolations of third order correlation energies, making explicitly correlated R12 calculations preferable.
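
Extrapolations of the kind discussed here are commonly done with the two-point inverse-cubic formula E_CBS = (X^3·E_X - Y^3·E_Y) / (X^3 - Y^3) for consecutive cardinal numbers X and Y = X + 1 of the correlation consistent sets. A minimal sketch; the correlation energies below are made-up illustrations, not values from the paper.

```python
def cbs_extrapolate(e_x, x, e_y, y):
    """Two-point inverse-cubic (1/X^3) extrapolation of correlation
    energies to the complete-basis-set limit."""
    return (x ** 3 * e_x - y ** 3 * e_y) / (x ** 3 - y ** 3)

# Hypothetical TZ (X=3) and QZ (Y=4) correlation energies in hartree.
e_cbs = cbs_extrapolate(-0.2500, 3, -0.2650, 4)
```

The paper's point is that when the underlying pair energies do not yet follow the 1/X^3 trend (as for the singlet pairs with DZ/TZ sets), such a two-point fit is unreliable, and either larger sets or explicitly correlated R12 methods are needed.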

  4. Nutrient profiles discriminate between foods according to their contribution to nutritionally adequate diets: a validation study using linear programming and the SAIN,LIM system.

    PubMed

    Darmon, Nicole; Vieux, Florent; Maillot, Matthieu; Volatier, Jean-Luc; Martin, Ambroise

    2009-04-01

    The nutrient profile concept implies that it is possible to discriminate between foods according to their contribution to a healthy diet on the basis of their nutrient contents only. The objective was to test the compatibility between nutrient profiling and nutrient-based recommendations by using diet modeling with linear programming. Food consumption data from the French "Individuelle et Nationale sur les Consommations Alimentaires" dietary survey and its associated food-composition database were used as input data. Each food was allocated to 1 of 4 classes, according to the SAIN,LIM system -- a nutrient profiling system based on 2 independent scores, including a total of 8 basic plus 4 optional nutrients. The possibility to model diets fulfilling a set of 40 nutrient recommendations (healthy models) was tested by using foods from a given nutrient profile class only or from a combination of classes. The possibility to fulfill a set of nutrient constraints in contradiction with the recommendations (unhealthy models) was also tested. For each model, the feasible energy range was assessed by minimizing and maximizing total energy content. With foods from the most favorable nutrient profile class, healthy diets could be modeled, but it was impossible to design unhealthy diets within a realistic range of energy intake with these foods. With foods from the least favorable class, unhealthy, but not healthy, diets could be designed. Both healthy and unhealthy diets could be designed with foods from intermediate classes. On the basis of a few key nutrients, it is possible to predict the ability of a given food to facilitate -- or to impair -- the fulfillment of a large number of nutrient recommendations.
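
The diet models described above are linear programs. A minimal sketch with SciPy's `linprog`, using two hypothetical foods and two nutrient lower bounds (the numbers are invented, not from the SAIN,LIM study):

```python
import numpy as np
from scipy.optimize import linprog

energy = np.array([2.0, 1.0])        # energy per 100 g of each food
nutrient = np.array([[1.0, 0.2],     # nutrient 1 content per 100 g
                     [0.1, 0.8]])    # nutrient 2 content per 100 g
minimum = np.array([1.0, 0.8])       # recommended minimum intakes

# linprog minimizes c @ x subject to A_ub @ x <= b_ub;
# signs are flipped to express the ">= minimum" nutrient constraints.
res = linprog(c=energy, A_ub=-nutrient, b_ub=-minimum, bounds=(0, None))
```

In the study, the same machinery runs over the full food-composition database with 40 nutrient constraints, restricting the candidate foods to one profile class at a time to test whether a feasible (healthy or unhealthy) diet exists.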

  5. Ab initio rate constants from hyperspherical quantum scattering: Application to H+C2H6 and H+CH3OH

    NASA Astrophysics Data System (ADS)

    Kerkeni, Boutheïna; Clary, David C.

    2004-10-01

    The dynamics and kinetics of the abstraction reactions of H atoms with ethane and methanol have been studied using a quantum mechanical procedure. Bonds being broken and formed are treated with explicit hyperspherical quantum dynamics. The ab initio potential energy surfaces for these reactions have been developed from a minimal number of grid points (an average of 48 points) and are given by analytical functions. All the degrees of freedom except the breaking and forming bonds are optimized using the second order perturbation theory method with a correlation consistent polarized valence triple zeta basis set. Single point energies are calculated on the optimized geometries with coupled cluster theory and the same basis set. The reaction of H with C2H6 is endothermic by 1.5 kcal/mol and has a vibrationally adiabatic barrier of 12 kcal/mol. The reaction of H with CH3OH presents two reactive channels: the methoxy and the hydroxymethyl channels. The former is endothermic by 0.24 kcal/mol and has a vibrationally adiabatic barrier of 13.29 kcal/mol; the latter is exothermic by 7.87 kcal/mol and has a vibrationally adiabatic barrier of 8.56 kcal/mol. We report state-to-state and state-selected cross sections together with state-to-state rate constants for the title reactions. Thermal rate constants for these reactions exhibit large quantum tunneling effects when compared to conventional transition state theory results. For H+CH3OH, it is found that the CH2OH product is the dominant channel, and that the CH3O channel contributes just 2% at 500 K. For both reactions, rate constants are in good agreement with some measurements.

  6. Sensitivity of the Properties of Ruthenium “Blue Dimer” to Method, Basis Set, and Continuum Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ozkanlar, Abdullah; Clark, Aurora E.

    2012-05-23

    The ruthenium “blue dimer” [(bpy)2RuIIIOH2]2O4+ is best known as the first well-defined molecular catalyst for water oxidation. It has been the subject of numerous computational studies, primarily employing density functional theory. However, those studies have been limited in the functionals, basis sets, and continuum models employed. The controversy in the calculated electronic structure and the reaction energetics of this catalyst highlights the necessity of benchmark calculations that explore the role of density functionals, basis sets, and continuum models upon the essential features of blue-dimer reactivity. In this paper, we report Kohn-Sham complete basis set (KS-CBS) limit extrapolations of the electronic structure of the “blue dimer” using GGA (BPW91 and BP86), hybrid-GGA (B3LYP), and meta-GGA (M06-L) density functionals. The dependence of solvation free energy corrections on the different cavity types (UFF, UA0, UAHF, UAKS, Bondi, and Pauling) within the polarizable and conductor-like polarizable continuum models has also been investigated. The most common basis sets of double-zeta quality are shown to yield results close to the KS-CBS limit; however, large variations are observed in the reaction energetics as a function of the density functional and continuum cavity model employed.

  7. Application of the dual-kinetic-balance sets in the relativistic many-body problem of atomic structure

    NASA Astrophysics Data System (ADS)

    Beloy, Kyle; Derevianko, Andrei

    2008-09-01

    The dual-kinetic-balance (DKB) finite basis set method for solving the Dirac equation for hydrogen-like ions [V.M. Shabaev et al., Phys. Rev. Lett. 93 (2004) 130405] is extended to problems with a non-local spherically-symmetric Dirac-Hartree-Fock potential. We implement the DKB method using B-spline basis sets and compare its performance with the widely-employed approach of Notre Dame (ND) group [W.R. Johnson, S.A. Blundell, J. Sapirstein, Phys. Rev. A 37 (1988) 307-315]. We compare the performance of the ND and DKB methods by computing various properties of Cs atom: energies, hyperfine integrals, the parity-non-conserving amplitude of the 6s-7s transition, and the second-order many-body correction to the removal energy of the valence electrons. We find that for a comparable size of the basis set the accuracy of both methods is similar for matrix elements accumulated far from the nuclear region. However, for atomic properties determined by small distances, the DKB method outperforms the ND approach. In addition, we present a strategy for optimizing the size of the basis sets by choosing progressively smaller number of basis functions for increasingly higher partial waves. This strategy exploits suppression of contributions of high partial waves to typical many-body correlation corrections.

  8. Spin-orbit ZORA and four-component Dirac-Coulomb estimation of relativistic corrections to isotropic nuclear shieldings and chemical shifts of noble gas dimers.

    PubMed

    Jankowska, Marzena; Kupka, Teobald; Stobiński, Leszek; Faber, Rasmus; Lacerda, Evanildo G; Sauer, Stephan P A

    2016-02-05

    Hartree-Fock and density functional theory with the hybrid B3LYP and generalized gradient KT2 exchange-correlation functionals were used for nonrelativistic and relativistic nuclear magnetic shielding calculations of helium, neon, argon, krypton, and xenon dimers and free atoms. Relativistic corrections were calculated with the scalar and spin-orbit zeroth-order regular approximation Hamiltonian in combination with the large Slater-type basis set QZ4P, as well as with the four-component Dirac-Coulomb Hamiltonian using Dyall's acv4z basis sets. The relativistic corrections to the nuclear magnetic shieldings and chemical shifts are combined with nonrelativistic coupled cluster singles and doubles with noniterative triple excitations [CCSD(T)] calculations using the very large polarization-consistent basis sets aug-pcSseg-4 for He, Ne, and Ar, aug-pcSseg-3 for Kr, and the AQZP basis set for Xe. For the dimers, zero-point vibrational (ZPV) corrections obtained at the CCSD(T) level with the same basis sets were also added. Best estimates of the dimer chemical shifts are generated from these nuclear magnetic shieldings, and the relative importance of electron correlation, ZPV, and relativistic corrections for the shieldings and chemical shifts is analyzed. © 2015 Wiley Periodicals, Inc.

  9. Comparison of one-particle basis set extrapolation to explicitly correlated methods for the calculation of accurate quartic force fields, vibrational frequencies, and spectroscopic constants: Application to H2O, N2H+, NO2+, and C2H2

    NASA Astrophysics Data System (ADS)

    Huang, Xinchuan; Valeev, Edward F.; Lee, Timothy J.

    2010-12-01

    One-particle basis set extrapolation is compared with one of the new R12 methods for computing highly accurate quartic force fields (QFFs) and spectroscopic data, including molecular structures, rotational constants, and vibrational frequencies for the H2O, N2H+, NO2+, and C2H2 molecules. In general, agreement between the spectroscopic data computed from the best R12 and basis set extrapolation methods is very good with the exception of a few parameters for N2H+ where it is concluded that basis set extrapolation is still preferred. The differences for H2O and NO2+ are small and it is concluded that the QFFs from both approaches are more or less equivalent in accuracy. For C2H2, however, a known one-particle basis set deficiency for C-C multiple bonds significantly degrades the quality of results obtained from basis set extrapolation and in this case the R12 approach is clearly preferred over one-particle basis set extrapolation. The R12 approach used in the present study was modified in order to obtain high precision electronic energies, which are needed when computing a QFF. We also investigated including core-correlation explicitly in the R12 calculations, but conclude that current approaches are lacking. Hence core-correlation is computed as a correction using conventional methods. Considering the results for all four molecules, it is concluded that R12 methods will soon replace basis set extrapolation approaches for high accuracy electronic structure applications such as computing QFFs and spectroscopic data for comparison to high-resolution laboratory or astronomical observations, provided one uses a robust R12 method as we have done here. The specific R12 method used in the present study, CCSD(T)R12, incorporated a reformulation of one intermediate matrix in order to attain machine precision in the electronic energies. 
Final QFFs for N2H+ and NO2+ were computed, including basis set extrapolation, core-correlation, scalar relativity, and higher-order correlation and then used to compute highly accurate spectroscopic data for all isotopologues. Agreement with high-resolution experiment for 14N2H+ and 14N2D+ was excellent, but for 14N16O2+ agreement for the two stretching fundamentals is outside the expected residual uncertainty in the theoretical values, and it is concluded that there is an error in the experimental quantities. It is hoped that the highly accurate spectroscopic data presented for the minor isotopologues of N2H+ and NO2+ will be useful in the interpretation of future laboratory or astronomical observations.
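    The one-particle basis set extrapolation discussed in this record is typically the standard two-point inverse-cube formula for correlation energies. A minimal sketch of that formula; the example energies are hypothetical, not values from the paper:

```python
def cbs_extrapolate(e_x, x, e_y, y):
    """Two-point X**-3 extrapolation of correlation energies computed with
    cardinal numbers x < y, assuming the Helgaker-type form
        E(X) = E_CBS + A * X**-3
    and solving the two equations for E_CBS.
    """
    return (e_y * y**3 - e_x * x**3) / (y**3 - x**3)

# Hypothetical correlation energies (hartree) at quintuple- (X=5) and
# sextuple-zeta (X=6) levels:
e_cbs = cbs_extrapolate(-0.30520, 5, -0.30640, 6)
```

    The Hartree-Fock component is usually extrapolated separately (or taken at the largest basis), since it converges much faster than the correlation energy.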

  10. Traction patterns of tumor cells.

    PubMed

    Ambrosi, D; Duperray, A; Peschetola, V; Verdier, C

    2009-01-01

    The traction exerted by a cell on a planar deformable substrate can be indirectly obtained on the basis of the displacement field of the underlying layer. The usual methodology used to address this inverse problem is based on the exploitation of the Green tensor of the linear elasticity problem in a half space (Boussinesq problem), coupled with a minimization algorithm under force penalization. A possible alternative strategy is to exploit an adjoint equation, obtained on the basis of a suitable minimization requirement. The resulting system of coupled elliptic partial differential equations is applied here to determine the force field per unit surface generated by T24 tumor cells on a polyacrylamide substrate. The shear stress obtained by numerical integration provides quantitative insight of the traction field and is a promising tool to investigate the spatial pattern of force per unit surface generated in cell motion, particularly in the case of such cancer cells.
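    In discretized form, the "minimization algorithm under force penalization" mentioned above amounts to a Tikhonov-regularized least-squares problem. A minimal numpy sketch under that assumption; the operator G and the penalty weight are illustrative, and this is the penalized baseline rather than the paper's adjoint-equation formulation:

```python
import numpy as np

def traction_tikhonov(G, u, lam):
    """Recover a traction field t from measured substrate displacements u,
    given a discretized Green operator G (u ~ G @ t), by minimizing
        ||G t - u||**2 + lam * ||t||**2   (force penalization).
    The normal equations give t = (G^T G + lam I)^-1 G^T u.
    """
    n = G.shape[1]
    return np.linalg.solve(G.T @ G + lam * np.eye(n), G.T @ u)
```

    The penalty weight lam trades off fidelity to the measured displacements against the total force magnitude; it must be tuned to the noise level of the displacement data.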

  11. Toward a W4-F12 approach: Can explicitly correlated and orbital-based ab initio CCSD(T) limits be reconciled?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sylvetsky, Nitai; Martin, Jan M. L.; Peterson, Kirk A.

    2016-06-07

    In the context of high-accuracy computational thermochemistry, the valence coupled cluster with all singles and doubles (CCSD) correlation component of molecular atomization energies presents the most severe basis set convergence problem, followed by the (T) component. In the present paper, we make a detailed comparison, for an expanded version of the W4-11 thermochemistry benchmark, between, on the one hand, orbital-based CCSD/AV{5,6}Z + d and CCSD/ACV{5,6}Z extrapolation, and on the other hand CCSD-F12b calculations with cc-pVQZ-F12 and cc-pV5Z-F12 basis sets. This latter basis set, now available for H–He, B–Ne, and Al–Ar, is shown to be very close to the basis set limit. Apparent differences (which can reach 0.35 kcal/mol for systems like CCl4) between orbital-based and CCSD-F12b basis set limits disappear if basis sets with additional radial flexibility, such as ACV{5,6}Z, are used for the orbital calculation. Counterpoise calculations reveal that, while total atomization energies with V5Z-F12 basis sets are nearly free of BSSE, orbital calculations have significant BSSE even with AV(6 + d)Z basis sets, leading to non-negligible differences between raw and counterpoise-corrected extrapolated limits. This latter problem is greatly reduced by switching to ACV{5,6}Z core-valence basis sets, or simply adding an additional zeta to just the valence orbitals. Previous reports that all-electron approaches like HEAT (high-accuracy extrapolated ab initio thermochemistry) lead to different CCSD(T) limits than “valence limit + CV correction” approaches like Feller-Peterson-Dixon and Weizmann-4 (W4) theory can be rationalized in terms of the greater radial flexibility of core-valence basis sets. For (T) corrections, conventional CCSD(T)/AV{Q,5}Z + d calculations are found to be superior to scaled or extrapolated CCSD(T)-F12b calculations of similar cost.
    For a W4-F12 protocol, we recommend obtaining the Hartree-Fock and valence CCSD components from CCSD-F12b/cc-pV{Q,5}Z-F12 calculations, but the (T) component from conventional CCSD(T)/aug'-cc-pV{Q,5}Z + d calculations using Schwenke's extrapolation; post-CCSD(T), core-valence, and relativistic corrections are to be obtained as in the original W4 theory. W4-F12 is found to agree slightly better than W4 with ATcT (active thermochemical tables) data, at a substantial saving in computation time and especially I/O overhead. A W4-F12 calculation on benzene is presented as a proof of concept.

  12. Approximate error conjugation gradient minimization methods

    DOEpatents

    Kallman, Jeffrey S

    2013-05-21

    In one embodiment, a method includes selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, calculating an approximate error using the subset of rays, and calculating a minimum in a conjugate gradient direction based on the approximate error. In another embodiment, a system includes a processor for executing logic, logic for selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, logic for calculating an approximate error using the subset of rays, and logic for calculating a minimum in a conjugate gradient direction based on the approximate error. In other embodiments, computer program products, methods, and systems are described capable of using approximate error in constrained conjugate gradient minimization problems.
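    The embodiments above can be sketched in code. The following is an illustrative least-squares version (Fletcher-Reeves conjugate directions, with the error and gradient evaluated on a random subset of rows playing the role of rays), not the patent's exact algorithm:

```python
import numpy as np

def approx_error_cg(A, b, n_iter=60, subset_frac=0.25, seed=0):
    """Minimize ||A x - b||**2 with conjugate-gradient (Fletcher-Reeves)
    directions, evaluating the approximate error/gradient on only a random
    subset of "rays" (rows of A) at each iteration to cut cost.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x, d = np.zeros(n), np.zeros(n)
    g_prev = None
    for _ in range(n_iter):
        rows = rng.choice(m, size=max(1, int(subset_frac * m)), replace=False)
        As, bs = A[rows], b[rows]
        g = 2.0 * As.T @ (As @ x - bs)                # approximate gradient
        if g_prev is None or g_prev @ g_prev == 0.0:
            d = -g
        else:
            d = -g + (g @ g) / (g_prev @ g_prev) * d  # Fletcher-Reeves update
        Ad = As @ d
        if Ad @ Ad > 0.0:
            alpha = -(g @ d) / (2.0 * (Ad @ Ad))      # exact line minimum
            x = x + alpha * d                         # ...on the subset
        g_prev = g
    return x
```

    With subset_frac=1.0 this reduces to ordinary conjugate-gradient minimization; smaller fractions trade accuracy of each error evaluation for cheaper iterations, as the patent abstract describes.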

  13. The Costs of Legislated Minimal Competency Requirements. A background paper prepared for the Minimal Competency Workshops sponsored by the Education Commission of the States and the National Institute of Education.

    ERIC Educational Resources Information Center

    Anderson, Barry D.

    Little is known about the costs of setting up and implementing legislated minimal competency testing (MCT). To estimate the financial obstacles which lie between the idea and its implementation, MCT requirements are viewed from two perspectives. The first, government regulation, views legislated minimal competency requirements as an attempt by the…

  14. A converged calculation of the energy barrier to internal rotation in the ethylene-sulfur dioxide dimer

    NASA Astrophysics Data System (ADS)

    Resende, Stella M.; De Almeida, Wagner B.; van Duijneveldt-van de Rijdt, Jeanne G. C. M.; van Duijneveldt, Frans B.

    2001-08-01

    Geometrical parameters for the equilibrium (MIN) and lowest saddle-point (TS) geometries of the C2H4⋯SO2 dimer, and the corresponding binding energies, were calculated using the Hartree-Fock and correlated levels of ab initio theory, in basis sets ranging from the D95(d,p) double-zeta basis set to the aug-cc-pVQZ correlation consistent basis set. An assessment of the effect of the basis set superposition error (BSSE) on these results was made. The dissociation energy from the lowest vibrational state was estimated to be 705±100 cm-1 at the basis set limit, which is well within the range expected from experiment. The barrier to internal rotation was found to be 53±5 cm-1, slightly higher than the (revised) experimental result of 43 cm-1, probably due to zero-point vibrational effects. Our results clearly show that, in direct contrast with recent ideas, the BSSE correction affects differentially the MIN and TS binding energies and so has to be included in the calculation of small energy barriers such as that in the C2H4⋯SO2 dimer. Previous reports of positive MP2 frozen-core binding energies for this complex in basis D95(d,p) are confirmed. The anomalies are shown to be an artifact arising from an incorrect removal of virtual orbitals by the default frozen-core option in the GAUSSIAN program.
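    The counterpoise assessment of BSSE referred to above is simple arithmetic once the three dimer-basis energies are in hand. A sketch of the standard Boys-Bernardi scheme; the energies in the example are hypothetical, not numbers from the paper:

```python
HARTREE_TO_CM1 = 219474.63  # 1 hartree in cm^-1

def counterpoise_interaction(e_dimer, e_a_in_dimer_basis, e_b_in_dimer_basis):
    """Boys-Bernardi counterpoise-corrected interaction energy: each
    monomer is recomputed in the full dimer basis (ghost functions on the
    partner's sites), so the basis set superposition error cancels.
    """
    return e_dimer - e_a_in_dimer_basis - e_b_in_dimer_basis

# Hypothetical total energies in hartree:
e_int = counterpoise_interaction(-626.1000, -78.0500, -548.0468)
e_int_cm1 = e_int * HARTREE_TO_CM1
```

    Because the correction is evaluated separately at the MIN and TS geometries, it shifts the two binding energies by different amounts, which is why the abstract stresses that BSSE cannot be ignored when computing small barriers.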

  15. Of Minima and Maxima: The Social Significance of Minimal Competency Testing and the Search for Educational Excellence.

    ERIC Educational Resources Information Center

    Ericson, David P.

    1984-01-01

    Explores the many meanings of the minimal competency testing movement and the more recent mobilization for educational excellence in the schools. Argues that increasing the value of the diploma by setting performance standards on minimal competency tests and by elevating academic graduation standards may strongly conflict with policies encouraging…

  16. Maximize, minimize or target - optimization for a fitted response from a designed experiment

    DOE PAGES

    Anderson-Cook, Christine Michaela; Cao, Yongtao; Lu, Lu

    2016-04-01

    One of the common goals of running and analyzing a designed experiment is to find a location in the design space that optimizes the response of interest. Depending on the goal of the experiment, we may seek to maximize or minimize the response, or set the process to hit a particular target value. After the designed experiment, a response model is fitted and the optimal settings of the input factors are obtained based on the estimated response model. Furthermore, the suggested optimal settings of the input factors are then used in the production environment.

  17. Data-driven clustering of rain events: microphysics information derived from macro-scale observations

    NASA Astrophysics Data System (ADS)

    Djallel Dilmi, Mohamed; Mallet, Cécile; Barthes, Laurent; Chazottes, Aymeric

    2017-04-01

    Rain time series records are generally studied using rainfall rate or accumulation parameters, which are estimated for a fixed duration (typically 1 min, 1 h or 1 day). In this study we use the concept of rain events. The aim of the first part of this paper is to establish a parsimonious characterization of rain events, using a minimal set of variables selected among those normally used for the characterization of these events. A methodology is proposed, based on the combined use of a genetic algorithm (GA) and self-organizing maps (SOMs). It can be advantageous to use an SOM, since it allows a high-dimensional data space to be mapped onto a two-dimensional space while preserving, in an unsupervised manner, most of the information contained in the initial space topology. The 2-D maps obtained in this way allow the relationships between variables to be determined and redundant variables to be removed, thus leading to a minimal subset of variables. We verify that such 2-D maps make it possible to determine the characteristics of all events, on the basis of only five features (the event duration, the peak rain rate, the rain event depth, the standard deviation of the rain rate event and the absolute rain rate variation of the order of 0.5). From this minimal subset of variables, hierarchical cluster analyses were carried out. We show that clustering into two classes allows the conventional convective and stratiform classes to be determined, whereas classification into five classes allows this convective-stratiform classification to be further refined. Finally, our study made it possible to reveal the presence of some specific relationships between these five classes and the microphysics of their associated rain events.

  18. Design Principles and Algorithms for Air Traffic Arrival Scheduling

    NASA Technical Reports Server (NTRS)

    Erzberger, Heinz; Itoh, Eri

    2014-01-01

    This report presents design principles and algorithms for building a real-time scheduler of arrival aircraft based on a first-come-first-served (FCFS) scheduling protocol. The algorithms provide the conceptual and computational foundation for the Traffic Management Advisor (TMA) of the Center/terminal radar approach control facilities (TRACON) automation system, which comprises a set of decision support tools for managing arrival traffic at major airports in the United States. The primary objective of the scheduler is to assign arrival aircraft to a favorable landing runway and schedule them to land at times that minimize delays. A further objective of the scheduler is to allocate delays between high-altitude airspace far away from the airport and low-altitude airspace near the airport. A method of delay allocation is described that minimizes the average operating cost in the presence of errors in controlling aircraft to a specified landing time. This report is a revision of an earlier paper first presented as part of an Advisory Group for Aerospace Research and Development (AGARD) lecture series in September 1995. The authors, during vigorous discussions over the details of this paper, felt it was important to the air-traffic-management (ATM) community to revise and extend the original 1995 paper, providing more detail and clarity and thereby allowing future researchers to understand this foundational work as the basis for the TMA's scheduling algorithms.
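    The FCFS core of such a scheduler can be sketched in a few lines. This is a minimal single-runway illustration (no runway allocation or delay distribution between airspace regions, which the full TMA scheduler also handles); the function name and inputs are assumptions for the sketch:

```python
def fcfs_schedule(etas, min_separation):
    """Assign landing times on one runway under a first-come-first-served
    protocol: order aircraft by estimated time of arrival (ETA) and delay
    each just enough to keep the required separation behind the previous
    landing. Returns (eta, scheduled_time) pairs; delay = scheduled - eta.
    """
    schedule = []
    prev = None
    for eta in sorted(etas):
        t = eta if prev is None else max(eta, prev + min_separation)
        schedule.append((eta, t))
        prev = t
    return schedule
```

    For example, with ETAs of 0, 50, 60, and 200 seconds and a 90-second separation requirement, the second and third aircraft absorb delay while the first and (after the queue drains) none of the slack is wasted.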

  19. Applicability of PM3 to transphosphorylation reaction path: Toward designing a minimal ribozyme

    NASA Technical Reports Server (NTRS)

    Manchester, John I.; Shibata, Masayuki; Setlik, Robert F.; Ornstein, Rick L.; Rein, Robert

    1993-01-01

    A growing body of evidence shows that RNA can catalyze many of the reactions necessary both for replication of genetic material and the possible transition into the modern protein-based world. However, contemporary ribozymes are too large to have self-assembled from a prebiotic oligonucleotide pool. Still, it is likely that the major features of the earliest ribozymes have been preserved as molecular fossils in the catalytic RNA of today. Therefore, the search for a minimal ribozyme has been aimed at finding the necessary structural features of a modern ribozyme (Beaudry and Joyce, 1990). Both a three-dimensional model and quantum chemical calculations are required to determine quantitatively the effects of these structural features on the reaction the ribozyme catalyzes. Previous studies of the reaction path have been conducted at the ab initio level, but these methods are limited to small models due to enormous computational requirements. Semiempirical methods have been applied to large systems in the past; however, their accuracy must first be assessed on a simple model of the ribozyme-catalyzed reaction, here the hydrolysis of phosphoric acid. We find that the PM3 results are qualitatively similar to ab initio results using large basis sets. Therefore, PM3 is suitable for studying the reaction path of the ribozyme-catalyzed reaction.

  20. A dynamic parking charge optimal control model under perspective of commuters' evolutionary game behavior

    NASA Astrophysics Data System (ADS)

    Lin, XuXun; Yuan, PengCheng

    2018-01-01

    In this research we consider commuters' dynamic learning effect by modeling trip mode choice behavior from a new perspective of dynamic evolutionary game theory. We explore the behavior pattern of different types of commuters and study the evolution path and equilibrium properties under different traffic conditions. We further establish a dynamic parking charge optimal control (referred to as DPCOC) model to alter commuters' trip mode choice while minimizing the total social cost. Numerical tests show that: (1) Under a fixed parking fee policy, the evolutionary results are completely decided by the travel time, and the only way to induce a shift to public transit is to increase the parking charge price. (2) Compared with a fixed parking fee policy, the DPCOC policy proposed in this research has several advantages. First, it can effectively steer the evolutionary path and evolutionarily stable strategy to a better situation while minimizing the total social cost. Second, it can reduce the sensitivity of trip mode choice behavior to traffic congestion and improve the ability to resist interferences and emergencies. Third, it is able to control the private car proportion to a stable state and make trip behavior more predictable for the transportation management department. The research results can provide a theoretical basis and decision-making references for commuters' mode choice prediction, the dynamic setting of urban parking charge prices, and public transit induction.
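    A standard way to formalize the evolutionary mode-choice dynamics described above is replicator dynamics over the car/transit split. The cost functions below (linear congestion cost plus a parking fee against a fixed transit cost) are illustrative assumptions, not the paper's model:

```python
def replicator_step(p_car, cost_car, cost_transit, dt=0.01):
    """One Euler step of replicator dynamics for the car/transit mode
    split: the share of a mode with below-average cost (above-average
    payoff) grows. p_car is the proportion of commuters driving.
    """
    payoff_car, payoff_transit = -cost_car, -cost_transit
    avg = p_car * payoff_car + (1.0 - p_car) * payoff_transit
    return p_car + dt * p_car * (payoff_car - avg)

def evolve(p0, parking_fee, steps=2000):
    """Iterate to (near) the evolutionarily stable mode split under
    illustrative costs: driving costs 1 + 2*p_car + fee (congestion plus
    parking charge), transit has a fixed cost of 2."""
    p = p0
    for _ in range(steps):
        cost_car = 1.0 + 2.0 * p + parking_fee
        cost_transit = 2.0
        p = replicator_step(p, cost_car, cost_transit)
    return p
```

    With these costs the interior equilibrium satisfies cost_car = cost_transit, i.e. p_car = (1 - fee)/2, so raising the parking fee shifts the stable state toward public transit, which is the mechanism the DPCOC controller exploits.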

  1. TINKTEP: A fully self-consistent, mutually polarizable QM/MM approach based on the AMOEBA force field

    NASA Astrophysics Data System (ADS)

    Dziedzic, Jacek; Mao, Yuezhi; Shao, Yihan; Ponder, Jay; Head-Gordon, Teresa; Head-Gordon, Martin; Skylaris, Chris-Kriton

    2016-09-01

    We present a novel quantum mechanical/molecular mechanics (QM/MM) approach in which a quantum subsystem is coupled to a classical subsystem described by the AMOEBA polarizable force field. Our approach permits mutual polarization between the QM and MM subsystems, effected through multipolar electrostatics. Self-consistency is achieved for both the QM and MM subsystems through a total energy minimization scheme. We provide an expression for the Hamiltonian of the coupled QM/MM system, which we minimize using gradient methods. The QM subsystem is described by the ONETEP linear-scaling DFT approach, which makes use of strictly localized orbitals expressed in a set of periodic sinc basis functions equivalent to plane waves. The MM subsystem is described by the multipolar, polarizable force field AMOEBA, as implemented in TINKER. Distributed multipole analysis is used to obtain, on the fly, a classical representation of the QM subsystem in terms of atom-centered multipoles. This auxiliary representation is used for all polarization interactions between QM and MM, allowing us to treat them on the same footing as in AMOEBA. We validate our method in tests of solute-solvent interaction energies, for neutral and charged molecules, demonstrating the simultaneous optimization of the quantum and classical degrees of freedom. Encouragingly, we find that the inclusion of explicit polarization in the MM part of QM/MM improves the agreement with fully QM calculations.

  2. Continuous Shape Estimation of Continuum Robots Using X-ray Images

    PubMed Central

    Lobaton, Edgar J.; Fu, Jinghua; Torres, Luis G.; Alterovitz, Ron

    2015-01-01

    We present a new method for continuously and accurately estimating the shape of a continuum robot during a medical procedure using a small number of X-ray projection images (e.g., radiographs or fluoroscopy images). Continuum robots have curvilinear structure, enabling them to maneuver through constrained spaces by bending around obstacles. Accurately estimating the robot’s shape continuously over time is crucial for the success of procedures that require avoidance of anatomical obstacles and sensitive tissues. Online shape estimation of a continuum robot is complicated by uncertainty in its kinematic model, movement of the robot during the procedure, noise in X-ray images, and the clinical need to minimize the number of X-ray images acquired. Our new method integrates kinematics models of the robot with data extracted from an optimally selected set of X-ray projection images. Our method represents the shape of the continuum robot over time as a deformable surface which can be described as a linear combination of time and space basis functions. We take advantage of probabilistic priors and numeric optimization to select optimal camera configurations, thus minimizing the expected shape estimation error. We evaluate our method using simulated concentric tube robot procedures and demonstrate that obtaining between 3 and 10 images from viewpoints selected by our method enables online shape estimation with errors significantly lower than using the kinematic model alone or using randomly spaced viewpoints. PMID:26279960

  4. Stereochemical analysis of (+)-limonene using theoretical and experimental NMR and chiroptical data

    NASA Astrophysics Data System (ADS)

    Reinscheid, F.; Reinscheid, U. M.

    2016-02-01

    Using limonene as a test molecule, the success and the limitations of three chiroptical methods (optical rotatory dispersion (ORD), electronic and vibrational circular dichroism, ECD and VCD) could be demonstrated. At quite low levels of theory (mpw1pw91/cc-pvdz, IEFPCM (integral equation formalism polarizable continuum model)) the experimental ORD values differ by less than 10 units from the calculated values. Modelling in the condensed phase still represents a challenge, so experimental NMR data were used to test for aggregation and solvent-solute interactions. After establishing a reasonable structural model, only the prediction of the ECD spectra showed a decisive dependence on the basis set: only augmented (in the case of Dunning's basis sets) or diffuse (in the case of Pople's basis sets) basis sets predicted the position and shape of the ECD bands correctly. Based on these results, we propose a procedure to assign the absolute configuration (AC) of an unknown compound using the comparison between experimental and calculated chiroptical data.

  5. First-principles investigation on Rydberg and resonance excitations: A case study of the firefly luciferin anion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Noguchi, Yoshifumi, E-mail: y.noguchi@issp.u-tokyo.ac.jp; Hiyama, Miyabi; Akiyama, Hidefumi

    2014-07-28

    The optical properties of an isolated firefly luciferin anion are investigated by using first-principles calculations, employing many-body perturbation theory to take into account the excitonic effect. The calculated photoabsorption spectra are compared with the results obtained using time-dependent density functional theory (TDDFT) employing localized atomic orbital (AO) basis sets, and with a recent experiment in vacuum. The present method reproduces well the line shape at the photon energies corresponding to the Rydberg and resonance excitations but overestimates the peak positions by about 0.5 eV. However, the TDDFT-calculated positions of some peaks are closer to those of the experiment. We also investigate the basis set dependence in describing the free electron states above the vacuum level, and the excitons involving transitions to these free electron states, and conclude that AO-only basis sets are inaccurate for free electron states and the use of a plane wave basis set is required.

  6. Basis set study of classical rotor lattice dynamics.

    PubMed

    Witkoskie, James B; Wu, Jianlan; Cao, Jianshu

    2004-03-22

    The reorientational relaxation of molecular systems is important in many phenomena and applications. In this paper, we explore the reorientational relaxation of a model Brownian rotor lattice system with short-range interactions in both the high and low temperature regimes, using a basis set expansion to capture collective motions of the system. The single-particle basis set is used in the high temperature regime, while the spin wave basis is used in the low temperature regime. The equations of motion derived in this approach are analogous to the generalized Langevin equation, but the equations render flexibility by allowing nonequilibrium initial conditions. This calculation shows that the choice of projection operators in the generalized Langevin equation (GLE) approach corresponds to defining a specific inner-product space, and this inner-product space should be chosen to reveal the important physics of the problem. The basis set approach corresponds to an inner product and projection operator that maintain the orthogonality of the spherical harmonics and provide a convenient platform for analyzing GLE expansions. The results compare favorably with numerical simulations, and the formalism is easily extended to more complex systems.

  7. Improving the performance of minimizers and winnowing schemes.

    PubMed

    Marçais, Guillaume; Pellow, David; Bork, Daniel; Orenstein, Yaron; Shamir, Ron; Kingsford, Carl

    2017-07-15

    The minimizers scheme is a method for selecting k-mers from sequences. It is used in many bioinformatics software tools to bin comparable sequences or to sample a sequence in a deterministic fashion at approximately regular intervals, in order to reduce memory consumption and processing time. Although very useful, the minimizers selection procedure has undesirable behaviors (e.g. too many k-mers are selected when processing certain sequences). Some of these problems were already known to the authors of the minimizers technique, and the natural lexicographic ordering of k-mers used by minimizers was recognized as their origin. Many software tools using minimizers employ ad hoc variations of the lexicographic order to alleviate those issues. We provide an in-depth analysis of the effect of k-mer ordering on the performance of the minimizers technique. By using small universal hitting sets (a recently defined concept), we show how to significantly improve the performance of minimizers and avoid some of its worst behaviors. Based on these results, we encourage bioinformatics software developers to use an ordering based on a universal hitting set or, if not possible, a randomized ordering, rather than the lexicographic order. This analysis also settles negatively a conjecture (by Schleimer et al.) on the expected density of minimizers in a random sequence. The software used for this analysis is available on GitHub: https://github.com/gmarcais/minimizers.git.
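    The selection procedure itself is compact. A minimal sketch of (w, k)-minimizers with a pluggable k-mer ordering (lexicographic by default, matching the behavior the paper criticizes); the function name and interface are assumptions for the sketch:

```python
def minimizers(seq, k, w, order=None):
    """Positions of (w, k)-minimizers of seq: in every window of w
    consecutive k-mers, keep the position of the smallest k-mer under
    `order` (lexicographic by default), breaking ties leftmost.
    Pass a different key to emulate randomized or UHS-based orderings.
    """
    key = order or (lambda kmer: kmer)
    kmers = [seq[i:i + k] for i in range(len(seq) - k + 1)]
    selected = set()
    for start in range(len(kmers) - w + 1):
        window = range(start, start + w)
        selected.add(min(window, key=lambda i: (key(kmers[i]), i)))
    return sorted(selected)
```

    The density (selected positions divided by total k-mers) is what the paper analyzes: a randomized or universal-hitting-set ordering drives it toward the ideal, whereas the lexicographic order over-selects on some sequences.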

  8. Theoretical study of the XP3 (X = Al, B, Ga) clusters

    NASA Astrophysics Data System (ADS)

    Ueno, Leonardo T.; Lopes, Cinara; Malaspina, Thaciana; Roberto-Neto, Orlando; Canuto, Sylvio; Machado, Francisco B. C.

    2012-05-01

    The lowest singlet and triplet states of AlP3, GaP3 and BP3 molecules with Cs, C2v and C3v symmetries were characterized using the B3LYP functional and the aug-cc-pVTZ and aug-cc-pVQZ correlated consistent basis sets. Geometrical parameters and vibrational frequencies were calculated and compared to existent experimental and theoretical data. Relative energies were obtained with single point CCSD(T) calculations using the aug-cc-pVTZ, aug-cc-pVQZ and aug-cc-pV5Z basis sets, and then extrapolating to the complete basis set (CBS) limit.

  9. How to compute isomerization energies of organic molecules with quantum chemical methods.

    PubMed

    Grimme, Stefan; Steinmetz, Marc; Korth, Martin

    2007-03-16

    The reaction energies for 34 typical organic isomerizations including oxygen and nitrogen heteroatoms are investigated with modern quantum chemical methods that have the perspective of also being applicable to large systems. The experimental reaction enthalpies are corrected for vibrational and thermal effects, and the thus derived "experimental" reaction energies are compared to corresponding theoretical data. A series of standard AO basis sets in combination with second-order perturbation theory (MP2, SCS-MP2), conventional density functionals (e.g., PBE, TPSS, B3-LYP, MPW1K, BMK), and new perturbative functionals (B2-PLYP, mPW2-PLYP) are tested. In three cases, obvious errors of the experimental values could be detected, and accurate coupled-cluster [CCSD(T)] reference values have been used instead. It is found that only triple-zeta quality AO basis sets provide results close enough to the basis set limit and that sets like the popular 6-31G(d) should be avoided in accurate work. Augmentation of small basis sets with diffuse functions has a notable effect in B3-LYP calculations that is attributed to intramolecular basis set superposition error and covers basic deficiencies of the functional. The new methods based on perturbation theory (SCS-MP2, X2-PLYP) are found to be clearly superior to many other approaches; that is, they provide mean absolute deviations of less than 1.2 kcal mol-1 and only a few (<10%) outliers. The best performance in the group of conventional functionals is found for the highly parametrized BMK hybrid meta-GGA. Contrary to accepted opinion, hybrid density functionals offer no real advantage over simple GGAs. For reasonably large AO basis sets, results of poor quality are obtained with the popular B3-LYP functional that cannot be recommended for thermochemical applications in organic chemistry. 
The results of this study are complementary to often used benchmarks based on atomization energies and should guide chemists in their search for accurate and efficient computational thermochemistry methods.

  10. Comment on “Rethinking first-principles electron transport theories with projection operators: The problems caused by partitioning the basis set” [J. Chem. Phys. 139, 114104 (2013)]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brandbyge, Mads, E-mail: mads.brandbyge@nanotech.dtu.dk

    2014-05-07

    In a recent paper Reuter and Harrison [J. Chem. Phys. 139, 114104 (2013)] question the widely used mean-field electron transport theories, which employ nonorthogonal localized basis sets. They claim these can violate an “implicit decoupling assumption,” leading to wrong results for the current, different from what would be obtained by using an orthogonal basis and dividing surfaces defined in real space. We argue that this assumption is not required to be fulfilled to get exact results. We show how the current/transmission calculated by the standard Green's function method is independent of whether or not the chosen basis set is nonorthogonal, and that the current for a given basis set is consistent with divisions in real space. The ambiguity known from charge population analysis for nonorthogonal bases does not carry over to calculations of charge flux.

  11. Experience in Construction and Operation of the Distributed Information Systems on the Basis of the Z39.50 Protocol

    NASA Astrophysics Data System (ADS)

    Zhizhimov, Oleg; Mazov, Nikolay; Skibin, Sergey

    Questions concerning the construction and operation of distributed information systems based on the ANSI/NISO Z39.50 Information Retrieval Protocol are discussed in the paper. The paper draws on the authors' experience in developing the ZooPARK server. The architecture of distributed information systems, the reliability of such systems, minimization of search time, and administration are examined. Problems in developing distributed information systems are also described.

  12. Minimal Risk in Pediatric Research: A Philosophical Review and Reconsideration

    PubMed Central

    Rossi, John; Nelson, Robert M.

    2017-01-01

    Despite more than thirty years of debate, disagreement persists among research ethicists about the most appropriate way to interpret the U.S. regulations on pediatric research, specifically the categories of “minimal risk” and a “minor increase over minimal risk.” Focusing primarily on the definition of “minimal risk,” we argue in this article that the continued debate about the pediatric risk categories is at least partly because their conceptual status is seldom considered directly. Once this is done, it becomes clear that the most popular strategy for interpreting “minimal risk”—defining it as a specific set of risks—is indefensible and, from a pragmatic perspective, unlikely to resolve disagreement. Primarily this is because judgments about minimal risk are both normative and heavily intuitive in nature and thus cannot easily be captured by reductions to a given set of risks. We suggest instead that a more defensible approach to evaluating risk should incorporate room for reflection and deliberation. This dispositional, deliberative framework can nonetheless accommodate a number of intellectual resources for reducing reliance on sheer intuition and improving the quality of risk evaluations. PMID:28777661

  13. LWS/SET Technology Experiment Carrier

    NASA Technical Reports Server (NTRS)

    Sherman, Barry; Giffin, Geoff

    2002-01-01

    This paper examines the approach taken to building a low-cost, modular spacecraft bus that can be used to support a variety of technology experiments in different space environments. It describes the techniques used and design drivers considered to ensure experiment independence from as yet selected host spacecraft. It describes the technology experiment carriers that will support NASA's Living With a Star Space Environment Testbed space missions. NASA has initiated the Living With a Star (LWS) Program to develop a better scientific understanding to address the aspects of the connected Sun-Earth system that affect life and society. A principal goal of the program is to bridge the gap between science, engineering, and user application communities. The Space Environment Testbed (SET) Project is one element of LWS. The Project will enable future science, operational, and commercial objectives in space and atmospheric environments by improving engineering approaches to the accommodation and/or mitigation of the effects of solar variability on technological systems. The SET Project is highly budget constrained and must seek to take advantage of as yet undetermined partnering opportunities for access to space. SET will conduct technology validation experiments hosted on available flight opportunities. The SET Testbeds will be developed in a manner that minimizes the requirements for accommodation, and will be flown as flight opportunities become available. To access the widest range of flight opportunities, two key development requirements are to maintain flexibility with respect to accommodation constraints and to have the capability to respond quickly to flight opportunities. Experiments, already developed to the technology readiness level of needing flight validation in the variable Sun-Earth environment, will be selected on the basis of the need for the subject technology, readiness for flight, need for flight resources and particular orbit. 
Experiments will be accumulated by the Project and manifested for specific flight opportunities as they become available. The SET Carrier is designed to present a standard set of interfaces to SET technology experiments and to be modular and flexible enough to interface to a variety of possible host spacecraft. The Carrier will have core components and mission unique components. Once the core carrier elements have been developed, only the mission unique components need to be defined and developed for any particular mission. This approach will minimize the mission specific cost and development schedule for a given flight opportunity. The standard set of interfaces provided by SET to experiments allows them to be developed independent of the particulars of a host spacecraft. The Carrier will provide the power, communication, and the necessary monitoring features to operate experiments. The Carrier will also provide all of the mechanical assemblies and harnesses required to adapt experiments to a particular host. Experiments may be hosted locally with the Carrier or remotely on the host spacecraft. The Carrier design will allow a single Carrier to support a variable number of experiments and will include features that support the ability to incrementally add experiments without disturbing the core architecture.

  14. Some considerations about Gaussian basis sets for electric property calculations

    NASA Astrophysics Data System (ADS)

    Arruda, Priscilla M.; Canal Neto, A.; Jorge, F. E.

    Recently, segmented contracted basis sets of double, triple, and quadruple zeta valence quality plus polarization functions (XZP, X = D, T, and Q, respectively) for the atoms from H to Ar were reported. In this work, with the objective of better describing polarizabilities, the QZP set was augmented with diffuse (s and p symmetries) and polarization (p, d, f, and g symmetries) functions that were chosen to maximize the mean dipole polarizability at the UHF and UMP2 levels, respectively. At the HF and B3LYP levels of theory, the electric dipole moment and static polarizability were evaluated for a sample of molecules. Comparisons were made with experimental data and with results obtained using a similar-sized basis set whose diffuse functions were optimized for the ground-state energy of the anion.

  15. Effect of normalization methods on the performance of supervised learning algorithms applied to HTSeq-FPKM-UQ data sets: 7SK RNA expression as a predictor of survival in patients with colon adenocarcinoma.

    PubMed

    Shahriyari, Leili

    2017-11-03

    One of the main challenges in machine learning (ML) is choosing an appropriate normalization method. Here, we examine the effect of various normalization methods on analyzing FPKM upper quartile (FPKM-UQ) RNA sequencing data sets. We collect the HTSeq-FPKM-UQ files of patients with colon adenocarcinoma from the TCGA-COAD project. We compare the three most common normalization methods: scaling, standardizing using the z-score, and vector normalization, by visualizing the normalized data set and evaluating the performance of 12 supervised learning algorithms on the normalized data set. Additionally, for each of these normalization methods, we use two different normalization strategies: normalizing samples (files) or normalizing features (genes). Regardless of the normalization method, a support vector machine (SVM) model with the radial basis function kernel had the maximum accuracy (78%) in predicting the vital status of the patients. However, the fitting time of the SVM depended on the normalization method, and it reached its minimum fitting time when files were normalized to the unit length. Furthermore, among all 12 learning algorithms and 6 different normalization techniques, the Bernoulli naive Bayes model after standardizing files had the best performance in terms of maximizing the accuracy as well as minimizing the fitting time. We also investigated the effect of dimensionality reduction methods on the performance of the supervised ML algorithms. Reducing the dimension of the data set did not increase the maximum accuracy of 78%. However, it led to the discovery of 7SK RNA gene expression as a predictor of survival in patients with colon adenocarcinoma with an accuracy of 78%. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
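
    As a hedged sketch of the three normalization methods compared above, the following NumPy snippet applies min-max scaling, z-score standardization, and unit-length vector normalization to a toy expression matrix; the axis argument switches between normalizing samples (files) and features (genes). The matrix and function names are illustrative, not taken from the paper's pipeline.

```python
import numpy as np

def scale_min_max(x, axis):
    # Min-max scaling: map each slice along `axis` into [0, 1].
    mn = x.min(axis=axis, keepdims=True)
    mx = x.max(axis=axis, keepdims=True)
    return (x - mn) / (mx - mn)

def z_score(x, axis):
    # Standardization: zero mean, unit variance along `axis`.
    return (x - x.mean(axis=axis, keepdims=True)) / x.std(axis=axis, keepdims=True)

def unit_length(x, axis):
    # Vector normalization: scale each slice to Euclidean norm 1.
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

# Toy expression matrix: rows = samples (files), columns = genes (features).
rng = np.random.default_rng(0)
X = rng.random((4, 6)) * 100.0

# axis=1 normalizes each sample (file); axis=0 normalizes each feature (gene).
X_files = unit_length(X, axis=1)
X_genes = z_score(X, axis=0)
X_scaled = scale_min_max(X, axis=0)   # each gene mapped into [0, 1]
print(np.linalg.norm(X_files, axis=1))
```

    The choice of axis is exactly the "normalize files vs. normalize genes" distinction made in the abstract.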

  16. A Simplified Approach to the Basis Functions of Symmetry Operations and Terms of Metal Complexes in an Octahedral Field with d[superscript 1] to d[superscript 9] Configurations

    ERIC Educational Resources Information Center

    Lee, Liangshiu

    2010-01-01

    The basis sets for symmetry operations of d[superscript 1] to d[superscript 9] complexes in an octahedral field and the resulting terms are derived for the ground states and spin-allowed excited states. The basis sets are of fundamental importance in group theory. This work addresses such a fundamental issue, and the results are pedagogically…

  17. Rail Safety/Equipment Crashworthiness : Volume 1. A Systems Analysis of Injury Minimization in Rail Systems

    DOT National Transportation Integrated Search

    1978-07-01

    The Department of Transportation, Transportation Systems Center (TSC), is providing technical assistance to the Federal Railroad Administration (FRA) in a program to improve railroad safety and efficiency by providing a technological basis for improv...

  18. 40 CFR 60.54c - Siting requirements.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... consider air pollution control alternatives that minimize, on a site-specific basis, to the maximum extent..., as long as they include the consideration of air pollution control alternatives specified in....54c Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED...

  19. Ab initio multiple cloning algorithm for quantum nonadiabatic molecular dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Makhov, Dmitry V.; Shalashilin, Dmitrii V.; Glover, William J.

    We present a new algorithm for ab initio quantum nonadiabatic molecular dynamics that combines the best features of ab initio Multiple Spawning (AIMS) and Multiconfigurational Ehrenfest (MCE) methods. In this new method, ab initio multiple cloning (AIMC), the individual trajectory basis functions (TBFs) follow Ehrenfest equations of motion (as in MCE). However, the basis set is expanded (as in AIMS) when these TBFs become sufficiently mixed, preventing prolonged evolution on an averaged potential energy surface. We refer to the expansion of the basis set as “cloning,” in analogy to the “spawning” procedure in AIMS. This synthesis of AIMS and MCE allows us to leverage the benefits of mean-field evolution during periods of strong nonadiabatic coupling while simultaneously avoiding mean-field artifacts in Ehrenfest dynamics. We explore the use of time-displaced basis sets, “trains,” as a means of expanding the basis set for little cost. We also introduce a new bra-ket averaged Taylor expansion (BAT) to approximate the necessary potential energy and nonadiabatic coupling matrix elements. The BAT approximation avoids the necessity of computing electronic structure information at intermediate points between TBFs, as is usually done in saddle-point approximations used in AIMS. The efficiency of AIMC is demonstrated on the nonradiative decay of the first excited state of ethylene. The AIMC method has been implemented within the AIMS-MOLPRO package, which was extended to include Ehrenfest basis functions.

  20. An Error-Entropy Minimization Algorithm for Tracking Control of Nonlinear Stochastic Systems with Non-Gaussian Variables

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Yunlong; Wang, Aiping; Guo, Lei

    This paper presents an error-entropy minimization tracking control algorithm for a class of dynamic stochastic systems. The system is represented by a set of time-varying discrete nonlinear equations with non-Gaussian stochastic input, where the statistical properties of the stochastic input are unknown. By using Parzen windowing with a Gaussian kernel to estimate the probability densities of the errors, recursive algorithms are then proposed to design the controller such that the tracking error can be minimized. The performance of the error-entropy minimization criterion is compared with mean-square-error minimization in the simulation results.
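
    The error-entropy idea above can be illustrated with a small sketch: Parzen windowing with a Gaussian kernel gives a plug-in estimate of Rényi's quadratic entropy of the tracking errors, which a controller would then drive down. This is a generic information-theoretic construction under an assumed kernel width sigma, not the paper's recursive algorithm.

```python
import numpy as np

def gaussian_kernel(x, sigma):
    # Gaussian density with standard deviation sigma.
    return np.exp(-x**2 / (2.0 * sigma**2)) / (np.sqrt(2.0 * np.pi) * sigma)

def information_potential(errors, sigma):
    # Parzen estimate V = (1/N^2) * sum_ij G_{sigma*sqrt(2)}(e_i - e_j).
    diffs = errors[:, None] - errors[None, :]
    return gaussian_kernel(diffs, sigma * np.sqrt(2.0)).mean()

def quadratic_error_entropy(errors, sigma):
    # Renyi's quadratic entropy H2 = -log V; minimizing H2 concentrates
    # the error distribution around a single value.
    return -np.log(information_potential(errors, sigma))

rng = np.random.default_rng(1)
tight = rng.normal(0.0, 0.1, 500)   # well-tracked: concentrated errors
loose = rng.normal(0.0, 1.0, 500)   # poorly tracked: dispersed errors
print(quadratic_error_entropy(tight, 0.5), quadratic_error_entropy(loose, 0.5))
```

    Concentrated errors give a lower entropy estimate than dispersed ones, which is why minimizing this quantity tightens tracking.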

  1. Application of quadratic optimization to supersonic inlet control

    NASA Technical Reports Server (NTRS)

    Lehtinen, B.; Zeller, J. R.

    1971-01-01

    The application of linear stochastic optimal control theory to the design of the control system for the air intake (inlet) of a supersonic air-breathing propulsion system is discussed. The controls must maintain a stable inlet shock position in the presence of random airflow disturbances and prevent inlet unstart. Two different linear time invariant control systems are developed. One is designed to minimize a nonquadratic index, the expected frequency of inlet unstart, and the other is designed to minimize the mean square value of inlet shock motion. The quadratic equivalence principle is used to obtain the best linear controller that minimizes the nonquadratic performance index. The two systems are compared on the basis of unstart prevention, control effort requirements, and sensitivity to parameter variations.
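
    The quadratic part of the design above, minimizing mean-square shock motion subject to a control-effort penalty, is the classic LQR problem. The sketch below, with illustrative matrices not taken from the paper, solves a discrete-time algebraic Riccati equation by value iteration and checks that the resulting state feedback stabilizes the loop.

```python
import numpy as np

# Hypothetical discretized 2-state model of inlet shock motion:
# x = [shock position, shock velocity] (illustrative, not from the paper).
A = np.array([[1.0, 0.1], [-0.4, 0.9]])
B = np.array([[0.0], [0.1]])
Q = np.diag([10.0, 1.0])   # weight on mean-square shock motion
R = np.array([[0.1]])      # weight on control effort

# Value-iteration solution of the discrete algebraic Riccati equation.
P = Q.copy()
for _ in range(500):
    BtP = B.T @ P
    K = np.linalg.solve(R + BtP @ B, BtP @ A)   # feedback gain, u = -K x
    P_next = Q + A.T @ P @ (A - B @ K)
    if np.max(np.abs(P_next - P)) < 1e-12:
        P = P_next
        break
    P = P_next

# The closed-loop system must be stable: spectral radius < 1.
rho = max(abs(np.linalg.eigvals(A - B @ K)))
print(rho)
```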

  2. Causal Set Approach to a Minimal Invariant Length

    NASA Astrophysics Data System (ADS)

    Raut, Usha

    2007-04-01

    Any attempt to quantize gravity would necessarily introduce a minimal observable length scale of the order of the Planck length. This conclusion is based on several different studies and thought experiments and appears to be an inescapable feature of all quantum gravity theories, irrespective of the method used to quantize gravity. Over the last few years there has been growing concern that such a minimal length might lead to a contradiction with the basic postulates of special relativity, in particular the Lorentz-Fitzgerald contraction. A few years ago, Rovelli et al. attempted to reconcile an invariant minimal length with Special Relativity, using the framework of loop quantum gravity. However, the inherently canonical formalism of the loop quantum approach is plagued by a variety of problems, many brought on by the separation of space and time co-ordinates. In this paper we use a completely different approach. Using the framework of the causal set paradigm, along with a statistical measure of closeness between Lorentzian manifolds, we re-examine the issue of introducing a minimal observable length that is not at odds with Special Relativity postulates.

  3. Efficient Deterministic Finite Automata Minimization Based on Backward Depth Information.

    PubMed

    Liu, Desheng; Huang, Zhiping; Zhang, Yimeng; Guo, Xiaojun; Su, Shaojing

    2016-01-01

    Obtaining a minimal automaton is a fundamental issue in the theory and practical implementation of deterministic finite automata (DFAs). A minimization algorithm is presented in this paper that consists of two main phases. In the first phase, the backward depth information is built, and the state set of the DFA is partitioned into many blocks. In the second phase, the state set is refined using a hash table. The minimization algorithm has a lower time complexity, O(n), than a naive comparison of transitions, O(n^2). Few states need to be refined by the hash table, because most states have been partitioned by the backward depth information in the coarse partition. This method achieves greater generality than previous methods because building the backward depth information is independent of the topological complexity of the DFA. The proposed algorithm can be applied not only to the minimization of acyclic automata or simple cyclic automata, but also to automata with high topological complexity. Overall, the proposal has three advantages: lower time complexity, greater generality, and scalability. A comparison to Hopcroft's algorithm demonstrates experimentally that the algorithm runs faster than traditional algorithms.
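
    For contrast with the two-phase algorithm described above, here is the classic Moore-style partition refinement it improves upon: start from the accepting/non-accepting split and repeatedly separate states whose transitions lead to different blocks. The toy DFA (with a redundant state q2 equivalent to q0) is hypothetical.

```python
# Moore-style partition refinement for DFA minimization. This is the
# classic quadratic baseline, not the paper's backward-depth algorithm.
def minimize_dfa(states, alphabet, delta, accepting):
    # Initial partition: accepting vs. non-accepting states.
    partition = {s: (s in accepting) for s in states}
    while True:
        # Two states stay together only if they are in the same block and
        # their transitions lead to the same blocks for every symbol.
        signature = {s: (partition[s],)
                        + tuple(partition[delta[s, a]] for a in alphabet)
                     for s in states}
        blocks = {sig: i for i, sig in enumerate(sorted(set(signature.values())))}
        new_partition = {s: blocks[signature[s]] for s in states}
        if len(set(new_partition.values())) == len(set(partition.values())):
            return new_partition   # no block was split: fixed point reached
        partition = new_partition

# DFA over {0, 1} accepting strings with an even number of 1s,
# plus a redundant state q2 that behaves exactly like q0.
states = {"q0", "q1", "q2"}
alphabet = ["0", "1"]
delta = {("q0", "0"): "q0", ("q0", "1"): "q1",
         ("q1", "0"): "q1", ("q1", "1"): "q2",
         ("q2", "0"): "q2", ("q2", "1"): "q1"}
accepting = {"q0", "q2"}

classes = minimize_dfa(states, alphabet, delta, accepting)
print(classes)   # q0 and q2 fall into the same equivalence class
```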

  4. COMCAN: a computer program for common cause analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burdick, G.R.; Marshall, N.H.; Wilson, J.R.

    1976-05-01

    The computer program, COMCAN, searches the fault tree minimal cut sets for shared susceptibility to various secondary events (common causes) and common links between components. In the case of common causes, a location check may also be performed by COMCAN to determine whether barriers to the common cause exist between components. The program can locate common manufacturers of components having events in the same minimal cut set. A relative ranking scheme for secondary event susceptibility is included in the program.

  5. Minimizing both dropped formulas and concepts in knowledge fusion

    NASA Astrophysics Data System (ADS)

    Grégoire, Éric

    2006-04-01

    In this paper, a new family of approaches to fuse inconsistent knowledge sources is introduced in a standard logical setting. They combine two preference criteria to arbitrate between conflicting information: the minimization of falsified formulas and the minimization of the number of the different atoms that are involved in those formulas. Although these criteria exhibit a syntactical flavor, the approaches are semantically-defined.

  6. Learning free energy landscapes using artificial neural networks.

    PubMed

    Sidky, Hythem; Whitmer, Jonathan K

    2018-03-14

    Existing adaptive bias techniques, which seek to estimate free energies and physical properties from molecular simulations, are limited by their reliance on fixed kernels or basis sets which hinder their ability to efficiently conform to varied free energy landscapes. Further, user-specified parameters are in general non-intuitive yet significantly affect the convergence rate and accuracy of the free energy estimate. Here we propose a novel method, wherein artificial neural networks (ANNs) are used to develop an adaptive biasing potential which learns free energy landscapes. We demonstrate that this method is capable of rapidly adapting to complex free energy landscapes and is not prone to boundary or oscillation problems. The method is made robust to hyperparameters and overfitting through Bayesian regularization which penalizes network weights and auto-regulates the number of effective parameters in the network. ANN sampling represents a promising innovative approach which can resolve complex free energy landscapes in less time than conventional approaches while requiring minimal user input.

  7. Implicitly causality enforced solution of multidimensional transient photon transport equation.

    PubMed

    Handapangoda, Chintha C; Premaratne, Malin

    2009-12-21

    A novel method for solving the multidimensional transient photon transport equation for laser pulse propagation in biological tissue is presented. A Laguerre expansion is used to represent the time dependency of the incident short pulse. Owing to the intrinsic causal nature of Laguerre functions, our technique automatically preserves the causality constraints of the transient signal. This expansion of the radiance in a Laguerre basis transforms the transient photon transport equation to its steady-state version. The resulting equations are solved using the discrete ordinates method with a finite volume approach. Our method therefore enables one to handle general anisotropic, inhomogeneous media using a single formulation, with an added degree of flexibility owing to the ability to invoke higher-order approximations of discrete ordinate quadrature sets. Compared with existing strategies, this method offers the advantage of representing the intensity with high accuracy, thus minimizing numerical dispersion and false propagation errors. The application of the method to one-, two- and three-dimensional geometries is provided.
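
    The Laguerre-expansion step can be sketched numerically: a causal test pulse is projected onto Laguerre polynomials using their orthogonality with weight e^{-t} on [0, inf), then reconstructed from the coefficients. The paper expands in Laguerre functions; this sketch uses the plain polynomial-weight convention, and the pulse f(t) = t e^{-t} is illustrative.

```python
import numpy as np
from numpy.polynomial import laguerre as L

# Causal test pulse to be expanded in the Laguerre basis.
def f(t):
    return t * np.exp(-t)

n_terms = 25
nodes, weights = L.laggauss(60)   # Gauss-Laguerre quadrature, weight e^{-t}

# c_n = \int_0^inf e^{-t} f(t) L_n(t) dt, evaluated by quadrature.
coeffs = np.array([
    np.sum(weights * f(nodes) * L.lagval(nodes, [0.0] * n + [1.0]))
    for n in range(n_terms)
])

# Reconstruct the pulse from its Laguerre coefficients and check the error.
t = np.linspace(0.0, 5.0, 101)
recon = L.lagval(t, coeffs)
err = np.max(np.abs(recon - f(t)))
print(err)
```

    The coefficients decay geometrically for this pulse, so a modest number of terms reconstructs the time dependence accurately, which is what makes the transformation to a steady-state problem practical.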

  8. Optimal ordering quantities for substitutable deteriorating items under joint replenishment with cost of substitution

    NASA Astrophysics Data System (ADS)

    Mishra, Vinod Kumar

    2017-09-01

    In this paper we develop an inventory model to determine the optimal ordering quantities for a set of two substitutable deteriorating items. In this inventory model the inventory level of both items is depleted by demand and deterioration, and when an item is out of stock, its demand is partially fulfilled by the other item; all unsatisfied demand is lost. Each substituted item incurs a cost of substitution, and the demand and deterioration rates are considered to be deterministic and constant. Items are ordered jointly in each ordering cycle to take advantage of joint replenishment. The problem is formulated and a solution procedure is developed to determine the optimal ordering quantities that minimize the total inventory cost. We provide an extensive numerical and sensitivity analysis to illustrate the effect of the different parameters on the model. The key observation from the numerical analysis is that there is a substantial improvement in the optimal total cost of the inventory model with substitution over the model without substitution.
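
    The cost-minimization structure can be illustrated with a much-simplified joint-replenishment model: two deteriorating items ordered on a common cycle T, with a single setup cost and EOQ-style holding-plus-deterioration costs. All parameter values and the cost function itself are illustrative; the paper's model additionally includes substitution costs and lost sales.

```python
import math

# Illustrative parameters (not from the paper).
K = 100.0                 # joint ordering cost per cycle
d = [40.0, 25.0]          # demand rates
h = [2.0, 3.0]            # holding costs
theta = [0.05, 0.08]      # deterioration rates
c = [10.0, 12.0]          # unit costs (deterioration loss = c_i * theta_i)

def total_cost(T):
    # Ordering cost per unit time + approximate holding/deterioration cost.
    carry = sum((h[i] + c[i] * theta[i]) * d[i] * T / 2.0 for i in range(2))
    return K / T + carry

# Closed-form minimizer of this convex cost (EOQ-style square-root formula).
T_star = math.sqrt(2.0 * K / sum((h[i] + c[i] * theta[i]) * d[i] for i in range(2)))
Q_star = [d[i] * T_star for i in range(2)]   # joint order quantities
print(T_star, Q_star)
```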

  9. Judicial oversight of life-ending withdrawal of assisted nutrition and hydration in disorders of consciousness in the United Kingdom: A matter of life and death

    PubMed Central

    Rady, Mohamed Y; Verheijde, Joseph L.

    2017-01-01

    Mr Justice Baker delivered the Oxford Shrieval Lecture ‘A Matter of Life and Death’ on 11 October 2016. The lecture created public controversies about who can authorise withdrawal of assisted nutrition and hydration (ANH) in disorders of consciousness (DOC). The law requires court permission in ‘best interests’ decisions before ANH withdrawal only in permanent vegetative state and minimally conscious state. Some clinicians favour abandoning the need for court approval on the basis that clinicians are already empowered to withdraw ANH in other common conditions of DOC (e.g. coma, neurological disorders, etc.) based on their best interests assessment without court oversight. We set out a rationale in support of court oversight of best interests decisions in ANH withdrawal intended to end life in any person with DOC (who will lack relevant decision-making capacity). This ensures the safety of the general public and the protection of vulnerable disabled persons in society. PMID:28368210

  10. Automatic Differentiation in Quantum Chemistry with Applications to Fully Variational Hartree-Fock.

    PubMed

    Tamayo-Mendoza, Teresa; Kreisbeck, Christoph; Lindh, Roland; Aspuru-Guzik, Alán

    2018-05-23

    Automatic differentiation (AD) is a powerful tool that allows calculating derivatives of implemented algorithms with respect to all of their parameters up to machine precision, without the need to explicitly add any additional functions. Thus, AD has great potential in quantum chemistry, where gradients are omnipresent but also difficult to obtain, and researchers typically spend a considerable amount of time finding suitable analytical forms when implementing derivatives. Here, we demonstrate that AD can be used to compute gradients with respect to any parameter throughout a complete quantum chemistry method. We present DiffiQult, a Hartree-Fock implementation, entirely differentiated with the use of AD tools. DiffiQult is a software package written in plain Python with minimal deviation from standard code which illustrates the capability of AD to save human effort and time in implementations of exact gradients in quantum chemistry. We leverage the obtained gradients to optimize the parameters of one-particle basis sets in the context of the floating Gaussian framework.
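
    The core idea of forward-mode AD can be shown in a few lines with dual numbers: each value carries its derivative, and arithmetic propagates both by the chain rule, giving machine-precision gradients without symbolic work. This is a generic illustration of the principle, not DiffiQult's actual implementation (which differentiates a full Hartree-Fock code).

```python
import math

# Minimal forward-mode automatic differentiation with dual numbers.
class Dual:
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # Product rule: (uv)' = u'v + uv'.
        return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
    __rmul__ = __mul__

def dexp(x):
    # Chain rule for the exponential: d(e^u) = e^u * du.
    return Dual(math.exp(x.val), math.exp(x.val) * x.der)

# Derivative of f(a) = a^2 * exp(a) at a = 1.5, exact to machine precision.
def f(a):
    return a * a * dexp(a)

a = Dual(1.5, 1.0)                            # seed derivative da/da = 1
y = f(a)
exact = (2 * 1.5 + 1.5**2) * math.exp(1.5)    # analytic d/da [a^2 e^a]
print(y.der, exact)
```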

  11. Cosmological perturbations in antigravity

    NASA Astrophysics Data System (ADS)

    Oltean, Marius; Brandenberger, Robert

    2014-10-01

    We compute the evolution of cosmological perturbations in a recently proposed Weyl-symmetric theory of two scalar fields with oppositely signed conformal couplings to Einstein gravity. It is motivated from the minimal conformal extension of the standard model, such that one of these scalar fields is the Higgs while the other is a new particle, the dilaton, introduced to make the Higgs mass conformally symmetric. At the background level, the theory admits novel geodesically complete cyclic cosmological solutions characterized by a brief period of repulsive gravity, or "antigravity," during each successive transition from a big crunch to a big bang. For simplicity, we consider scalar perturbations in the absence of anisotropies, with potential set to zero and without any radiation. We show that despite the necessarily wrong-signed kinetic term of the dilaton in the full action, these perturbations are neither ghostlike nor tachyonic in the limit of strongly repulsive gravity. On this basis, we argue—pending a future analysis of vector and tensor perturbations—that, with respect to perturbative stability, the cosmological solutions of this theory are viable.

  12. Rapid Radiofrequency Field Mapping In Vivo Using Single-Shot STEAM MRI

    PubMed Central

    Helms, Gunther; Finsterbusch, Jürgen; Weiskopf, Nikolaus; Dechent, Peter

    2008-01-01

    Higher field strengths entail less homogeneous RF fields. This may influence quantitative MRI and MRS. A method for rapidly mapping the RF field in the human head with minimal distortion was developed on the basis of a single-shot stimulated echo acquisition mode (STEAM) sequence. The flip angle of the second RF pulse in the STEAM preparation was set to 60° and 100° instead of 90°, inducing a flip angle-dependent signal change. A quadratic approximation of this trigonometric signal dependence together with a calibration accounting for slice excitation-related bias allowed for directly determining the RF field from the two measurements only. RF maps down to the level of the medulla could be obtained in less than 1 min and registered to anatomical volumes by means of the T2-weighted STEAM images. Flip angles between 75% and 125% of the nominal value were measured in line with other methods. Magn Reson Med 60:739–743, 2008. © 2008 Wiley-Liss, Inc. PMID:18727090

  13. Using a voice to put a name to a face: the psycholinguistics of proper name comprehension.

    PubMed

    Barr, Dale J; Jackson, Laura; Phillips, Isobel

    2014-02-01

    We propose that hearing a proper name (e.g., Kevin) in a particular voice serves as a compound memory cue that directly activates representations of a mutually known target person, often permitting reference resolution without any complex computation of shared knowledge. In a referential communication study, pairs of friends played a communication game, in which we monitored the eyes of one friend (the addressee) while he or she sought to identify the target person, in a set of four photos, on the basis of a name spoken aloud. When the name was spoken by a friend, addressees rapidly identified the target person, and this facilitation was independent of whether the friend was articulating a message he or she had designed versus one from a third party with whom the target person was not shared. Our findings suggest that the comprehension system takes advantage of regularities in the environment to minimize effortful computation about who knows what.

  14. On the adequacy of current empirical evaluations of formal models of categorization.

    PubMed

    Wills, Andy J; Pothos, Emmanuel M

    2012-01-01

    Categorization is one of the fundamental building blocks of cognition, and the study of categorization is notable for the extent to which formal modeling has been a central and influential component of research. However, the field has seen a proliferation of noncomplementary models with little consensus on the relative adequacy of these accounts. Progress in assessing the relative adequacy of formal categorization models has, to date, been limited because (a) formal model comparisons are narrow in the number of models and phenomena considered and (b) models do not often clearly define their explanatory scope. Progress is further hampered by the practice of fitting models with arbitrarily variable parameters to each data set independently. Reviewing examples of good practice in the literature, we conclude that model comparisons are most fruitful when relative adequacy is assessed by comparing well-defined models on the basis of the number and proportion of irreversible, ordinal, penetrable successes (principles of minimal flexibility, breadth, good-enough precision, maximal simplicity, and psychological focus).

  15. AUCTION MECHANISMS FOR IMPLEMENTING TRADABLE NETWORK PERMIT MARKETS

    NASA Astrophysics Data System (ADS)

    Wada, Kentaro; Akamatsu, Takashi

    This paper proposes a new auction mechanism for implementing tradable network permit markets. Assuming that each user makes a trip from an origin to a destination along a path in a specific time period, we design an auction mechanism that enables each user to purchase a bundle of permits corresponding to the set of links in the user's preferred path. The objective of the proposed mechanism is to achieve a socially optimal state with minimal revelation of users' private information. In order to achieve this, the mechanism employs an evolutionary approach with an auction phase and a path capacity adjustment phase, which are repeated on a day-to-day basis. We prove that the proposed mechanism has the following desirable properties: (1) truthful bidding is the dominant strategy for each user, and (2) the mechanism converges to an approximate socially optimal state in the sense that the achieved value of the social surplus reaches its maximum value when the number of users is large.

  16. The relative entropy is fundamental to adaptive resolution simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kreis, Karsten; Graduate School Materials Science in Mainz, Staudingerweg 9, 55128 Mainz; Potestio, Raffaello, E-mail: potestio@mpip-mainz.mpg.de

    Adaptive resolution techniques are powerful methods for the efficient simulation of soft matter systems in which they simultaneously employ atomistic and coarse-grained (CG) force fields. In such simulations, two regions with different resolutions are coupled with each other via a hybrid transition region, and particles change their description on the fly when crossing this boundary. Here we show that the relative entropy, which provides a fundamental basis for many approaches in systematic coarse-graining, is also an effective instrument for the understanding of adaptive resolution simulation methodologies. We demonstrate that the use of coarse-grained potentials which minimize the relative entropy with respect to the atomistic system can help achieve a smoother transition between the different regions within the adaptive setup. Furthermore, we derive a quantitative relation between the width of the hybrid region and the seamlessness of the coupling. Our results not only shed light on the what and how of adaptive resolution techniques but will also help in setting up such simulations in an optimal manner.
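
    The role of the relative entropy can be made concrete with a toy histogram example: S_rel = sum p ln(p/q) vanishes when a coarse-grained model q reproduces the reference distribution p, and grows as the match degrades. The Gaussian distributions below are illustrative stand-ins for atomistic and CG probability distributions, not data from the paper.

```python
import numpy as np

def relative_entropy(p, q):
    # Kullback-Leibler divergence S_rel = sum_i p_i ln(p_i / q_i),
    # for normalized histograms p and q.
    mask = p > 0
    return np.sum(p[mask] * np.log(p[mask] / q[mask]))

x = np.linspace(-5.0, 5.0, 201)

def gaussian(x, mu, sigma):
    g = np.exp(-((x - mu) ** 2) / (2.0 * sigma**2))
    return g / g.sum()   # normalize to a discrete probability distribution

p_atomistic = gaussian(x, 0.0, 1.0)   # reference "atomistic" distribution
q_good = gaussian(x, 0.1, 1.1)        # CG model close to the reference
q_bad = gaussian(x, 1.5, 2.0)         # poorly matched CG model

# The better CG model has the smaller relative entropy.
print(relative_entropy(p_atomistic, q_good), relative_entropy(p_atomistic, q_bad))
```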

  17. Learning free energy landscapes using artificial neural networks

    NASA Astrophysics Data System (ADS)

    Sidky, Hythem; Whitmer, Jonathan K.

    2018-03-01

    Existing adaptive bias techniques, which seek to estimate free energies and physical properties from molecular simulations, are limited by their reliance on fixed kernels or basis sets which hinder their ability to efficiently conform to varied free energy landscapes. Further, user-specified parameters are in general non-intuitive yet significantly affect the convergence rate and accuracy of the free energy estimate. Here we propose a novel method, wherein artificial neural networks (ANNs) are used to develop an adaptive biasing potential which learns free energy landscapes. We demonstrate that this method is capable of rapidly adapting to complex free energy landscapes and is not prone to boundary or oscillation problems. The method is made robust to hyperparameters and overfitting through Bayesian regularization which penalizes network weights and auto-regulates the number of effective parameters in the network. ANN sampling represents a promising innovative approach which can resolve complex free energy landscapes in less time than conventional approaches while requiring minimal user input.
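    The core idea above, representing a free energy surface with a small network whose weights are penalized so the effective number of parameters stays controlled, can be sketched with plain NumPy. This is not the authors' implementation; the landscape, network size, and the use of a simple L2 weight penalty as a stand-in for Bayesian regularization are all assumptions for illustration:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical 1-D free energy landscape to learn (a double well).
    x = np.linspace(-2.0, 2.0, 200)
    f_target = (x**2 - 1.0) ** 2

    # One-hidden-layer network: F(x) = w2 @ tanh(w1 x + b1) + b2.
    n_hidden = 16
    w1 = rng.normal(scale=1.0, size=(n_hidden, 1))
    b1 = np.zeros((n_hidden, 1))
    w2 = rng.normal(scale=0.1, size=(1, n_hidden))
    b2 = np.zeros((1, 1))

    def forward(xv):
        h = np.tanh(w1 @ xv[None, :] + b1)          # hidden activations, (n_hidden, N)
        return (w2 @ h + b2).ravel(), h

    lam, lr = 1e-4, 0.05                            # weight-penalty strength, step size

    def loss():
        pred, _ = forward(x)
        mse = np.mean((pred - f_target) ** 2)
        # L2 penalty on weights: a simple stand-in for Bayesian regularization.
        return mse + lam * (np.sum(w1**2) + np.sum(w2**2))

    losses = []
    for step in range(2000):                        # plain gradient descent
        pred, h = forward(x)
        err = 2.0 * (pred - f_target) / x.size      # dMSE/dpred
        grad_w2 = err[None, :] @ h.T + 2 * lam * w2
        grad_b2 = np.array([[err.sum()]])
        back = (w2.T @ err[None, :]) * (1 - h**2)   # backprop through tanh
        grad_w1 = back @ x[:, None] + 2 * lam * w1
        grad_b1 = back.sum(axis=1, keepdims=True)
        w1 -= lr * grad_w1; b1 -= lr * grad_b1
        w2 -= lr * grad_w2; b2 -= lr * grad_b2
        losses.append(loss())
    ```

    In the actual sampling method the target is not known in advance; the network is instead refit on the fly to the running free energy estimate, and its output enters the simulation as a biasing potential.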

  18. On streak spacing in wall-bounded turbulent flows

    NASA Technical Reports Server (NTRS)

    Hamilton, James M.; Kim, John J.

    1993-01-01

    The present study is a continuation of the examination by Hamilton et al. of the regeneration mechanisms of near-wall turbulence and an attempt to investigate the conjecture of Waleffe et al. The basis of this study is an extension of the 'minimal channel' approach of Jimenez and Moin that emphasizes the near-wall region and reduces the complexity of the turbulent flow by considering a plane Couette flow of near-minimum Reynolds number and streamwise and spanwise extent. Reduction of the flow Reynolds number to the minimum value which will allow turbulence to be sustained has the effect of reducing the ratio of the largest scales to the smallest scales or, equivalently, of causing the near-wall region to fill more of the area between the channel walls. A plane Couette flow was chosen for study since this type of flow has a mean shear of a single sign, and at low Reynolds numbers, the two wall regions are found to share a single set of structures.

  19. High-quality unsaturated zone hydraulic property data for hydrologic applications

    USGS Publications Warehouse

    Perkins, Kimberlie; Nimmo, John R.

    2009-01-01

    In hydrologic studies, especially those using dynamic unsaturated zone moisture modeling, calculations based on property transfer models informed by hydraulic property databases are often used in lieu of measured data from the site of interest. Reliance on database-informed predicted values has become increasingly common with the use of neural networks. High-quality data are needed for databases used in this way and for theoretical and property transfer model development and testing. Hydraulic properties predicted on the basis of existing databases may be adequate in some applications but not others. An obvious problem occurs when the available database has few or no data for samples that are closely related to the medium of interest. The data set presented in this paper includes saturated and unsaturated hydraulic conductivity, water retention, particle-size distributions, and bulk properties. All samples are minimally disturbed, all measurements were performed using the same state-of-the-art techniques, and the environments represented are diverse.

  20. Towards Resilient Critical Infrastructures: Application of Type-2 Fuzzy Logic in Embedded Network Security Cyber Sensor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ondrej Linda; Todd Vollmer; Jim Alves-Foss

    2011-08-01

    Resiliency and cyber security of modern critical infrastructures are becoming increasingly important with the growing number of threats in the cyber-environment. This paper proposes an extension to a previously developed fuzzy logic based anomaly detection network security cyber sensor via incorporating Type-2 Fuzzy Logic (T2 FL). In general, fuzzy logic provides a framework for system modeling in linguistic form capable of coping with imprecise and vague meanings of words. T2 FL is an extension of Type-1 FL which proved to be successful in modeling and minimizing the effects of various kinds of dynamic uncertainties. In this paper, T2 FL provides a basis for robust anomaly detection and cyber security state awareness. In addition, the proposed algorithm was specifically developed to comply with the constrained computational requirements of low-cost embedded network security cyber sensors. The performance of the system was evaluated on a set of network data recorded from an experimental cyber-security test-bed.

  1. The design and implementation of a parallel unstructured Euler solver using software primitives

    NASA Technical Reports Server (NTRS)

    Das, R.; Mavriplis, D. J.; Saltz, J.; Gupta, S.; Ponnusamy, R.

    1992-01-01

    This paper is concerned with the implementation of a three-dimensional unstructured grid Euler-solver on massively parallel distributed-memory computer architectures. The goal is to minimize solution time by achieving high computational rates with a numerically efficient algorithm. An unstructured multigrid algorithm with an edge-based data structure has been adopted, and a number of optimizations have been devised and implemented in order to accelerate the parallel communication rates. The implementation is carried out by creating a set of software tools, which provide an interface between the parallelization issues and the sequential code, while providing a basis for future automatic run-time compilation support. Large practical unstructured grid problems are solved on the Intel iPSC/860 hypercube and Intel Touchstone Delta machine. The quantitative effects of the various optimizations are demonstrated, and we show that the combined effect of these optimizations leads to roughly a factor of three performance improvement. The overall solution efficiency is compared with that obtained on the CRAY-YMP vector supercomputer.

  2. A Mars environmental survey (MESUR) - Feasibility of a low cost global approach

    NASA Technical Reports Server (NTRS)

    Hubbard, G. S.; Wercinski, Paul F.; Sarver, George L.; Hanel, Robert P.; Ramos, Ruben

    1991-01-01

    In situ measurements of Mars' surface and atmosphere are the objectives of a novel network mission concept called the Mars Environmental SURvey (MESUR). As envisioned, the MESUR mission will emplace a pole-to-pole global distribution of 16 landers on the Martian surface over three launch opportunities using medium-lift (Delta-class) launch vehicles. The basic concept is to deploy small free-flying probes which would directly enter the Martian atmosphere, measure the upper atmospheric structure, image the local terrain before landing, and survive landing to perform meteorology, seismology, surface imaging, and soil chemistry measurements. Data will be returned via dedicated relay orbiter or direct-to-earth transmission. The mission philosophy is to: (1) 'grow' a network over a period of years using a series of launch opportunities; (2) develop a level-of-effort which is flexible and responsive to a broad set of objectives; (3) focus on Mars science while providing a solid basis for future human presence; and (4) minimize overall project cost and complexity wherever possible.

  3. Designer antibacterial peptides kill fluoroquinolone-resistant clinical isolates.

    PubMed

    Otvos, Laszlo; Wade, John D; Lin, Feng; Condie, Barry A; Hanrieder, Joerg; Hoffmann, Ralf

    2005-08-11

    A significant number of Escherichia coli and Klebsiella pneumoniae bacterial strains in urinary tract infections are resistant to fluoroquinolones. Peptide antibiotics are viable alternatives, although these are usually either toxic or insufficiently active. By applying multiple alignment and sequence optimization steps, we designed multifunctional proline-rich antibacterial peptides that maintained their DnaK-binding ability in bacteria and low toxicity in eukaryotes, but entered bacterial cells much more avidly than earlier peptide derivatives. The resulting chimeric and statistical analogues exhibited 8-32 microg/mL minimal inhibitory concentration efficacies in Mueller-Hinton broth against a series of clinical pathogens. Significantly, the best peptide, compound 5, A3-APO, retained full antibacterial activity in the presence of mouse serum. Across a set of eight fluoroquinolone-resistant clinical isolates, peptide 5 was 4 times more potent than ciprofloxacin. On the basis of the in vitro efficacy, toxicity, and pharmacokinetics data, we estimate that peptide 5 will be suitable for treating infections in the 3-5 mg/kg dose range.

  4. The relative entropy is fundamental to adaptive resolution simulations

    NASA Astrophysics Data System (ADS)

    Kreis, Karsten; Potestio, Raffaello

    2016-07-01

    Adaptive resolution techniques are powerful methods for the efficient simulation of soft matter systems in that they simultaneously employ atomistic and coarse-grained (CG) force fields. In such simulations, two regions with different resolutions are coupled with each other via a hybrid transition region, and particles change their description on the fly when crossing this boundary. Here we show that the relative entropy, which provides a fundamental basis for many approaches in systematic coarse-graining, is also an effective instrument for the understanding of adaptive resolution simulation methodologies. We demonstrate that the use of coarse-grained potentials which minimize the relative entropy with respect to the atomistic system can help achieve a smoother transition between the different regions within the adaptive setup. Furthermore, we derive a quantitative relation between the width of the hybrid region and the seamlessness of the coupling. Our results not only shed light on the what and how of adaptive resolution techniques but will also help in setting up such simulations in an optimal manner.
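    The relative entropy referred to in the two abstracts above is, in its discrete form, the Kullback-Leibler divergence between the reference (atomistic) distribution and the coarse-grained model's distribution. A minimal numeric sketch (the state populations below are hypothetical, chosen only to illustrate that a CG model closer to the reference has a smaller relative entropy):

    ```python
    import math

    def relative_entropy(p, q):
        """Discrete Kullback-Leibler divergence: sum_i p_i * ln(p_i / q_i)."""
        assert abs(sum(p) - 1.0) < 1e-12 and abs(sum(q) - 1.0) < 1e-12
        return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0.0)

    # Hypothetical state populations of a fine-grained reference and two CG models.
    p = [0.5, 0.3, 0.2]
    q_good = [0.45, 0.35, 0.2]   # close to p  -> small relative entropy
    q_poor = [0.1, 0.1, 0.8]     # far from p  -> large relative entropy
    ```

    The divergence is non-negative and vanishes only when the two distributions coincide, which is what makes it a natural objective to minimize when matching a CG potential to an atomistic system.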

  5. Resilience Metrics for the Electric Power System: A Performance-Based Approach.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vugrin, Eric D.; Castillo, Andrea R; Silva-Monroy, Cesar Augusto

    Grid resilience is a concept related to a power system's ability to continue operating and delivering power even in the event that low probability, high-consequence disruptions such as hurricanes, earthquakes, and cyber-attacks occur. Grid resilience objectives focus on managing and, ideally, minimizing potential consequences that occur as a result of these disruptions. Currently, no formal grid resilience definitions, metrics, or analysis methods have been universally accepted. This document describes an effort to develop and describe grid resilience metrics and analysis methods. The metrics and methods described herein extend upon the Resilience Analysis Process (RAP) developed by Watson et al. for the 2015 Quadrennial Energy Review. The extension allows for both outputs from system models and for historical data to serve as the basis for creating grid resilience metrics and informing grid resilience planning and response decision-making. Demonstration of the metrics and methods is shown through a set of illustrative use cases.

  6. The Revolution Continues: Newly Discovered Systems Expand the CRISPR-Cas Toolkit.

    PubMed

    Murugan, Karthik; Babu, Kesavan; Sundaresan, Ramya; Rajan, Rakhi; Sashital, Dipali G

    2017-10-05

    CRISPR-Cas systems defend prokaryotes against bacteriophages and mobile genetic elements and serve as the basis for revolutionary tools for genetic engineering. Class 2 CRISPR-Cas systems use single Cas endonucleases paired with guide RNAs to cleave complementary nucleic acid targets, enabling programmable sequence-specific targeting with minimal machinery. Recent discoveries of previously unidentified CRISPR-Cas systems have uncovered a deep reservoir of potential biotechnological tools beyond the well-characterized Type II Cas9 systems. Here we review the current mechanistic understanding of newly discovered single-protein Cas endonucleases. Comparison of these Cas effectors reveals substantial mechanistic diversity, underscoring the phylogenetic divergence of related CRISPR-Cas systems. This diversity has enabled further expansion of CRISPR-Cas biotechnological toolkits, with wide-ranging applications from genome editing to diagnostic tools based on various Cas endonuclease activities. These advances highlight the exciting prospects for future tools based on the continually expanding set of CRISPR-Cas systems. Copyright © 2017 Elsevier Inc. All rights reserved.

  7. Trends in computer applications in science assessment

    NASA Astrophysics Data System (ADS)

    Kumar, David D.; Helgeson, Stanley L.

    1995-03-01

    Seven computer applications to science assessment are reviewed. Conventional test administration includes record keeping, grading, and managing test banks. Multiple-choice testing involves forced selection of an answer from a menu, whereas constructed-response testing involves options for students to present their answers within a set standard deviation. Adaptive testing attempts to individualize the test to minimize the number of items and time needed to assess a student's knowledge. Figural response testing assesses science proficiency in pictorial or graphic mode and requires the student to construct a mental image rather than select a response from a multiple choice menu. Simulations have been found useful for performance assessment on a large-scale basis in part because they make it possible to independently specify different aspects of a real experiment. An emerging approach to performance assessment is solution pathway analysis, which permits the analysis of the steps a student takes in solving a problem. Virtually all computer-based testing systems improve the quality and efficiency of record keeping and data analysis.

  8. Automatic Differentiation in Quantum Chemistry with Applications to Fully Variational Hartree–Fock

    PubMed Central

    2018-01-01

    Automatic differentiation (AD) is a powerful tool that allows calculating derivatives of implemented algorithms with respect to all of their parameters up to machine precision, without the need to explicitly add any additional functions. Thus, AD has great potential in quantum chemistry, where gradients are omnipresent but also difficult to obtain, and researchers typically spend a considerable amount of time finding suitable analytical forms when implementing derivatives. Here, we demonstrate that AD can be used to compute gradients with respect to any parameter throughout a complete quantum chemistry method. We present DiffiQult, a Hartree–Fock implementation, entirely differentiated with the use of AD tools. DiffiQult is a software package written in plain Python with minimal deviation from standard code which illustrates the capability of AD to save human effort and time in implementations of exact gradients in quantum chemistry. We leverage the obtained gradients to optimize the parameters of one-particle basis sets in the context of the floating Gaussian framework.

  9. 2016 American College of Rheumatology/European League Against Rheumatism Criteria for Minimal, Moderate, and Major Clinical Response in Adult Dermatomyositis and Polymyositis: An International Myositis Assessment and Clinical Studies Group/Paediatric Rheumatology International Trials Organisation Collaborative Initiative.

    PubMed

    Aggarwal, Rohit; Rider, Lisa G; Ruperto, Nicolino; Bayat, Nastaran; Erman, Brian; Feldman, Brian M; Oddis, Chester V; Amato, Anthony A; Chinoy, Hector; Cooper, Robert G; Dastmalchi, Maryam; Fiorentino, David; Isenberg, David; Katz, James D; Mammen, Andrew; de Visser, Marianne; Ytterberg, Steven R; Lundberg, Ingrid E; Chung, Lorinda; Danko, Katalin; García-De la Torre, Ignacio; Song, Yeong Wook; Villa, Luca; Rinaldi, Mariangela; Rockette, Howard; Lachenbruch, Peter A; Miller, Frederick W; Vencovsky, Jiri

    2017-05-01

    To develop response criteria for adult dermatomyositis (DM) and polymyositis (PM). Expert surveys, logistic regression, and conjoint analysis were used to develop 287 definitions using core set measures. Myositis experts rated greater improvement among multiple pairwise scenarios in conjoint analysis surveys, where different levels of improvement in 2 core set measures were presented. The PAPRIKA (Potentially All Pairwise Rankings of All Possible Alternatives) method determined the relative weights of core set measures and conjoint analysis definitions. The performance characteristics of the definitions were evaluated on patient profiles using expert consensus (gold standard) and were validated using data from a clinical trial. The nominal group technique was used to reach consensus. Consensus was reached for a conjoint analysis-based continuous model using absolute percent change in core set measures (physician, patient, and extramuscular global activity, muscle strength, Health Assessment Questionnaire, and muscle enzyme levels). A total improvement score (range 0-100), determined by summing scores for each core set measure, was based on improvement in and relative weight of each core set measure. Thresholds for minimal, moderate, and major improvement were ≥20, ≥40, and ≥60 points in the total improvement score. The same criteria were chosen for juvenile DM, with different improvement thresholds. Sensitivity and specificity in DM/PM patient cohorts were 85% and 92%, 90% and 96%, and 92% and 98% for minimal, moderate, and major improvement, respectively. Definitions were validated in the clinical trial analysis for differentiating the physician rating of improvement (P < 0.001). The response criteria for adult DM/PM consisted of the conjoint analysis model based on absolute percent change in 6 core set measures, with thresholds for minimal, moderate, and major improvement. © 2017, American College of Rheumatology.
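    The thresholds reported above (total improvement score of at least 20, 40, and 60 points for minimal, moderate, and major improvement) translate directly into a small classifier. The per-measure point values below are hypothetical; in the actual criteria each core set measure contributes points according to its absolute percent change and its conjoint-analysis weight:

    ```python
    def classify_response(total_improvement_score):
        """Map a 0-100 total improvement score to the published thresholds."""
        if total_improvement_score >= 60:
            return "major"
        if total_improvement_score >= 40:
            return "moderate"
        if total_improvement_score >= 20:
            return "minimal"
        return "no response"

    # Hypothetical per-measure improvement points for one patient (the real
    # criteria derive these from absolute percent change and measure weight).
    core_set_points = {
        "physician global": 10, "patient global": 7, "extramuscular global": 5,
        "muscle strength": 12, "HAQ": 6, "muscle enzymes": 4,
    }
    total = sum(core_set_points.values())   # 44 points -> "moderate"
    ```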

  10. Positivity of the universal pairing in 3 dimensions

    NASA Astrophysics Data System (ADS)

    Calegari, Danny; Freedman, Michael H.; Walker, Kevin

    2010-01-01

    Associated to a closed, oriented surface S is the complex vector space with basis the set of all compact, oriented 3-manifolds which it bounds. Gluing along S defines a Hermitian pairing on this space with values in the complex vector space with basis all closed, oriented 3-manifolds. The main result in this paper is that this pairing is positive, i.e. that the result of pairing a nonzero vector with itself is nonzero. This has bearing on the question of what kinds of topological information can be extracted in principle from unitary (2+1)-dimensional TQFTs. The proof involves the construction of a suitable complexity function c on all closed 3-manifolds, satisfying a gluing axiom which we call the topological Cauchy-Schwarz inequality, namely that c(AB̄) ≤ max(c(AĀ), c(BB̄)) for all A, B which bound S, with equality if and only if A = B. The complexity function c involves input from many aspects of 3-manifold topology, and in the process of establishing its key properties we obtain a number of results of independent interest. For example, we show that when two finite-volume hyperbolic 3-manifolds are glued along an incompressible acylindrical surface, the resulting hyperbolic 3-manifold has minimal volume only when the gluing can be done along a totally geodesic surface; this generalizes a similar theorem for closed hyperbolic 3-manifolds due to Agol-Storm-Thurston.

  11. Multisensor fusion for 3-D defect characterization using wavelet basis function neural networks

    NASA Astrophysics Data System (ADS)

    Lim, Jaein; Udpa, Satish S.; Udpa, Lalita; Afzal, Muhammad

    2001-04-01

    The primary objective of multi-sensor data fusion, which offers both quantitative and qualitative benefits, is to draw inferences that may not be feasible with data from a single sensor alone. In this paper, data from two sets of sensors are fused to estimate the defect profile from magnetic flux leakage (MFL) inspection data. The two sensors measure the axial and circumferential components of the MFL. Data is fused at the signal level. If the flux is oriented axially, the samples of the axial signal are measured along a direction parallel to the flaw, while the circumferential signal is measured in a direction that is perpendicular to the flaw. The two signals are combined as the real and imaginary components of a complex valued signal. Signals from an array of sensors are arranged in contiguous rows to obtain a complex valued image. A boundary extraction algorithm is used to extract the defect areas in the image. Signals from the defect regions are then processed to minimize noise and the effects of lift-off. Finally, a wavelet basis function (WBF) neural network is employed to map the complex valued image appropriately to obtain the geometrical profile of the defect. The feasibility of the approach was evaluated using the data obtained from the MFL inspection of natural gas transmission pipelines. Results show the effectiveness of the approach.
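    The signal-level fusion step described above, combining the two MFL components as the real and imaginary parts of one complex-valued image, is simple to sketch. The array dimensions and random signals below are placeholders, not the paper's data:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical MFL scan: each sensor row yields an axial and a
    # circumferential signal sampled at the same positions.
    n_sensors, n_samples = 8, 64
    axial = rng.normal(size=(n_sensors, n_samples))
    circumferential = rng.normal(size=(n_sensors, n_samples))

    # Signal-level fusion: the two components become the real and imaginary
    # parts of a single complex-valued image.
    fused = axial + 1j * circumferential

    magnitude = np.abs(fused)   # combined signal strength at each pixel
    phase = np.angle(fused)     # relative balance of the two components
    ```

    The complex representation keeps both components in one array, so downstream steps (boundary extraction, the WBF network) can operate on a single image instead of two separate ones.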

  12. Geolocalisation of athletes for out-of-competition drug testing: ethical considerations. Position statement by the WADA Ethics Panel

    PubMed Central

    Caulfield, Timothy; Estivill, Xavier; Loland, Sigmund; McNamee, Michael; Knoppers, Bartha Maria

    2018-01-01

    Through the widespread availability of location-identifying devices, geolocalisation could potentially be used to place athletes during out-of-competition testing. In light of this debate, the WADA Ethics Panel formulated the following questions: (1) should WADA and/or other sponsors consider funding such geolocalisation research projects?, (2) if successful, could they be proposed to athletes as a complementary device to Anti-Doping Administration and Management System to help geolocalisation and reduce the risk of missed tests? and (3) should such devices be offered on a voluntary basis, or is it conceivable that they would be made mandatory for all athletes in registered testing pools? In this position paper, the WADA Ethics Panel concludes that the use of geolocalisation could be useful in a research setting with the goal of understanding associations between genotype, phenotype and environment; however, it recognises that the use of geolocalisation as part of or as replacement of whereabouts rules is replete with ethical concerns. While benefits remain largely hypothetical and minimal, the potential invasion of privacy and the data security threats are real. Considering the impact on privacy, data security issues, the societal ramifications of offering such services and various pragmatic considerations, the WADA Ethics Panel concludes that at this time, the use of geolocalisation should neither be mandated as a tool for disclosing whereabouts nor implemented on a voluntary basis. PMID:29500253

  13. Light focusing through a multiple scattering medium: ab initio computer simulation

    NASA Astrophysics Data System (ADS)

    Danko, Oleksandr; Danko, Volodymyr; Kovalenko, Andrey

    2018-01-01

    The present study considers ab initio computer simulation of the light focusing through a complex scattering medium. The focusing is performed by shaping the incident light beam in order to obtain a small focused spot on the opposite side of the scattering layer. MSTM software (Auburn University) is used to simulate the propagation of an arbitrary monochromatic Gaussian beam and obtain 2D distribution of the optical field in the selected plane of the investigated volume. Based on the set of incident and scattered fields, the pair of right and left eigen bases and corresponding singular values were calculated. The pair of right and left eigen modes together with the corresponding singular value constitute the transmittance eigen channel of the disordered media. Thus, the scattering process is described in three steps: 1) initial field decomposition in the right eigen basis; 2) scaling of decomposition coefficients for the corresponding singular values; 3) assembling of the scattered field as the composition of the weighted left eigen modes. Basis fields are represented as a linear combination of the original Gaussian beams and scattered fields. It was demonstrated that 60 independent control channels allow focusing of the light into a spot with a minimal radius of approximately 0.4 μm at half maximum. The intensity enhancement in the focal plane was 68, which coincided with the theoretical prediction.
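    The three-step eigen-channel description above is exactly a singular value decomposition of the medium's transmission matrix. A minimal sketch with a random matrix standing in for the simulated medium (the matrix and incident field below are placeholders):

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Hypothetical complex transmission matrix of the scattering medium,
    # mapping incident-field coefficients to scattered-field coefficients.
    n = 60
    T = (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))) / np.sqrt(n)

    # Right/left eigen modes and singular values: T = U diag(s) Vh.
    U, s, Vh = np.linalg.svd(T)

    e_in = rng.normal(size=n) + 1j * rng.normal(size=n)

    # The three steps from the abstract:
    c = Vh @ e_in           # 1) decompose the incident field in the right basis
    c_scaled = s * c        # 2) scale coefficients by the singular values
    e_out = U @ c_scaled    # 3) assemble the scattered field from left modes

    assert np.allclose(e_out, T @ e_in)
    ```

    The final assertion confirms that the three-step channel picture reproduces direct propagation through T; focusing then amounts to choosing e_in so that e_out concentrates on the target spot.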

  14. The Cooperative Infiltration of Science Education.

    ERIC Educational Resources Information Center

    Beder, Sharon

    1998-01-01

    Argues that educational materials produced by corporations and industry associations tend to give a corporate view of environmental problems, often casting doubt on the scientific basis for environmental regulation and promoting superficial solutions that have minimal impact on their operations. Contains 30 references. (Author/DDR)

  15. UV TREATMENT FOR CONTROL OF AEROMONAS (RM.C.M.6)

    EPA Science Inventory

    The data and related interpretations that will be developed in this research will form the scientific basis for analysis, design, and regulation of polychromatic UV disinfection systems. At present, only minimal data regarding the wavelength-specific nature of microbial dose-resp...

  16. Real-time combustion monitoring of PCDD/F indicators by REMPI-TOFMS

    EPA Science Inventory

    Analyses for polychlorinated dibenzodioxin and dibenzofuran (PCDD/F) emissions typically require a 4 h extractive sample taken on an annual or less frequent basis. This results in a potentially minimally representative monitoring scheme. More recently, methods for continual sampl...

  17. 40 CFR 60.4805 - What is a siting analysis?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... siting analysis must consider air pollution control alternatives that minimize, on a site-specific basis, to the maximum extent practicable, potential risks to public health or the environment, including... Section 60.4805 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS...

  18. 40 CFR 60.4805 - What is a siting analysis?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... siting analysis must consider air pollution control alternatives that minimize, on a site-specific basis, to the maximum extent practicable, potential risks to public health or the environment, including... Section 60.4805 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS...

  19. 40 CFR 60.4805 - What is a siting analysis?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... siting analysis must consider air pollution control alternatives that minimize, on a site-specific basis, to the maximum extent practicable, potential risks to public health or the environment, including... Section 60.4805 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS...

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dirk Gombert; Jay Roach

    The U. S. Department of Energy (DOE) Global Nuclear Energy Partnership (GNEP) was announced in 2006. As currently envisioned, GNEP will be the basis for growth of nuclear energy worldwide, using a closed proliferation-resistant fuel cycle. The Integrated Waste Management Strategy (IWMS) is designed to ensure that all wastes generated by fuel fabrication and recycling will have a routine disposition path, making the most of feedback to fuel and recycling operations to eliminate or minimize byproducts and wastes. If waste must be generated, processes will be designed with waste treatment in mind to reduce the use of reagents that complicate stabilization and to minimize volume. The IWMS will address three distinct levels of technology investigation and systems analyses and will provide a cogent path from (1) research and development (R&D) and engineering scale demonstration (Level I); to (2) full scale domestic deployment (Level II); and finally to (3) establishing an integrated global nuclear energy infrastructure (Level III). The near-term focus of GNEP is on achieving a basis for large-scale commercial deployment (Level II), including the R&D and engineering scale activities in Level I that are necessary to support such an accomplishment. Throughout these levels is the need for innovative thinking to simplify, including regulations, separations, and waste forms, to minimize the burden of safe disposition of wastes on the fuel cycle.

  1. Radial Symmetry of p-Harmonic Minimizers

    NASA Astrophysics Data System (ADS)

    Koski, Aleksis; Onninen, Jani

    2018-03-01

    "It is still not known if the radial cavitating minimizers obtained by Ball (Philos Trans R Soc Lond A 306:557-611, 1982) (and subsequently by many others) are global minimizers of any physically reasonable nonlinearly elastic energy". This quotation is from Sivaloganathan and Spector (Ann Inst Henri Poincaré Anal Non Linéaire 25(1):201-213, 2008) and seems to be still accurate. The model case of the p-harmonic energy is considered here. We prove that the planar radial minimizers are indeed the global minimizers provided we prescribe the admissible deformations on the boundary. In the traction free setting, however, even the identity map need not be a global minimizer.
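    For reference, the p-harmonic energy of a mapping f on a planar domain Ω is, assuming the standard definition (the abstract does not spell it out):

    ```latex
    \mathcal{E}_p[f] \;=\; \int_{\Omega} \lvert Df(x) \rvert^{p} \, \mathrm{d}x , \qquad p \ge 2,
    ```

    where Df is the differential of f; the case p = 2 recovers the classical Dirichlet energy, so the p-harmonic energy is a natural model case between harmonic mappings and fully nonlinear elastic energies.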

  2. Asymptotic behavior and interpretation of virtual states: The effects of confinement and of basis sets

    NASA Astrophysics Data System (ADS)

    Boffi, Nicholas M.; Jain, Manish; Natan, Amir

    2016-02-01

    A real-space high order finite difference method is used to analyze the effect of spherical domain size on the Hartree-Fock (and density functional theory) virtual eigenstates. We show the domain size dependence of both positive and negative virtual eigenvalues of the Hartree-Fock equations for small molecules. We demonstrate that the positive states behave like a particle in a spherical well and show how they approach zero. For the negative eigenstates, we show that large domains are needed to get the correct eigenvalues. We compare our results to those of Gaussian basis sets and draw some conclusions for real-space, basis-set, and plane-wave calculations.
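    The particle-in-a-spherical-well behavior of the positive states can be made concrete for the simplest (l = 0) case, where the infinite-well energies in atomic units are E_n = n²π²/(2R²) and therefore fall toward zero as the domain radius R grows. The radii below are illustrative, not the paper's:

    ```python
    import math

    def well_energies(R, n_max=3):
        """l = 0 energies of an infinite spherical well of radius R (atomic units).

        The eigenfunctions are sin(n*pi*r/R)/r, giving E_n = n^2 * pi^2 / (2 R^2).
        """
        return [n**2 * math.pi**2 / (2.0 * R**2) for n in range(1, n_max + 1)]

    E_small = well_energies(10.0)   # hypothetical small domain radius
    E_large = well_energies(20.0)   # doubling R lowers every level by 4x
    ```

    This 1/R² scaling is the sense in which the positive virtual eigenvalues "approach zero" with increasing domain size.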

  3. Exact solution for the hydrogen atom confined by a dielectric continuum and the correct basis set to study many-electron atoms under similar confinements

    NASA Astrophysics Data System (ADS)

    Martínez-Sánchez, Michael-Adán; Aquino, Norberto; Vargas, Rubicelia; Garza, Jorge

    2017-12-01

    The Schrödinger equation associated to the hydrogen atom confined by a dielectric continuum is solved exactly and suggests the appropriate basis set to be used when an atom is immersed in a dielectric continuum. Exact results show that this kind of confinement spreads the electron density, which is confirmed through the Shannon entropy. The basis set suggested by the exact results is similar to Slater-type orbitals, and it was applied to two-electron atoms, where the H- ion ejects one electron at moderate confinements, for distances much larger than those commonly used to generate cavities in solvent models.

  4. CCSD(T) potential energy and induced dipole surfaces for N2–H2(D2): retrieval of the collision-induced absorption integrated intensities in the regions of the fundamental and first overtone vibrational transitions.

    PubMed

    Buryak, Ilya; Lokshtanov, Sergei; Vigasin, Andrey

    2012-09-21

The present work aims at ab initio characterization of the integrated intensity temperature variation of collision-induced absorption (CIA) in N(2)-H(2)(D(2)). Global fits of the potential energy surface (PES) and induced dipole moment surface (IDS) were made on the basis of CCSD(T) (coupled cluster with single and double and perturbative triple excitations) calculations with aug-cc-pV(T,Q)Z basis sets. Basis set superposition error correction and extrapolation to the complete basis set (CBS) limit were applied to both the energy and the dipole moment. Classical second cross virial coefficient calculations accounting for the first quantum correction were employed to validate the quality of the obtained PES. The CIA temperature dependence was found to be in satisfactory agreement with available experimental data.
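The two corrections mentioned, counterpoise and CBS extrapolation, can be sketched with standard textbook formulas (a Helgaker-type X⁻³ two-point form is assumed here; the paper's exact scheme may differ):

```python
# Hedged sketch: a standard two-point extrapolation, not necessarily this
# paper's scheme. Correlation energies computed with cardinal numbers X and Y
# are assumed to behave as E_X = E_CBS + A * X**-3, which two calculations
# suffice to solve for E_CBS (X = 3, Y = 4 matches an aug-cc-pV(T,Q)Z pair).
def cbs_two_point(e_x, e_y, x=3, y=4):
    return (x**3 * e_x - y**3 * e_y) / (x**3 - y**3)

# Counterpoise-corrected interaction energy (Boys-Bernardi scheme): all three
# terms are evaluated in the full dimer basis.
def counterpoise_interaction(e_dimer, e_mono_a, e_mono_b):
    return e_dimer - e_mono_a - e_mono_b

# Synthetic check: energies generated from the assumed form recover E_CBS.
e_cbs_true, a = -1.0, 0.5
e_t = e_cbs_true + a / 3**3
e_q = e_cbs_true + a / 4**3
print(cbs_two_point(e_t, e_q))   # -> -1.0 (up to rounding)
```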

  5. Usability Guidelines for Product Recommenders Based on Example Critiquing Research

    NASA Astrophysics Data System (ADS)

    Pu, Pearl; Faltings, Boi; Chen, Li; Zhang, Jiyong; Viappiani, Paolo

Over the past decade, our group has developed a suite of decision tools based on example critiquing to help users find their preferred products in e-commerce environments. In this chapter, we survey important usability research work related to example critiquing and summarize the major results by deriving a set of usability guidelines. Our survey is focused on three key interaction activities between the user and the system: the initial preference elicitation process, the preference revision process, and the presentation of the system's recommendation results. To provide a basis for the derivation of the guidelines, we developed a multi-objective framework of three interacting criteria: accuracy, confidence, and effort (ACE). We use this framework to analyze our past work and provide a specific context for each guideline: when the system should maximize its ability to increase users' decision accuracy, when to increase user confidence, and when to minimize the interaction effort for the users. Due to the general nature of this multi-criteria model, the set of guidelines that we propose can be used to ease the usability engineering process of other recommender systems, especially those used in e-commerce environments. The ACE framework presented here is also the first in the field to evaluate the performance of preference-based recommenders from a user-centric point of view.

  6. Implementation of genetic conservation practices in a muskellunge propagation and stocking program

    USGS Publications Warehouse

    Jennings, Martin J.; Sloss, Brian L.; Hatzenbeler, Gene R.; Kampa, Jeffrey M.; Simonson, Timothy D.; Avelallemant, Steven P.; Lindenberger, Gary A.; Underwood, Bruce D.

    2010-01-01

    Conservation of genetic resources is a challenging issue for agencies managing popular sport fishes. To address the ongoing potential for genetic risks, we developed a comprehensive set of recommendations to conserve genetic diversity of muskellunge (Esox masquinongy) in Wisconsin, and evaluated the extent to which the recommendations can be implemented. Although some details are specific to Wisconsin's muskellunge propagation program, many of the practical issues affecting implementation are applicable to other species and production systems. We developed guidelines to restrict future broodstock collection operations to lakes with natural reproduction and to develop a set of brood lakes to use on a rotational basis within regional stock boundaries, but implementation will require considering lakes with variable stocking histories. Maintaining an effective population size sufficient to minimize the risk of losing alleles requires limiting broodstock collection to large lakes. Recommendations to better approximate the temporal distribution of spawning in hatchery operations and randomize selection of brood fish are feasible. Guidelines to modify rearing and distribution procedures face some logistic constraints. An evaluation of genetic diversity of hatchery-produced fish during 2008 demonstrated variable success representing genetic variation of the source population. Continued evaluation of hatchery operations will optimize operational efficiency while moving toward genetic conservation goals.

  8. Performance measurement in cancer care: uses and challenges.

    PubMed

    Lazar, G S; Desch, C E

    1998-05-15

Unnecessary, inappropriate, and futile care are given in all areas of health care, including cancer care. Not only does such care increase costs and waste precious resources, but patients may have adverse outcomes when the wrong care is given. One of the ways to address this issue is to measure performance with the use of administrative data sets. Through performance measurement, the best providers can be chosen, providers can be rewarded on the basis of the quality of their performance, opportunities for improvement can be identified, and variation in practice can be minimized. Purchasers should take a leadership role in creating data sets that will enhance clinical performance. Specifically, purchasers should require the following from payers: 1) staging information; 2) requirements and/or incentives for proper International Classification of Diseases coding, including other important (comorbid) conditions; 3) incentives or requirements for proper data collection if the payer is using a reimbursement strategy that places the risk on the provider; and 4) a willingness to collect and report information to providers of care, with a view toward increasing quality and decreasing the costs of cancer care. Demanding better clinical performance can lead to better outcomes. Once good data are presented to patients and providers, better clinical behavior and improved cancer care systems will quickly follow.

  9. A theory for protein dynamics: Global anisotropy and a normal mode approach to local complexity

    NASA Astrophysics Data System (ADS)

    Copperman, Jeremy; Romano, Pablo; Guenza, Marina

    2014-03-01

We propose a novel Langevin equation description for the dynamics of biological macromolecules by projecting the solvent and all atomic degrees of freedom onto a set of coarse-grained sites at the single-residue level. We utilize a multi-scale approach in which molecular dynamics simulations are performed to obtain equilibrium structural correlations, which serve as input to a modified Rouse-Zimm description that can be solved analytically. The normal mode solution provides a minimal basis set to account for important properties of biological polymers such as the anisotropic global structure and internal motion on a complex free-energy surface. This multi-scale modeling method predicts the dynamics of both global rotational diffusion and constrained internal motion from the picosecond to the nanosecond regime, and is quantitative when compared to both simulation trajectories and NMR relaxation times. Utilizing non-equilibrium sampling techniques and an explicit treatment of the free-energy barriers in the mode coordinates, the model is extended to include biologically important fluctuations in the microsecond regime, such as bubble and fork formation in nucleic acids and protein domain motion. This work was supported by the NSF under the Graduate STEM Fellows in K-12 Education (GK-12) program, grant DGE-0742540, and NSF grant DMR-0804145, with computational support from XSEDE and ACISS.
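The normal-mode step can be sketched for the simplest free-draining (Rouse-like) case; this toy construction is an assumed simplification, not the authors' modified Rouse-Zimm treatment:

```python
import numpy as np

# Toy free-draining (Rouse-like) sketch, assumed here as a simplification of
# the paper's modified Rouse-Zimm treatment: diagonalizing the linear-chain
# connectivity (Laplacian) matrix of N coarse-grained sites yields the normal
# modes; each eigenvalue sets one relaxation rate.
N = 10
M = 2.0 * np.eye(N)
M[0, 0] = M[-1, -1] = 1.0                 # free chain ends
i = np.arange(N - 1)
M[i, i + 1] = M[i + 1, i] = -1.0          # nearest-neighbor springs

eigvals, modes = np.linalg.eigh(M)        # mode spectrum and mode shapes
print(np.round(eigvals, 4))
# Analytic Rouse spectrum: 4 * sin(p*pi/(2N))^2 for p = 0 .. N-1
```

The zero mode is overall translation; the remaining eigenvalues map to the hierarchy of internal relaxation times.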

  10. Narrowing the error in electron correlation calculations by basis set re-hierarchization and use of the unified singlet and triplet electron-pair extrapolation scheme: Application to a test set of 106 systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Varandas, A. J. C., E-mail: varandas@uc.pt; Departamento de Física, Universidade Federal do Espírito Santo, 29075-910 Vitória; Pansini, F. N. N.

    2014-12-14

A method previously suggested to calculate the correlation energy at the complete one-electron basis set limit by reassignment of the basis hierarchical numbers and use of the unified singlet- and triplet-pair extrapolation scheme is applied to a test set of 106 systems, some with up to 48 electrons. The approach is utilized to obtain extrapolated correlation energies from raw values calculated with second-order Møller-Plesset perturbation theory and the coupled-cluster singles and doubles excitations method, some of the latter also with the perturbative triples corrections. The calculated correlation energies have also been used to predict atomization energies within an additive scheme. Good agreement is obtained with the best available estimates even when the (d, t) pair of hierarchical numbers is utilized to perform the extrapolations. This conceivably justifies that there is no strong reason to exclude double-zeta energies in extrapolations, especially if the basis is calibrated to comply with the theoretical model.

  11. The convergence of complete active space self-consistent-field configuration interaction including all single and double excitation energies to the complete basis set limit

    NASA Astrophysics Data System (ADS)

    Petersson, George A.; Malick, David K.; Frisch, Michael J.; Braunstein, Matthew

    2006-07-01

Examination of the convergence of full-valence complete active space self-consistent-field configuration interaction including all single and double excitation (CASSCF-CISD) energies with expansion of the one-electron basis set reveals a pattern very similar to the convergence of single-determinant energies. Calculations on the lowest four singlet states and the lowest four triplet states of N2 with the sequence of n-tuple-ζ augmented polarized (nZaP) basis sets (n = 2, 3, 4, 5, and 6) are used to establish the complete basis set limits. Full configuration-interaction (CI) and core electron contributions must be included for very accurate potential energy surfaces. However, a simple extrapolation scheme that has no adjustable parameters and requires nothing more demanding than CAS(10e-,8orb)-CISD/3ZaP calculations gives the Re, ωe, ωexe, Te, and De for these eight states with rms errors of 0.0006 Å, 4.43 cm-1, 0.35 cm-1, 0.063 eV, and 0.018 eV, respectively.

  12. 48 CFR 25.504-4 - Group award basis.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 1 2011-10-01 2011-10-01 false Group award basis. 25.504... SOCIOECONOMIC PROGRAMS FOREIGN ACQUISITION Evaluating Foreign Offers-Supply Contracts 25.504-4 Group award basis... a group basis. Assume the Buy American Act applies and the acquisition cannot be set aside for small...

  13. 48 CFR 25.504-4 - Group award basis.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 48 Federal Acquisition Regulations System 1 2013-10-01 2013-10-01 false Group award basis. 25.504... SOCIOECONOMIC PROGRAMS FOREIGN ACQUISITION Evaluating Foreign Offers-Supply Contracts 25.504-4 Group award basis... a group basis. Assume the Buy American Act applies and the acquisition cannot be set aside for small...

  14. 48 CFR 25.504-4 - Group award basis.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 48 Federal Acquisition Regulations System 1 2014-10-01 2014-10-01 false Group award basis. 25.504... SOCIOECONOMIC PROGRAMS FOREIGN ACQUISITION Evaluating Foreign Offers-Supply Contracts 25.504-4 Group award basis... a group basis. Assume the Buy American statute applies and the acquisition cannot be set aside for...

  15. 48 CFR 25.504-4 - Group award basis.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 48 Federal Acquisition Regulations System 1 2012-10-01 2012-10-01 false Group award basis. 25.504... SOCIOECONOMIC PROGRAMS FOREIGN ACQUISITION Evaluating Foreign Offers-Supply Contracts 25.504-4 Group award basis... a group basis. Assume the Buy American Act applies and the acquisition cannot be set aside for small...

  16. 48 CFR 25.504-4 - Group award basis.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 1 2010-10-01 2010-10-01 false Group award basis. 25.504... SOCIOECONOMIC PROGRAMS FOREIGN ACQUISITION Evaluating Foreign Offers-Supply Contracts 25.504-4 Group award basis... a group basis. Assume the Buy American Act applies and the acquisition cannot be set aside for small...

  17. Progress in the development of paper-based diagnostics for low-resource point-of-care settings

    PubMed Central

    Byrnes, Samantha; Thiessen, Gregory; Fu, Elain

    2014-01-01

    This Review focuses on recent work in the field of paper microfluidics that specifically addresses the goal of translating the multistep processes that are characteristic of gold-standard laboratory tests to low-resource point-of-care settings. A major challenge is to implement multistep processes with the robust fluid control required to achieve the necessary sensitivity and specificity of a given application in a user-friendly package that minimizes equipment. We review key work in the areas of fluidic controls for automation in paper-based devices, readout methods that minimize dedicated equipment, and power and heating methods that are compatible with low-resource point-of-care settings. We also highlight a focused set of recent applications and discuss future challenges. PMID:24256361

  18. Accurate Gaussian basis sets for atomic and molecular calculations obtained from the generator coordinate method with polynomial discretization.

    PubMed

    Celeste, Ricardo; Maringolo, Milena P; Comar, Moacyr; Viana, Rommel B; Guimarães, Amanda R; Haiduke, Roberto L A; da Silva, Albérico B F

    2015-10-01

Accurate Gaussian basis sets for atoms from H to Ba were obtained by means of the generator coordinate Hartree-Fock (GCHF) method based on a polynomial expansion to discretize the Griffin-Wheeler-Hartree-Fock (GWHF) equations. The discretization of the GWHF equations in this procedure is based on a mesh of points that is not equally distributed, in contrast with the original GCHF method. The results of atomic Hartree-Fock energies demonstrate the capability of these polynomial expansions in designing compact and accurate basis sets to be used in molecular calculations, and the maximum error found when compared to numerical values is only 0.788 mHartree for indium. Some test calculations with the B3LYP exchange-correlation functional for N2, F2, CO, NO, HF, and HCN show that total energies within 1.0 to 2.4 mHartree of the cc-pV5Z results are attained with our contracted bases with a much smaller number of polarization functions (2p1d and 2d1f for hydrogen and heavier atoms, respectively). Other molecular calculations performed here are also in very good agreement with experimental and cc-pV5Z results. The most important point to be mentioned here is that our generator coordinate basis sets required only a tiny fraction of the computational time when compared to B3LYP/cc-pV5Z calculations.

  19. Flat bases of invariant polynomials and P-matrices of E7 and E8

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Talamini, Vittorino

    2010-02-15

Let G be a compact group of linear transformations of a Euclidean space V. The G-invariant C^∞ functions can be expressed as C^∞ functions of a finite basic set of G-invariant homogeneous polynomials, sometimes called an integrity basis. The mathematical description of the orbit space V/G depends on the integrity basis too: it is realized through polynomial equations and inequalities expressing rank and positive semidefiniteness conditions of the P-matrix, a real symmetric matrix determined by the integrity basis. The choice of the basic set of G-invariant homogeneous polynomials forming an integrity basis is not unique, so the mathematical description of the orbit space is not unique either. If G is an irreducible finite reflection group, Saito et al. [Commun. Algebra 8, 373 (1980)] characterized some special basic sets of G-invariant homogeneous polynomials that they called flat. They also found explicitly the flat basic sets of invariant homogeneous polynomials of all the irreducible finite reflection groups except the two largest groups E7 and E8. In this paper the flat basic sets of invariant homogeneous polynomials of E7 and E8 and the corresponding P-matrices are determined explicitly. Using the results reported here, one is able to determine easily the P-matrices corresponding to any other integrity basis of E7 or E8. From the P-matrices one may then write down the equations and inequalities defining the orbit spaces of E7 and E8 relative to a flat basis or to any other integrity basis. The results obtained here may be employed concretely to study analytically the symmetry breaking in all theories where the symmetry group is one of the finite reflection groups E7 and E8 or one of the Lie groups E7 and E8 in their adjoint representations.

  20. Spectroscopic properties of Arx-Zn and Arx-Ag+ (x = 1,2) van der Waals complexes

    NASA Astrophysics Data System (ADS)

    Oyedepo, Gbenga A.; Peterson, Charles; Schoendorff, George; Wilson, Angela K.

    2013-03-01

Potential energy curves have been constructed using coupled cluster with singles, doubles, and perturbative triple excitations (CCSD(T)) in combination with all-electron and pseudopotential-based multiply augmented correlation consistent basis sets [m-aug-cc-pV(n + d)Z; m = singly, doubly, triply; n = D, T, Q, 5]. The effect of basis set superposition error on the spectroscopic properties of Ar-Zn, Ar2-Zn, Ar-Ag+, and Ar2-Ag+ van der Waals complexes was examined. The diffuse functions of the doubly and triply augmented basis sets have been constructed using the even-tempered expansion. The a posteriori counterpoise scheme of Boys and Bernardi and its generalized variant by Valiron and Mayer have been utilized to correct for basis set superposition error (BSSE) in the calculated spectroscopic properties of the diatomic and triatomic species. It is found that even at the extrapolated complete basis set limit for the energetic properties, the pseudopotential-based calculations still suffer from significant BSSE effects, unlike the all-electron basis sets. This indicates that the quality of the approximations used in the design of pseudopotentials can have a major impact on a seemingly valence-exclusive effect like BSSE. We confirm the experimentally determined equilibrium internuclear distance (re), binding energy (De), harmonic vibrational frequency (ωe), and C1Π ← X1Σ transition energy for ArZn and also predict the spectroscopic properties of the low-lying excited states of linear Ar2-Zn (X1Σg, 3Πg, 1Πg), Ar-Ag+ (X1Σ, 3Σ, 3Π, 3Δ, 1Σ, 1Π, 1Δ), and Ar2-Ag+ (X1Σg, 3Σg, 3Πg, 3Δg, 1Σg, 1Πg, 1Δg) complexes, using the CCSD(T) and MR-CISD + Q methods, to aid in their experimental characterization.

  1. Constrained minimization of smooth functions using a genetic algorithm

    NASA Technical Reports Server (NTRS)

    Moerder, Daniel D.; Pamadi, Bandu N.

    1994-01-01

    The use of genetic algorithms for minimization of differentiable functions that are subject to differentiable constraints is considered. A technique is demonstrated for converting the solution of the necessary conditions for a constrained minimum into an unconstrained function minimization. This technique is extended as a global constrained optimization algorithm. The theory is applied to calculating minimum-fuel ascent control settings for an energy state model of an aerospace plane.
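The conversion described above, turning the necessary conditions for a constrained minimum into an unconstrained minimization, can be sketched as follows. This is a hypothetical toy (a crude mutation-selection GA on the squared residual of the Lagrangian conditions for min x0² + x1² subject to x0 + x1 = 1), not the NASA implementation:

```python
import numpy as np

# Hypothetical toy, not the paper's algorithm: for min f(x) subject to
# g(x) = 0, the necessary conditions are grad f + lam * grad g = 0 and
# g(x) = 0.  Their squared residual is a non-negative function of (x, lam)
# whose global minimum, zero, sits at the constrained stationary point, so an
# unconstrained global minimizer such as a genetic algorithm can locate it.
# Example: f = x0^2 + x1^2, g = x0 + x1 - 1; solution (0.5, 0.5), lam = -1.
def residual(z):
    x, lam = z[:2], z[2]
    kkt = 2.0 * x + lam * np.ones(2)      # grad f + lam * grad g
    g = x[0] + x[1] - 1.0
    return kkt @ kkt + g * g

rng = np.random.default_rng(0)
pop = rng.normal(size=(60, 3))            # population of (x0, x1, lam) triples
for _ in range(300):
    kids = pop + rng.normal(scale=0.1, size=pop.shape)   # Gaussian mutation
    both = np.vstack([pop, kids])
    order = np.argsort([residual(z) for z in both])
    pop = both[order[:60]]                # elitist selection of the fittest

best = pop[0]
print(best)                               # approaches (0.5, 0.5, -1.0)
```

Because the residual is zero exactly at the constrained stationary point, any global minimizer applies; the GA is just one choice, as in the record above.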

  2. A projection-free method for representing plane-wave DFT results in an atom-centered basis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dunnington, Benjamin D.; Schmidt, J. R., E-mail: schmidt@chem.wisc.edu

    2015-09-14

Plane wave density functional theory (DFT) is a powerful tool for gaining accurate, atomic-level insight into bulk and surface structures. Yet the delocalized nature of the plane wave basis set hinders the application of many powerful post-computation analysis approaches, many of which rely on localized atom-centered basis sets. Traditionally, this gap has been bridged via projection-based techniques from a plane wave to an atom-centered basis. We instead propose an alternative projection-free approach utilizing direct calculation of matrix elements of the converged plane wave DFT Hamiltonian in an atom-centered basis. This projection-free approach yields a number of compelling advantages, including strict orthonormality of the resulting bands without artificial band mixing and access to the Hamiltonian matrix elements, while faithfully preserving the underlying DFT band structure. The resulting atomic orbital representation of the Kohn-Sham wavefunction and Hamiltonian provides a gateway to a wide variety of analysis approaches. We demonstrate the utility of the approach for a diverse set of chemical systems and example analysis approaches.
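Once the Hamiltonian and overlap matrix elements are tabulated in the non-orthogonal atom-centered basis, band energies follow from the generalized eigenproblem H C = S C diag(ε). A minimal numpy sketch of that final step (an assumed workflow with random stand-in matrices, not the authors' code):

```python
import numpy as np

# Toy sketch (assumed workflow, not the authors' code): H holds matrix
# elements <phi_i|H|phi_j> of the converged Hamiltonian and S the overlaps
# <phi_i|phi_j> of a non-orthogonal atom-centered basis; random stand-ins
# are used here purely to exercise the algebra.
rng = np.random.default_rng(1)
n = 6
A = rng.normal(size=(n, n))
H = (A + A.T) / 2.0                      # stand-in Hermitian Hamiltonian
B = rng.normal(size=(n, n))
S = B @ B.T + n * np.eye(n)              # stand-in positive-definite overlap

# Loewdin route: S^(-1/2) H S^(-1/2) has the generalized eigenvalues of
# (H, S); back-transforming gives S-orthonormal coefficient columns.
s_val, s_vec = np.linalg.eigh(S)
S_inv_half = s_vec @ np.diag(s_val ** -0.5) @ s_vec.T
eps, C_tilde = np.linalg.eigh(S_inv_half @ H @ S_inv_half)
C = S_inv_half @ C_tilde                 # columns satisfy H C = S C diag(eps)

print(eps)
```

The S-orthonormality of the columns (Cᵀ S C = I) is the discrete analogue of the "strict orthonormality of the resulting bands" claimed in the abstract.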

  3. Pectoral Fascial (PECS) I and II Blocks as Rescue Analgesia in a Patient Undergoing Minimally Invasive Cardiac Surgery.

    PubMed

    Yalamuri, Suraj; Klinger, Rebecca Y; Bullock, W Michael; Glower, Donald D; Bottiger, Brandi A; Gadsden, Jeffrey C

    Patients undergoing minimally invasive cardiac surgery have the potential for significant pain from the thoracotomy site. We report the successful use of pectoral nerve block types I and II (Pecs I and II) as rescue analgesia in a patient undergoing minimally invasive mitral valve repair. In this case, a 78-year-old man, with no history of chronic pain, underwent mitral valve repair via right anterior thoracotomy for severe mitral regurgitation. After extubation, he complained of 10/10 pain at the incision site that was minimally responsive to intravenous opioids. He required supplemental oxygen because of poor pulmonary mechanics, with shallow breathing and splinting due to pain, and subsequent intensive care unit readmission. Ultrasound-guided Pecs I and II blocks were performed on the right side with 30 mL of 0.2% ropivacaine with 1:400,000 epinephrine. The blocks resulted in near-complete chest wall analgesia and improved pulmonary mechanics for approximately 24 hours. After the single-injection blocks regressed, a second set of blocks was performed with 266 mg of liposomal bupivacaine mixed with bupivacaine. This second set of blocks provided extended analgesia for an additional 48 hours. The patient was weaned rapidly from supplemental oxygen after the blocks because of improved analgesia. Pectoral nerve blocks have been described in the setting of breast surgery to provide chest wall analgesia. We report the first successful use of Pecs blocks to provide effective chest wall analgesia for a patient undergoing minimally invasive cardiac surgery with thoracotomy. We believe that these blocks may provide an important nonopioid option for the management of pain during recovery from minimally invasive cardiac surgery.

  4. Density functional theory study of the interaction of vinyl radical, ethyne, and ethene with benzene, aimed to define an affordable computational level to investigate stability trends in large van der Waals complexes

    NASA Astrophysics Data System (ADS)

    Maranzana, Andrea; Giordana, Anna; Indarto, Antonius; Tonachini, Glauco; Barone, Vincenzo; Causà, Mauro; Pavone, Michele

    2013-12-01

Our purpose is to identify a computational level sufficiently dependable and affordable to assess trends in the interaction of a variety of radical or closed-shell unsaturated hydrocarbons A adsorbed on soot platelet models B. These systems, of environmental interest, would unavoidably have rather large sizes, thus prompting us to explore in this paper the performance of relatively low-level computational methods and compare them with higher-level reference results. To this end, the interaction of three complexes between non-polar species, vinyl radical, ethyne, or ethene (A) with benzene (B), is studied, since these species, involved themselves in growth processes of polycyclic aromatic hydrocarbons (PAHs) and soot particles, are small enough to allow high-level reference calculations of the interaction energy ΔEAB. Counterpoise-corrected interaction energies ΔEAB are used at all stages. (1) Density Functional Theory (DFT) unconstrained optimizations of the A-B complexes are carried out, using the B3LYP-D, ωB97X-D, and M06-2X functionals, with six basis sets: 6-31G(d), 6-311G(2d,p), and 6-311++G(3df,3pd); aug-cc-pVDZ and aug-cc-pVTZ; N07T. (2) Then, unconstrained optimizations by Møller-Plesset second order Perturbation Theory (MP2), with each basis set, allow subsequent single point Coupled Cluster Singles Doubles and perturbative estimate of the Triples energy computations with the same basis sets [CCSD(T)//MP2]. (3) Based on an additivity assumption of (i) the estimated MP2 energy at the complete basis set limit [EMP2/CBS] and (ii) the higher-order correlation energy effects in passing from MP2 to CCSD(T) at the aug-cc-pVTZ basis set, ΔECC-MP, a CCSD(T)/CBS estimate is obtained and taken as a computational energy reference.
At DFT, variations in ΔEAB with basis set are not large for the title molecules, and the three functionals perform rather satisfactorily even with rather small basis sets [6-31G(d) and N07T], exhibiting deviation from the computational reference of less than 1 kcal mol-1. The zero-point vibrational energy corrected estimates Δ(EAB+ZPE), obtained with the three functionals and the 6-31G(d) and N07T basis sets, are compared with experimental D0 measures, when available. In particular, this comparison is finally extended to the naphthalene and coronene dimers and to three π-π associations of different PAHs (R, made by 10, 16, or 24 C atoms) and P (80 C atoms).

  5. Density functional theory study of the interaction of vinyl radical, ethyne, and ethene with benzene, aimed to define an affordable computational level to investigate stability trends in large van der Waals complexes.

    PubMed

    Maranzana, Andrea; Giordana, Anna; Indarto, Antonius; Tonachini, Glauco; Barone, Vincenzo; Causà, Mauro; Pavone, Michele

    2013-12-28

Our purpose is to identify a computational level sufficiently dependable and affordable to assess trends in the interaction of a variety of radical or closed-shell unsaturated hydrocarbons A adsorbed on soot platelet models B. These systems, of environmental interest, would unavoidably have rather large sizes, thus prompting us to explore in this paper the performance of relatively low-level computational methods and compare them with higher-level reference results. To this end, the interaction of three complexes between non-polar species, vinyl radical, ethyne, or ethene (A) with benzene (B), is studied, since these species, involved themselves in growth processes of polycyclic aromatic hydrocarbons (PAHs) and soot particles, are small enough to allow high-level reference calculations of the interaction energy ΔEAB. Counterpoise-corrected interaction energies ΔEAB are used at all stages. (1) Density Functional Theory (DFT) unconstrained optimizations of the A-B complexes are carried out, using the B3LYP-D, ωB97X-D, and M06-2X functionals, with six basis sets: 6-31G(d), 6-311G(2d,p), and 6-311++G(3df,3pd); aug-cc-pVDZ and aug-cc-pVTZ; N07T. (2) Then, unconstrained optimizations by Møller-Plesset second order Perturbation Theory (MP2), with each basis set, allow subsequent single point Coupled Cluster Singles Doubles and perturbative estimate of the Triples energy computations with the same basis sets [CCSD(T)//MP2]. (3) Based on an additivity assumption of (i) the estimated MP2 energy at the complete basis set limit [EMP2/CBS] and (ii) the higher-order correlation energy effects in passing from MP2 to CCSD(T) at the aug-cc-pVTZ basis set, ΔECC-MP, a CCSD(T)/CBS estimate is obtained and taken as a computational energy reference.
At DFT, variations in ΔEAB with basis set are not large for the title molecules, and the three functionals perform rather satisfactorily even with rather small basis sets [6-31G(d) and N07T], exhibiting deviation from the computational reference of less than 1 kcal mol(-1). The zero-point vibrational energy corrected estimates Δ(EAB+ZPE), obtained with the three functionals and the 6-31G(d) and N07T basis sets, are compared with experimental D0 measures, when available. In particular, this comparison is finally extended to the naphthalene and coronene dimers and to three π-π associations of different PAHs (R, made by 10, 16, or 24 C atoms) and P (80 C atoms).

  6. Minimal sufficient positive-operator valued measure on a separable Hilbert space

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kuramochi, Yui, E-mail: kuramochi.yui.22c@st.kyoto-u.ac.jp

We introduce a concept of a minimal sufficient positive-operator valued measure (POVM), which is the least redundant POVM among the POVMs that have equivalent information about the measured quantum system. Assuming the system Hilbert space to be separable, we show that for a given POVM, a sufficient statistic called a Lehmann-Scheffé-Bahadur statistic induces a minimal sufficient POVM. We also show that every POVM has an equivalent minimal sufficient POVM and that such a minimal sufficient POVM is unique up to relabeling, neglecting null sets. We apply these results to discrete POVMs and to information conservation conditions proposed by the author.

  7. Youth Sports Clubs' Potential as Health-Promoting Setting: Profiles, Motives and Barriers

    ERIC Educational Resources Information Center

    Meganck, Jeroen; Scheerder, Jeroen; Thibaut, Erik; Seghers, Jan

    2015-01-01

    Setting and Objective: For decades, the World Health Organisation has promoted settings-based health promotion, but its application to leisure settings is minimal. Focusing on organised sports as an important leisure activity, the present study had three goals: exploring the health promotion profile of youth sports clubs, identifying objective…

  8. Orthonormal vector polynomials in a unit circle, Part I: Basis set derived from gradients of Zernike polynomials.

    PubMed

    Zhao, Chunyu; Burge, James H

    2007-12-24

Zernike polynomials provide a well known, orthogonal set of scalar functions over a circular domain, and are commonly used to represent wavefront phase or surface irregularity. A related set of orthogonal functions is given here which represent vector quantities, such as mapping distortion or wavefront gradient. These functions are generated from gradients of Zernike polynomials, made orthonormal using the Gram-Schmidt technique. This set provides a complete basis for representing vector fields that can be defined as a gradient of some scalar function. It is then efficient to transform from the coefficients of the vector functions to the scalar Zernike polynomials that represent the function whose gradient was fit. These new vector functions have immediate application for fitting data from a Shack-Hartmann wavefront sensor or for fitting mapping distortion for optical testing. A subsequent paper gives an additional set of vector functions consisting only of rotational terms with zero divergence. The two sets together provide a complete basis that can represent all vector distributions in a circular domain.

  9. [siRNAs with high specificity to the target: a systematic design by CRM algorithm].

    PubMed

    Alsheddi, T; Vasin, L; Meduri, R; Randhawa, M; Glazko, G; Baranova, A

    2008-01-01

'Off-target' silencing effects hinder the development of siRNA-based therapeutic and research applications. A common solution to this problem is to employ BLAST, which may miss significant alignments, or the exhaustive Smith-Waterman algorithm, which is very time-consuming. We have developed a Comprehensive Redundancy Minimizer (CRM) approach for mapping all unique sequences ("targets") 9-to-15 nt in size within large sets of sequences (e.g. transcriptomes). CRM outputs a list of potential siRNA candidates for every transcript of the particular species. These candidates can be further analyzed by traditional "set-of-rules" types of siRNA design tools. For human, 91% of transcripts are covered by candidate siRNAs with kernel targets of N = 15. We tested our approach on a collection of previously described, experimentally assessed siRNAs and found a significant correlation between efficacy and presence in the CRM-approved set (r = 0.215, p-value = 0.0001). An interactive database that contains a precompiled set of all human siRNA candidates with minimized redundancy is available at http://129.174.194.243. Application of CRM-based filtering minimizes potential "off-target" silencing effects and could improve routine siRNA applications.
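The core step of such a redundancy minimizer, collecting the k-mers that occur exactly once across a set of transcripts, can be sketched in a few lines. The function name and the toy two-transcript "transcriptome" below are illustrative, not the published CRM implementation:

```python
from collections import Counter

def unique_targets(transcripts, k=15):
    """Collect, per transcript, the k-mers that occur exactly once in the
    whole set: a k-mer unique within the transcriptome cannot cross-silence
    another transcript (toy sketch of the CRM idea)."""
    counts = Counter()
    for seq in transcripts.values():
        for i in range(len(seq) - k + 1):
            counts[seq[i:i + k]] += 1
    return {
        name: [seq[i:i + k]
               for i in range(len(seq) - k + 1)
               if counts[seq[i:i + k]] == 1]
        for name, seq in transcripts.items()
    }

# Tiny example (k = 5 for readability; real targets are 9-15 nt).
tx = {"t1": "ACGTACGTAC", "t2": "ACGTAGGTAC"}
targets = unique_targets(tx, k=5)
```

The shared prefix ACGTA occurs three times across the two toy transcripts and is therefore rejected, while k-mers such as GTACG survive as transcript-specific candidate targets.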

  10. Artificial neural network classification using a minimal training set - Comparison to conventional supervised classification

    NASA Technical Reports Server (NTRS)

    Hepner, George F.; Logan, Thomas; Ritter, Niles; Bryant, Nevin

    1990-01-01

Recent research has shown artificial neural networks (ANNs) to be capable of pattern recognition and the classification of image data. This paper examines the potential for the application of neural network computing to satellite image processing. A second objective is to provide a preliminary comparison of ANN classification with conventional supervised classification. An artificial neural network can be trained to do land-cover classification of satellite imagery using selected sites representative of each class, in a manner similar to conventional supervised classification. One of the major problems associated with recognition and classification of patterns from remotely sensed data is the time and cost of developing a set of training sites. This research compares the use of an ANN back-propagation classification procedure with a conventional supervised maximum likelihood classification procedure using a minimal training set. When using a minimal training set, the neural network is able to provide a land-cover classification superior to the classification derived from the conventional classification procedure. This research is the foundation for developing application parameters for further prototyping of software and hardware implementations of artificial neural networks in satellite image and geographic information processing.

  11. Data identification for improving gene network inference using computational algebra.

    PubMed

    Dimitrova, Elena; Stigler, Brandilyn

    2014-11-01

    Identification of models of gene regulatory networks is sensitive to the amount of data used as input. Considering the substantial costs in conducting experiments, it is of value to have an estimate of the amount of data required to infer the network structure. To minimize wasted resources, it is also beneficial to know which data are necessary to identify the network. Knowledge of the data and knowledge of the terms in polynomial models are often required a priori in model identification. In applications, it is unlikely that the structure of a polynomial model will be known, which may force data sets to be unnecessarily large in order to identify a model. Furthermore, none of the known results provides any strategy for constructing data sets to uniquely identify a model. We provide a specialization of an existing criterion for deciding when a set of data points identifies a minimal polynomial model when its monomial terms have been specified. Then, we relax the requirement of the knowledge of the monomials and present results for model identification given only the data. Finally, we present a method for constructing data sets that identify minimal polynomial models.

  12. Edge guided image reconstruction in linear scan CT by weighted alternating direction TV minimization.

    PubMed

    Cai, Ailong; Wang, Linyuan; Zhang, Hanming; Yan, Bin; Li, Lei; Xi, Xiaoqi; Li, Jianxin

    2014-01-01

Linear scan computed tomography (CT) is a promising imaging configuration with high scanning efficiency, but its data set is under-sampled and angularly limited, which makes high quality image reconstruction challenging. In this work, an edge guided total variation minimization reconstruction (EGTVM) algorithm is developed to deal with this problem. The proposed method combines total variation (TV) regularization with an iterative edge detection strategy: the edge weights of intermediate reconstructions are incorporated into the TV objective function. The optimization is efficiently solved by applying the alternating direction method of multipliers. A prudent and conservative edge detection strategy proposed in this paper can obtain the true edges while restricting the errors within an acceptable degree. Based on comparisons on both simulation studies and real CT data set reconstructions, EGTVM provides comparable or even better quality than the non-edge-guided reconstruction and the adaptive steepest descent-projection onto convex sets method. With the utilization of weighted alternating direction TV minimization and edge detection, EGTVM achieves fast and robust convergence and reconstructs high quality images when applied to linear scan CT with under-sampled data sets.
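The TV regularization at the core of EGTVM can be illustrated with a toy one-dimensional smoothed-TV gradient descent in pure NumPy. This is only a sketch of the TV term: the edge weighting and the alternating direction solver of the paper are omitted, and all names and parameter values are invented for illustration:

```python
import numpy as np

def tv_denoise_1d(y, lam=0.1, step=0.1, iters=200, eps=1e-2):
    """Gradient descent on 0.5*||x - y||^2 + lam * sum sqrt(dx^2 + eps),
    a smoothed total-variation objective (toy stand-in for a TV term)."""
    x = y.astype(float).copy()
    for _ in range(iters):
        dx = np.diff(x)
        w = dx / np.sqrt(dx * dx + eps)   # smoothed sign of the differences
        g = np.zeros_like(x)
        g[:-1] -= w                       # gradient of the TV term
        g[1:] += w
        x -= step * ((x - y) + lam * g)
    return x

noisy = np.array([0.0, 0.1, -0.1, 0.0, 1.0, 0.9, 1.1, 1.0])
clean = tv_denoise_1d(noisy)
```

The smoothed TV of the result never exceeds that of the input, so the jumps within each plateau of the piecewise-constant signal shrink; real CT reconstruction applies the same idea to 2-D images with a data-fidelity term built from the projections.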

  13. Maxwell Strata and Cut Locus in the Sub-Riemannian Problem on the Engel Group

    NASA Astrophysics Data System (ADS)

    Ardentov, Andrei A.; Sachkov, Yuri L.

    2017-12-01

    We consider the nilpotent left-invariant sub-Riemannian structure on the Engel group. This structure gives a fundamental local approximation of a generic rank 2 sub-Riemannian structure on a 4-manifold near a generic point (in particular, of the kinematic models of a car with a trailer). On the other hand, this is the simplest sub-Riemannian structure of step three. We describe the global structure of the cut locus (the set of points where geodesics lose their global optimality), the Maxwell set (the set of points that admit more than one minimizer), and the intersection of the cut locus with the caustic (the set of conjugate points along all geodesics). The group of symmetries of the cut locus is described: it is generated by a one-parameter group of dilations R+ and a discrete group of reflections Z2 × Z2 × Z2. The cut locus admits a stratification with 6 three-dimensional strata, 12 two-dimensional strata, and 2 one-dimensional strata. Three-dimensional strata of the cut locus are Maxwell strata of multiplicity 2 (for each point there are 2 minimizers). Two-dimensional strata of the cut locus consist of conjugate points. Finally, one-dimensional strata are Maxwell strata of infinite multiplicity, they consist of conjugate points as well. Projections of sub-Riemannian geodesics to the 2-dimensional plane of the distribution are Euler elasticae. For each point of the cut locus, we describe the Euler elasticae corresponding to minimizers coming to this point. Finally, we describe the structure of the optimal synthesis, i. e., the set of minimizers for each terminal point in the Engel group.

  14. Employing general fit-bases for construction of potential energy surfaces with an adaptive density-guided approach

    NASA Astrophysics Data System (ADS)

    Klinting, Emil Lund; Thomsen, Bo; Godtliebsen, Ian Heide; Christiansen, Ove

    2018-02-01

We present an approach to treat sets of general fit-basis functions in a single uniform framework, where the functional form is supplied on input, i.e., the use of different functions does not require new code to be written. The fit-basis functions can be used to carry out linear fits to the grid of single points, which are generated with an adaptive density-guided approach (ADGA). A non-linear conjugate gradient method is used to optimize non-linear parameters if such are present in the fit-basis functions. This means that a set of fit-basis functions with the same inherent shape as the potential cuts can be requested, and no other choices with regard to the fit-basis functions need to be made. The general fit-basis framework is explored in relation to anharmonic potentials for model systems, diatomic molecules, water, and imidazole. The behaviour and performance of Morse and double-well fit-basis functions are compared to those of polynomial fit-basis functions for unsymmetrical single-minimum and symmetrical double-well potentials. Furthermore, calculations for water and imidazole were carried out using both normal coordinates and hybrid optimized and localized coordinates (HOLCs). Our results suggest that choosing a suitable set of fit-basis functions can improve the stability of the fitting routine and the overall efficiency of potential construction by lowering the number of single point calculations required for the ADGA. It is possible to reduce the number of terms in the potential by choosing the Morse and double-well fit-basis functions. These effects are substantial for normal coordinates but become even more pronounced if HOLCs are used.
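The division of labor described above, a linear fit of expansion coefficients combined with optimization of any non-linear parameters in the fit-basis functions, can be sketched for a single Morse-shaped basis function. The coarse grid scan below stands in for the paper's non-linear conjugate gradient step, and all parameter values are invented for illustration:

```python
import numpy as np

def morse_basis(r, a, re):
    """Morse-shaped fit-basis function (1 - exp(-a (r - re)))^2."""
    return (1.0 - np.exp(-a * (r - re))) ** 2

# Synthetic "single point" energies from a known Morse curve (toy data).
r = np.linspace(0.8, 3.0, 12)
true_De, true_a, re = 4.5, 1.9, 1.1
energies = true_De * morse_basis(r, true_a, re)

# Outer loop over the non-linear parameter a (stand-in for conjugate
# gradient); the inner step is a plain linear least-squares fit of De.
best = None
for a in np.linspace(0.5, 3.0, 251):
    B = morse_basis(r, a, re)[:, None]   # design matrix: one basis function
    De = np.linalg.lstsq(B, energies, rcond=None)[0]
    err = float(np.sum((B @ De - energies) ** 2))
    if best is None or err < best[0]:
        best = (err, a, De[0])

err, a_fit, De_fit = best
```

Because the basis has the same inherent shape as the potential cut, a single function with a well-chosen non-linear parameter reproduces all twelve points; a polynomial basis would need several terms for comparable accuracy.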

  15. Analytical energy gradients for explicitly correlated wave functions. I. Explicitly correlated second-order Møller-Plesset perturbation theory

    NASA Astrophysics Data System (ADS)

    Győrffy, Werner; Knizia, Gerald; Werner, Hans-Joachim

    2017-12-01

    We present the theory and algorithms for computing analytical energy gradients for explicitly correlated second-order Møller-Plesset perturbation theory (MP2-F12). The main difficulty in F12 gradient theory arises from the large number of two-electron integrals for which effective two-body density matrices and integral derivatives need to be calculated. For efficiency, the density fitting approximation is used for evaluating all two-electron integrals and their derivatives. The accuracies of various previously proposed MP2-F12 approximations [3C, 3C(HY1), 3*C(HY1), and 3*A] are demonstrated by computing equilibrium geometries for a set of molecules containing first- and second-row elements, using double-ζ to quintuple-ζ basis sets. Generally, the convergence of the bond lengths and angles with respect to the basis set size is strongly improved by the F12 treatment, and augmented triple-ζ basis sets are sufficient to closely approach the basis set limit. The results obtained with the different approximations differ only very slightly. This paper is the first step towards analytical gradients for coupled-cluster singles and doubles with perturbative treatment of triple excitations, which will be presented in the second part of this series.

  16. A practical radial basis function equalizer.

    PubMed

    Lee, J; Beach, C; Tepedelenlioglu, N

    1999-01-01

A radial basis function (RBF) equalizer design process has been developed in which the number of basis function centers used is substantially smaller than conventionally required. The reduction of centers is accomplished in two steps. First, an algorithm is used to select a reduced set of centers that lie close to the decision boundary. Then the centers in this reduced set are grouped, and an average position is chosen to represent each group. Channel order and delay, which are determining factors in setting the initial number of centers, are estimated from regression analysis. In simulation studies, an RBF equalizer with more than a 2000-to-1 reduction in centers performed as well as the RBF equalizer without center reduction, and better than a conventional linear equalizer.
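The two-step reduction can be sketched with NumPy. The selection rule (keep the states closest to an opposite-class state) and the crude grouping below are illustrative simplifications, not the authors' algorithm:

```python
import numpy as np

def reduce_centers(states, labels, keep_frac=0.5, n_groups=2):
    """Step 1: keep the channel states nearest the decision boundary,
    measured by distance to the closest opposite-class state.
    Step 2: group the survivors per class and average each group.
    Assumes both classes are present in `labels`."""
    states, labels = np.asarray(states, float), np.asarray(labels)
    d = np.array([
        np.min(np.linalg.norm(states[labels != y] - s, axis=1))
        for s, y in zip(states, labels)
    ])
    keep = d <= np.quantile(d, keep_frac)
    centers = []
    for y in np.unique(labels):
        pts = states[keep & (labels == y)]
        # crude grouping: sort by first coordinate, split, average
        for chunk in np.array_split(pts[np.argsort(pts[:, 0])], n_groups):
            if len(chunk):
                centers.append(chunk.mean(axis=0))
    return np.array(centers)

states = [[0, 0], [0, 1], [5, 5], [1, 0], [1, 1], [6, 6]]
labels = [-1, -1, -1, 1, 1, 1]
centers = reduce_centers(states, labels, keep_frac=0.5, n_groups=1)
```

Here the two outliers [5, 5] and [6, 6] lie far from the decision boundary and are discarded; each class is then represented by a single averaged center.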

  17. Fast graph-based relaxed clustering for large data sets using minimal enclosing ball.

    PubMed

    Qian, Pengjiang; Chung, Fu-Lai; Wang, Shitong; Deng, Zhaohong

    2012-06-01

Although graph-based relaxed clustering (GRC) is one of the spectral clustering algorithms with straightforwardness and self-adaptability, it is sensitive to the parameters of the adopted similarity measure and also has high time complexity O(N³), which severely weakens its usefulness for large data sets. In order to overcome these shortcomings, after introducing certain constraints for GRC, an enhanced version of GRC [constrained GRC (CGRC)] is proposed to increase the robustness of GRC to the parameters of the adopted similarity measure, and accordingly, a novel algorithm called fast GRC (FGRC) based on CGRC is developed in this paper by using the core-set-based minimal enclosing ball approximation. A distinctive advantage of FGRC is that its asymptotic time complexity is linear with the data set size N. At the same time, FGRC also inherits the straightforwardness and self-adaptability from GRC, making the proposed FGRC a fast and effective clustering algorithm for large data sets. The advantages of FGRC are validated by various benchmarking and real data sets.
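The core-set-based minimal enclosing ball (MEB) approximation behind FGRC's linear scaling can be sketched with the classic Badoiu-Clarkson iteration; this toy version is a generic (1 + eps)-approximation, not the exact routine used by FGRC:

```python
import numpy as np

def minimal_enclosing_ball(points, iters=500):
    """Badoiu-Clarkson core-set iteration: repeatedly pull the center
    toward the farthest point with a shrinking 1/(i + 1) step size.
    About 1/eps^2 iterations give a (1 + eps)-approximate ball."""
    pts = np.asarray(points, float)
    c = pts[0].copy()
    for i in range(1, iters + 1):
        far = pts[np.argmax(np.linalg.norm(pts - c, axis=1))]
        c += (far - c) / (i + 1)
    r = float(np.linalg.norm(pts - c, axis=1).max())
    return c, r

corners = np.array([[0.0, 0.0], [0.0, 2.0], [2.0, 0.0], [2.0, 2.0]])
c, r = minimal_enclosing_ball(corners)
```

For the four corners of a square the center converges toward (1, 1) with radius near sqrt(2); each iteration costs one pass over the data, which is what makes the downstream clustering linear in the data set size N.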

  18. Application of quadratic optimization to supersonic inlet control.

    NASA Technical Reports Server (NTRS)

    Lehtinen, B.; Zeller, J. R.

    1972-01-01

    This paper describes the application of linear stochastic optimal control theory to the design of the control system for the air intake, the inlet, of a supersonic air-breathing propulsion system. The controls must maintain a stable inlet shock position in the presence of random airflow disturbances and prevent inlet unstart. Two different linear time invariant controllers are developed. One is designed to minimize a nonquadratic index, the expected frequency of inlet unstart, and the other is designed to minimize the mean square value of inlet shock motion. The quadratic equivalence principle is used to obtain a linear controller that minimizes the nonquadratic index. The two controllers are compared on the basis of unstart prevention, control effort requirements, and frequency response. It is concluded that while controls designed to minimize unstarts are desirable in that the index minimized is physically meaningful, computation time required is longer than for the minimum mean square shock position approach. The simpler minimum mean square shock position solution produced expected unstart frequency values which were not significantly larger than those of the nonquadratic solution.

  19. The Topological Basis Realization for Six Qubits and the Corresponding Heisenberg Spin -{1/2} Chain Model

    NASA Astrophysics Data System (ADS)

    Yang, Qi; Cao, Yue; Chen, Shiyin; Teng, Yue; Meng, Yanli; Wang, Gangcheng; Sun, Chunfang; Xue, Kang

    2018-03-01

In this paper, we construct a new set of orthonormal topological basis states for six qubits with the topological single loop d = 2. By acting on the subspace, we get a new five-dimensional (5D) reduced matrix. In addition, it is shown that the Heisenberg XXX spin-1/2 chain of six qubits can be constructed from the Temperley-Lieb algebra (TLA) generator; both the energy ground state and the spin singlet states of the system can be described by the set of topological basis states.

  20. Use of an auxiliary basis set to describe the polarization in the fragment molecular orbital method

    NASA Astrophysics Data System (ADS)

    Fedorov, Dmitri G.; Kitaura, Kazuo

    2014-03-01

    We developed a dual basis approach within the fragment molecular orbital formalism enabling efficient and accurate use of large basis sets. The method was tested on water clusters and polypeptides and applied to perform geometry optimization of chignolin (PDB: 1UAO) in solution at the level of DFT/6-31++G∗∗, obtaining a structure in agreement with experiment (RMSD of 0.4526 Å). The polarization in polypeptides is discussed with a comparison of the α-helix and β-strand.

  1. The Topological Basis Realization for Six Qubits and the Corresponding Heisenberg Spin-1/2 Chain Model

    NASA Astrophysics Data System (ADS)

    Yang, Qi; Cao, Yue; Chen, Shiyin; Teng, Yue; Meng, Yanli; Wang, Gangcheng; Sun, Chunfang; Xue, Kang

    2018-06-01

In this paper, we construct a new set of orthonormal topological basis states for six qubits with the topological single loop d = 2. By acting on the subspace, we get a new five-dimensional (5D) reduced matrix. In addition, it is shown that the Heisenberg XXX spin-1/2 chain of six qubits can be constructed from the Temperley-Lieb algebra (TLA) generator; both the energy ground state and the spin singlet states of the system can be described by the set of topological basis states.

  2. Positive end-expiratory pressure at minimal respiratory elastance represents the best compromise between mechanical stress and lung aeration in oleic acid induced lung injury.

    PubMed

    Carvalho, Alysson Roncally S; Jandre, Frederico C; Pino, Alexandre V; Bozza, Fernando A; Salluh, Jorge; Rodrigues, Rosana; Ascoli, Fabio O; Giannella-Neto, Antonio

    2007-01-01

    Protective ventilatory strategies have been applied to prevent ventilator-induced lung injury in patients with acute lung injury (ALI). However, adjustment of positive end-expiratory pressure (PEEP) to avoid alveolar de-recruitment and hyperinflation remains difficult. An alternative is to set the PEEP based on minimizing respiratory system elastance (Ers) by titrating PEEP. In the present study we evaluate the distribution of lung aeration (assessed using computed tomography scanning) and the behaviour of Ers in a porcine model of ALI, during a descending PEEP titration manoeuvre with a protective low tidal volume. PEEP titration (from 26 to 0 cmH2O, with a tidal volume of 6 to 7 ml/kg) was performed, following a recruitment manoeuvre. At each PEEP, helical computed tomography scans of juxta-diaphragmatic parts of the lower lobes were obtained during end-expiratory and end-inspiratory pauses in six piglets with ALI induced by oleic acid. The distribution of the lung compartments (hyperinflated, normally aerated, poorly aerated and non-aerated areas) was determined and the Ers was estimated on a breath-by-breath basis from the equation of motion of the respiratory system using the least-squares method. Progressive reduction in PEEP from 26 cmH2O to the PEEP at which the minimum Ers was observed improved poorly aerated areas, with a proportional reduction in hyperinflated areas. Also, the distribution of normally aerated areas remained steady over this interval, with no changes in non-aerated areas. The PEEP at which minimal Ers occurred corresponded to the greatest amount of normally aerated areas, with lesser hyperinflated, and poorly and non-aerated areas. Levels of PEEP below that at which minimal Ers was observed increased poorly and non-aerated areas, with concomitant reductions in normally inflated and hyperinflated areas. 
The PEEP at which minimal Ers occurred, obtained by descending PEEP titration with a protective low tidal volume, corresponded to the greatest amount of normally aerated areas, with lesser collapsed and hyperinflated areas. The institution of high levels of PEEP reduced poorly aerated areas but enlarged hyperinflated ones. Reduction in PEEP consistently enhanced poorly or non-aerated areas as well as tidal re-aeration. Hence, monitoring respiratory mechanics during a PEEP titration procedure may be a useful adjunct to optimize lung aeration.
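The breath-by-breath Ers estimate described above fits the single-compartment equation of motion, Paw = Ers*V + Rrs*V' + P0, by least squares. A minimal NumPy sketch on a synthetic square-wave-flow breath (all waveforms and parameter values are invented for illustration):

```python
import numpy as np

# Synthetic breath: square-wave flow (L/s) and its integrated volume (L).
t = np.linspace(0.0, 1.0, 101)                  # 1 s breath, dt = 0.01 s
flow = np.where(t < 0.5, 0.8, -0.8)
vol = np.concatenate(([0.0],
                      np.cumsum((flow[1:] + flow[:-1]) / 2.0) * 0.01))

# "Measured" airway pressure from known mechanics (cmH2O).
Ers_true, Rrs_true, peep_true = 25.0, 8.0, 10.0
paw = Ers_true * vol + Rrs_true * flow + peep_true

# Least-squares fit of the equation of motion Paw = Ers*V + Rrs*V' + P0.
A = np.column_stack([vol, flow, np.ones_like(t)])
Ers, Rrs, p0 = np.linalg.lstsq(A, paw, rcond=None)[0]
```

Repeating this fit for each breath during the descending titration yields the Ers-versus-PEEP curve whose minimum identifies the PEEP of interest.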

  3. Lotus utahensis: southern great basin legume for possible use in rangeland revegetation

    USDA-ARS?s Scientific Manuscript database

    Rangeland ecosystems in the western USA are increasingly vulnerable to wildland fires, weed invasion, and mismanagement. On many of these rangelands, revegetation/restoration may be required to improve degraded conditions, speed recovery, and minimize soil erosion. Legumes native to the Great Basi...

  4. 42 CFR 441.464 - State assurances.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... decision-making about the election of self-direction and provided on a timely basis to an individual or the representative which minimally includes the following: (i) Elements of self-direction compared to non-self... the service plan and service budget. (v) Grievance process. (vi) Risks and responsibilities of self...

  5. Context-sensitive autoassociative memories as expert systems in medical diagnosis

    PubMed Central

    Pomi, Andrés; Olivera, Fernando

    2006-01-01

    Background The complexity of our contemporary medical practice has impelled the development of different decision-support aids based on artificial intelligence and neural networks. Distributed associative memories are neural network models that fit perfectly well to the vision of cognition emerging from current neurosciences. Methods We present the context-dependent autoassociative memory model. The sets of diseases and symptoms are mapped onto a pair of basis of orthogonal vectors. A matrix memory stores the associations between the signs and symptoms, and their corresponding diseases. A minimal numerical example is presented to show how to instruct the memory and how the system works. In order to provide a quick appreciation of the validity of the model and its potential clinical relevance we implemented an application with real data. A memory was trained with published data of neonates with suspected late-onset sepsis in a neonatal intensive care unit (NICU). A set of personal clinical observations was used as a test set to evaluate the capacity of the model to discriminate between septic and non-septic neonates on the basis of clinical and laboratory findings. Results We show here that matrix memory models with associations modulated by context can perform automatic medical diagnosis. The sequential availability of new information over time makes the system progress in a narrowing process that reduces the range of diagnostic possibilities. At each step the system provides a probabilistic map of the different possible diagnoses to that moment. The system can incorporate the clinical experience, building in that way a representative database of historical data that captures geo-demographical differences between patient populations. 
The trained model succeeds in diagnosing late-onset sepsis within the test set of infants in the NICU: sensitivity 100%; specificity 80%; percentage of true positives 91%; percentage of true negatives 100%; accuracy (true positives plus true negatives over the totality of patients) 93.3%; and Cohen's kappa index 0.84. Conclusion Context-dependent associative memories can operate as medical expert systems. The model is presented in a simple and tutorial way to encourage straightforward implementations by medical groups. An application with real data, presented as a primary evaluation of the validity and potential of the model in medical diagnosis, shows that the model is a highly promising alternative in the development of accurate diagnostic tools. PMID:17121675
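The context-dependent matrix memory can be sketched with orthogonal one-hot codes: associations are stored as outer products of disease vectors with the Kronecker product of a context and a symptom vector, and retrieval is a single matrix-vector product. The sizes and the symptom-to-disease mapping below are invented for illustration; this is not the authors' clinical data:

```python
import numpy as np

n_dis, n_sym, n_ctx = 3, 4, 2
D, S, C = np.eye(n_dis), np.eye(n_sym), np.eye(n_ctx)   # orthogonal codes

# Store associations: in context 0, symptom j points to disease j % n_dis.
M = np.zeros((n_dis, n_ctx * n_sym))
for j in range(n_sym):
    M += np.outer(D[j % n_dis], np.kron(C[0], S[j]))

def diagnose(context, symptom):
    """Probabilistic map over diseases for a symptom seen in a context."""
    out = M @ np.kron(context, symptom)
    s = out.sum()
    return out / s if s else out

p = diagnose(C[0], S[1])       # symptom 1 in the trained context
```

Because the codes are orthogonal, retrieval in the trained context returns exactly the associated disease, while an untrained context returns no activation: the modulation-by-context property the model relies on.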

  6. Comparing fixed sampling with minimizer sampling when using k-mer indexes to find maximal exact matches.

    PubMed

    Almutairy, Meznah; Torng, Eric

    2018-01-01

    Bioinformatics applications and pipelines increasingly use k-mer indexes to search for similar sequences. The major problem with k-mer indexes is that they require lots of memory. Sampling is often used to reduce index size and query time. Most applications use one of two major types of sampling: fixed sampling and minimizer sampling. It is well known that fixed sampling will produce a smaller index, typically by roughly a factor of two, whereas it is generally assumed that minimizer sampling will produce faster query times since query k-mers can also be sampled. However, no direct comparison of fixed and minimizer sampling has been performed to verify these assumptions. We systematically compare fixed and minimizer sampling using the human genome as our database. We use the resulting k-mer indexes for fixed sampling and minimizer sampling to find all maximal exact matches between our database, the human genome, and three separate query sets, the mouse genome, the chimp genome, and an NGS data set. We reach the following conclusions. First, using larger k-mers reduces query time for both fixed sampling and minimizer sampling at a cost of requiring more space. If we use the same k-mer size for both methods, fixed sampling requires typically half as much space whereas minimizer sampling processes queries only slightly faster. If we are allowed to use any k-mer size for each method, then we can choose a k-mer size such that fixed sampling both uses less space and processes queries faster than minimizer sampling. The reason is that although minimizer sampling is able to sample query k-mers, the number of shared k-mer occurrences that must be processed is much larger for minimizer sampling than fixed sampling. In conclusion, we argue that for any application where each shared k-mer occurrence must be processed, fixed sampling is the right sampling method.

  7. Comparing fixed sampling with minimizer sampling when using k-mer indexes to find maximal exact matches

    PubMed Central

    Torng, Eric

    2018-01-01

    Bioinformatics applications and pipelines increasingly use k-mer indexes to search for similar sequences. The major problem with k-mer indexes is that they require lots of memory. Sampling is often used to reduce index size and query time. Most applications use one of two major types of sampling: fixed sampling and minimizer sampling. It is well known that fixed sampling will produce a smaller index, typically by roughly a factor of two, whereas it is generally assumed that minimizer sampling will produce faster query times since query k-mers can also be sampled. However, no direct comparison of fixed and minimizer sampling has been performed to verify these assumptions. We systematically compare fixed and minimizer sampling using the human genome as our database. We use the resulting k-mer indexes for fixed sampling and minimizer sampling to find all maximal exact matches between our database, the human genome, and three separate query sets, the mouse genome, the chimp genome, and an NGS data set. We reach the following conclusions. First, using larger k-mers reduces query time for both fixed sampling and minimizer sampling at a cost of requiring more space. If we use the same k-mer size for both methods, fixed sampling requires typically half as much space whereas minimizer sampling processes queries only slightly faster. If we are allowed to use any k-mer size for each method, then we can choose a k-mer size such that fixed sampling both uses less space and processes queries faster than minimizer sampling. The reason is that although minimizer sampling is able to sample query k-mers, the number of shared k-mer occurrences that must be processed is much larger for minimizer sampling than fixed sampling. In conclusion, we argue that for any application where each shared k-mer occurrence must be processed, fixed sampling is the right sampling method. PMID:29389989
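The two sampling schemes compared above can be sketched directly; these few-line versions (with a toy sequence and small k and w) capture only the definitions, not the indexing machinery of the paper:

```python
def fixed_sampling(seq, k, w):
    """Index every w-th k-mer position (fixed sampling)."""
    return {i: seq[i:i + k] for i in range(0, len(seq) - k + 1, w)}

def minimizer_sampling(seq, k, w):
    """Index the lexicographically smallest k-mer in every window of w
    consecutive k-mers (minimizer sampling)."""
    picked = {}
    for start in range(len(seq) - k - w + 2):
        kmer, i = min((seq[i:i + k], i) for i in range(start, start + w))
        picked[i] = kmer
    return picked

seq = "ACGTACGTGA"
fixed = fixed_sampling(seq, k=3, w=3)
mini = minimizer_sampling(seq, k=3, w=3)
```

Both schemes guarantee a sampled position in every window of w k-mers; the minimizer choice is sequence-dependent, which is what lets query k-mers be sampled consistently with the database.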

  8. Gap-minimal systems of notations and the constructible hierarchy

    NASA Technical Reports Server (NTRS)

    Lucian, M. L.

    1972-01-01

    If a constructibly countable ordinal alpha is a gap ordinal, then the order type of the set of index ordinals smaller than alpha is exactly alpha. The gap ordinals are the only points of discontinuity of a certain ordinal-valued function. The notion of gap minimality for well ordered systems of notations is defined, and the existence of gap-minimal systems of notations of arbitrarily large constructibly countable length is established.

  9. Efficient Deterministic Finite Automata Minimization Based on Backward Depth Information

    PubMed Central

    Liu, Desheng; Huang, Zhiping; Zhang, Yimeng; Guo, Xiaojun; Su, Shaojing

    2016-01-01

Obtaining a minimal automaton is a fundamental issue in the theory and practical implementation of deterministic finite automata (DFAs). A minimization algorithm is presented in this paper that consists of two main phases. In the first phase, the backward depth information is built, and the state set of the DFA is partitioned into many blocks. In the second phase, the state set is refined using a hash table. The minimization algorithm has a lower time complexity O(n) than a naive comparison of transitions O(n²). Few states need to be refined by the hash table, because most states have been partitioned by the backward depth information in the coarse partition. This method achieves greater generality than previous methods because building the backward depth information is independent of the topological complexity of the DFA. The proposed algorithm can be applied not only to the minimization of acyclic automata or simple cyclic automata, but also to automata with high topological complexity. Overall, the proposal has three advantages: lower time complexity, greater generality, and scalability. A comparison to Hopcroft’s algorithm demonstrates experimentally that the algorithm runs faster than traditional algorithms. PMID:27806102
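For contrast with the backward-depth approach, the baseline idea of DFA minimization (iteratively splitting blocks of states whose transitions lead to different blocks) fits in a few lines of Moore-style partition refinement; this is a generic sketch, not the algorithm of the paper:

```python
def minimize_dfa(states, alphabet, delta, accepting):
    """Moore-style partition refinement: states are equivalent when they
    agree on acceptance and their transitions land in equivalent blocks."""
    part = {s: s in accepting for s in states}        # coarse partition
    while True:
        sig = {s: (part[s],) + tuple(part[delta[s][a]] for a in alphabet)
               for s in states}
        ids = {v: i for i, v in enumerate(sorted(set(sig.values())))}
        new = {s: ids[sig[s]] for s in states}
        if len(set(new.values())) == len(set(part.values())):
            return new                                # partition is stable
        part = new

# Toy DFA in which states 1 and 2 are behaviorally identical.
delta = {0: {"a": 1, "b": 2}, 1: {"a": 1, "b": 2}, 2: {"a": 1, "b": 2}}
merged = minimize_dfa([0, 1, 2], "ab", delta, accepting={1, 2})
```

Each refinement pass here costs O(n) signature computations, but up to n passes may be needed in the worst case; the paper's backward-depth precomputation and Hopcroft's algorithm both avoid that worst case.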

  10. Universal behavior of generalized causal set d’Alembertians in curved spacetime

    NASA Astrophysics Data System (ADS)

    Belenchia, Alessio

    2016-07-01

Causal set non-local wave operators allow both for the definition of an action for causal set theory and the study of deviations from local physics that may have interesting phenomenological consequences. It was previously shown that, in all dimensions, the (unique) minimal discrete operators give averaged continuum non-local operators that reduce to □ - R/2 in the local limit. Recently, dropping the constraint of minimality, it was shown that there exists an infinite number of discrete operators satisfying basic physical requirements and with the right local limit in flat spacetime. In this work, we consider this entire class of generalized causal set d’Alembertians in curved spacetimes and extend to them the result about the universality of the -R/2 factor. Finally, we comment on the relation of this result to the Einstein equivalence principle.

  11. Microendoscopy of the eustachian tube and the middle ear

    NASA Astrophysics Data System (ADS)

    Hopf, Juergen U. G.; Linnarz, Marietta; Gundlach, Peter; Scherer, Hans H.; Lutze-Koffroth, C.; Loerke, S.; Voege, Karl H.; Tschepe, Johannes; Mueller, Gerhard J.

    1992-08-01

    Progressive miniaturization of flexible fiberoptic instruments has made it possible to perform atraumatic endoscopy of the Eustachian tube and tympanic cavity with an intact ear drum. By means of a special set of carrier- and balloon-catheters, which are partly actively steerable, flexible microendoscopes with outside diameters of 290 - 700 micrometers are inserted through the nasal cavity into the nasopharyngeal opening of the Eustachian tube and carefully advanced into the middle ear compartment under permanent direct visual control. Second generation microendoscopes with outside diameters of 750 to 1000 micrometers are equipped with a one-direction tip-steering mechanism which allows deflection up to 90 degrees. In addition, the use of two special types of four-function scopes (outside diameter: 1.6 mm and 1.8 mm) fitted with a one-lumen working channel is presented. This new technique of `Transnasal Tubo-Tympanoscopy' (TTT) requires only local anesthesia, is normally performed on an outpatient basis, and is indicated for the diagnosis of any disturbances of the sound conducting apparatus (ear drum and ossicular chain), such as chronic otitis media and otosclerosis, and of those sensorineural hearing disorders for which, until now, only traditional surgical tympanoscopy could provide morphological information on the pathogenesis of the hearing loss, e.g., on assumed round window ruptures. With this minimally invasive and minimally traumatizing method, pathological alterations of the ossicular chain as well as obstructions in the cartilaginous and the osseous parts of the Eustachian tube can be directly visualized.

  12. Increasing farmers' adoption of agricultural index insurance: The search for a better index

    NASA Astrophysics Data System (ADS)

    Muneepeerakul, C. P.

    2015-12-01

    Weather index insurance promises to provide farmers with financial resilience when they are struck by adverse weather conditions, owing to its minimal moral hazard, low transaction cost, and swift compensation. Despite these advantages, index insurance has so far seen a low level of adoption. One of the major causes is the presence of "basis risk": the risk of receiving an insurance payoff that falls short of the actual losses. One source of this basis risk is production basis risk: the probability that the selected weather indices and their thresholds do not correspond to actual damages. Here, we investigate how to reduce this production basis risk, using current knowledge in non-linear analysis and stochastic modeling from the fields of ecology and hydrology. We demonstrate how the inclusion of rainfall stochasticity can reduce production basis risk while identifying events that do not need to be insured. Through these findings, we show how much we can improve farmers' adoption of agricultural index insurance under different design contexts.

  13. Unique reflector arrangement within very wide field of view for multibeam antennas

    NASA Astrophysics Data System (ADS)

    Dragone, C.

    1983-12-01

    It is pointed out that Cassegrainian and Gregorian reflector arrangements are needed for multibeam ground station and satellite antennas. A Cassegrainian arrangement is considered, taking aberrations into account. Dragone (1982) has presented a requirement for the minimization of astigmatism. In the present investigation, a formula is presented for the deformation coefficients needed to eliminate coma through a slight deformation of the reflectors. The significance of the residual astigmatism, given by a derived equation, is examined, and attention is given to a compact reflector arrangement that results from three optimizations with respect to aberration minimization.

  14. Space Station logistics policy - Risk management from the top down

    NASA Technical Reports Server (NTRS)

    Paules, Granville; Graham, James L., Jr.

    1990-01-01

    Considerations are presented in the area of risk management specifically relating to logistics and system supportability. These considerations form a basis for confident application of concurrent engineering principles to a development program, aiming at simultaneous consideration of support and logistics requirements within the engineering process as the system concept and designs develop. It is shown that, by applying such a process, the chances of minimizing program logistics and supportability risk in the long term can be improved. The problem of analyzing and minimizing integrated logistics risk for the Space Station Freedom Program is discussed.

  15. Lake Wobegon Dice

    ERIC Educational Resources Information Center

    Moraleda, Jorge; Stork, David G.

    2012-01-01

    We introduce Lake Wobegon dice, where each die is "better than the set average." Specifically, these dice have the paradoxical property that on every roll, each die is more likely to roll above the set average than below it. We also show how to construct minimal optimal Lake Wobegon sets for all "n" [greater…

  16. Two Methods for Efficient Solution of the Hitting-Set Problem

    NASA Technical Reports Server (NTRS)

    Vatan, Farrokh; Fijany, Amir

    2005-01-01

    A paper addresses much of the same subject matter as that of Fast Algorithms for Model-Based Diagnosis (NPO-30582), which appears elsewhere in this issue of NASA Tech Briefs. However, in the paper, the emphasis is more on the hitting-set problem (also known as the transversal problem), which is well known among experts in combinatorics. The authors' primary interest in the hitting-set problem lies in its connection to the diagnosis problem: it is a theorem of model-based diagnosis that, in the set-theory representation of the components of a system, the minimal diagnoses of the system are its minimal hitting sets. In the paper, the hitting-set problem (and, hence, the diagnosis problem) is translated from a combinatorial to a computational problem by mapping it onto the Boolean-satisfiability and integer-programming problems. The paper goes on to describe developments nearly identical to those summarized in the cited companion NASA Tech Briefs article, including the utilization of Boolean-satisfiability and integer-programming techniques to reduce the computation time and/or memory needed to solve the hitting-set problem.
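The cited theorem (minimal diagnoses are the minimal hitting sets of the conflict sets) can be made concrete with a brute-force enumerator. The paper's SAT and integer-programming encodings scale far better; this sketch only shows the object being computed, on a hypothetical two-conflict example:

```python
from itertools import combinations

def minimal_hitting_sets(conflict_sets):
    """Enumerate all minimal hitting sets by increasing size.
    A candidate is a hitting set if it intersects every conflict set;
    it is minimal if no already-found hitting set is contained in it."""
    universe = sorted(set().union(*conflict_sets))
    hits = lambda cand: all(cand & s for s in conflict_sets)
    minimal = []
    for r in range(1, len(universe) + 1):
        for combo in combinations(universe, r):
            cand = set(combo)
            if hits(cand) and not any(m <= cand for m in minimal):
                minimal.append(cand)
    return minimal
```

For conflicts {1,2} and {2,3}, the minimal hitting sets (candidate diagnoses) are {2} and {1,3}.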

  17. Dimensional analysis using toric ideals: primitive invariants.

    PubMed

    Atherton, Mark A; Bates, Ronald A; Wynn, Henry P

    2014-01-01

    Classical dimensional analysis in its original form starts by expressing the units for derived quantities, such as force, in terms of power products of basic units [Formula: see text] etc. This suggests the use of toric ideal theory from algebraic geometry. Within this, the Graver basis provides a unique primitive basis in a well-defined sense, which typically has more terms than the standard Buckingham approach. Some textbook examples are revisited and the full set of primitive invariants found. First, a worked example based on convection is introduced to recall the Buckingham method, but using computer algebra to obtain an integer [Formula: see text] matrix from the initial integer [Formula: see text] matrix holding the exponents for the derived quantities. The [Formula: see text] matrix defines the dimensionless variables. Rather than this integer linear algebra approach, however, it is shown how, by staying with the power product representation, the full set of invariants (dimensionless groups) is obtained directly from the toric ideal defined by [Formula: see text]. One candidate for the set of invariants is a simple basis of the toric ideal. This, although larger than the rank of [Formula: see text], is typically not unique. However, the alternative Graver basis is unique and defines a maximal set of invariants, which are primitive in a simple sense. In addition to the running example, four examples are taken from: a windmill, convection, electrodynamics, and the hydrogen atom. The method reveals some named invariants. A selection of computer algebra packages is used to show the considerable ease with which both a simple basis and a Graver basis can be found.
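The integer linear-algebra (Buckingham) route that the abstract contrasts with the toric-ideal approach can be sketched concretely. In this assumed example (a pendulum, not one of the paper's cases), columns of the dimension matrix hold the exponents of the base units time and length for the quantities T (period), L (length), and g (gravity); an integer nullspace vector gives the exponents of a dimensionless group:

```python
import math
from fractions import Fraction
from functools import reduce

def integer_nullspace(A):
    """Nullspace basis of an integer matrix via Gauss-Jordan over the
    rationals; each basis vector is rescaled to integer entries."""
    rows = [[Fraction(x) for x in row] for row in A]
    m, n = len(rows), len(rows[0])
    pivots, r = [], 0
    for c in range(n):
        if r == m:
            break
        piv = next((i for i in range(r, m) if rows[i][c] != 0), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        rows[r] = [x / rows[r][c] for x in rows[r]]
        for i in range(m):
            if i != r and rows[i][c] != 0:
                f = rows[i][c]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        pivots.append(c)
        r += 1
    basis = []
    for fc in (c for c in range(n) if c not in pivots):
        v = [Fraction(0)] * n
        v[fc] = Fraction(1)
        for i, pc in enumerate(pivots):
            v[pc] = -rows[i][fc]
        scale = reduce(math.lcm, (x.denominator for x in v), 1)
        basis.append([int(x * scale) for x in v])
    return basis

# Columns: T, L, g.  Rows: exponents of time, length.
# T = time^1, L = length^1, g = length^1 * time^-2.
dim_matrix = [[1, 0, -2],
              [0, 1,  1]]
```

The single nullspace vector [2, -1, 1] says T² g / L is dimensionless, i.e. T ∝ √(L/g).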

  18. 42 CFR 457.700 - Basis, scope, and applicability.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Strategic Planning, Reporting, and Evaluation § 457.700 Basis, scope, and applicability. (a) Statutory basis... strategic planning, reports, and program budgets; and (2) Section 2108 of the Act, which sets forth... strategic planning, monitoring, reporting and evaluation under title XXI. (c) Applicability. The...

  19. 42 CFR 457.700 - Basis, scope, and applicability.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Strategic Planning, Reporting, and Evaluation § 457.700 Basis, scope, and applicability. (a) Statutory basis... strategic planning, reports, and program budgets; and (2) Section 2108 of the Act, which sets forth... strategic planning, monitoring, reporting and evaluation under title XXI. (c) Applicability. The...

  20. 50 CFR 403.04 - Determinations and hearings under section 109(c) of the MMPA.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... management program the state must provide for a process, consistent with section 109(c) of the Act, to... must include the elements set forth below. (b) Basis, purpose, and scope. The process set forth in this... made solely on the basis of the record developed at the hearing. The state agency in making its final...

  1. Time Domain Propagation of Quantum and Classical Systems using a Wavelet Basis Set Method

    NASA Astrophysics Data System (ADS)

    Lombardini, Richard; Nowara, Ewa; Johnson, Bruce

    2015-03-01

    The use of an orthogonal wavelet basis set (Optimized Maximum-N Generalized Coiflets) to effectively model physical systems in the time domain, in particular the electromagnetic (EM) pulse and the quantum mechanical (QM) wavefunction, is examined in this work. Although past research has demonstrated the benefits of wavelet basis sets for computationally expensive problems, owing to their multiresolution properties, the overlapping supports of neighboring wavelet basis functions pose problems when dealing with boundary conditions, especially with material interfaces in the EM case. Specifically, this talk addresses this issue using the idea of derivative matching with fictitious grid points (T. A. Driscoll and B. Fornberg), but replacing the fictitious grid points with fictitious wavelet projections in conjunction with wavelet reconstruction filters. Two-dimensional (2D) systems are analyzed, an EM pulse incident on silver cylinders and the QM electron wave packet circling the proton in a hydrogen atom system (reduced to 2D), and the new wavelet method is compared to the popular finite-difference time-domain technique.
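The Coiflet machinery with boundary-matched projections is beyond a short sketch, but the multiresolution decomposition it relies on can be illustrated with the simplest orthogonal wavelet, the Haar transform (my choice for illustration, not the authors' basis):

```python
import math

def haar_step(signal):
    """One level of the orthonormal Haar transform: split an even-length
    signal into coarse averages (approximation) and details."""
    s = 1 / math.sqrt(2)
    pairs = list(zip(signal[0::2], signal[1::2]))
    approx = [(a + b) * s for a, b in pairs]
    detail = [(a - b) * s for a, b in pairs]
    return approx, detail

def haar_inverse(approx, detail):
    """Exact reconstruction from one level of Haar coefficients."""
    s = 1 / math.sqrt(2)
    out = []
    for a, d in zip(approx, detail):
        out.extend([(a + d) * s, (a - d) * s])
    return out
```

Smooth regions of a signal produce near-zero detail coefficients, which is what lets wavelet methods concentrate resolution where the field varies rapidly.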

  2. Endoscopic Injection of Low Dose Triamcinolone: a Simple, Minimally Invasive and Effective Therapy for Interstitial Cystitis with Hunner Lesions.

    PubMed

    Funaro, Michael G; King, Alexandra N; Stern, Joel N H; Moldwin, Robert M; Bahlani, Sonia

    2018-05-18

    To investigate the efficacy of low dose triamcinolone injection for effectiveness and durability in interstitial cystitis/bladder pain syndrome (IC/BPS) patients with Hunner lesions (HL). Clinical data from patients with HL who underwent endoscopic submucosal injection of triamcinolone were reviewed: demographics, pre/post-operative pain and nocturia scores, and long-term clinical outcomes were assessed. Duration of response was estimated by time to repeat procedure, evaluated using the Kaplan-Meier estimator. 36 patients who received injections of triamcinolone between 2011 and 2015 were included. Median age±SD of patients was 61.5±12.0 years; 28 (77.8%) of patients were female and 8 (22.2%) were male. 26 patients (72.2%) received only 1 set of injections, 8 (22.2%) received 2 sets of injections, and 2 (5.56%) received 3 or more sets of injections. Average time between injections in those receiving more than one set was 344.9 days (median: 313.5, range: 77-714). Pre-procedural pain scores were 8.3±1.2 (mean±SD) on a Likert pain scale (0-10), and mean post-procedural pain scores at approximately one month were 3.8±2.2 (p<0.001). Mean pre-procedural nocturia bother scores were 7.5±2.0 and mean post-procedural nocturia bother scores were 5.1±2.5 (p<0.001). Endoscopic submucosal injection of low dose triamcinolone in IC/BPS patients with HL is an effective and durable adjunct to existing treatment modalities. This approach is associated with low morbidity and can be performed on an outpatient basis. Copyright © 2018. Published by Elsevier Inc.

  3. The effect of shower/bath frequency on the health and operational effectiveness of soldiers in a field setting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hall, L.C.; Daniels, J.I.

    1991-01-31

    Dermal disease can be a significant cause of morbidity among soldiers in a combat setting. For example, among American combat troops in Vietnam, disability from skin disease was one of the single most important medical causes of man-days lost from combat. Currently, the US Army makes shower or bath facilities available to soldiers in the field on a weekly basis. US Army after-action reports and anecdotal descriptions from the field indicate that this may not be an optimal regimen for the maintenance of personal hygiene, especially with respect to diseases of the skin. Determination of the optimal frequency of showering or bathing for soldiers in a combat setting is complicated by the fact that soldiers in the US Army may be involved in field exercises or combat in many different areas of the world with a variety of climatic conditions. Although certain aspects of the role of environmental factors in the incidence and severity of dermal disease have been documented, the role of hygiene in the potential mitigation of these effects has not been evaluated. The present project entails a comprehensive review and analysis of available literature in order to determine the health impact of shower/bath frequency for soldiers in a combat setting. An integral component of this work is an evaluation of the impact of climate, and of the microclimate produced by clothing, on the type, frequency, and severity of skin disease. A separate but related area of interest involves evaluating whether the use of antimicrobial soaps or similar products minimizes the incidence of skin infections by decreasing populations of disease-causing microorganisms on the skin. 13 refs., 2 figs., 2 tabs.

  4. A Unified Approach to Functional Principal Component Analysis and Functional Multiple-Set Canonical Correlation.

    PubMed

    Choi, Ji Yeh; Hwang, Heungsun; Yamamoto, Michio; Jung, Kwanghee; Woodward, Todd S

    2017-06-01

    Functional principal component analysis (FPCA) and functional multiple-set canonical correlation analysis (FMCCA) are data reduction techniques for functional data that are collected in the form of smooth curves or functions over a continuum such as time or space. In FPCA, low-dimensional components are extracted from a single functional dataset such that they explain the most variance of the dataset, whereas in FMCCA, low-dimensional components are obtained from each of multiple functional datasets in such a way that the associations among the components are maximized across the different sets. In this paper, we propose a unified approach to FPCA and FMCCA. The proposed approach subsumes both techniques as special cases. Furthermore, it permits a compromise between the techniques, such that components are obtained from each set of functional data to maximize their associations across different datasets, while accounting for the variance of the data well. We propose a single optimization criterion for the proposed approach, and develop an alternating regularized least squares algorithm to minimize the criterion in combination with basis function approximations to functions. We conduct a simulation study to investigate the performance of the proposed approach based on synthetic data. We also apply the approach for the analysis of multiple-subject functional magnetic resonance imaging data to obtain low-dimensional components of blood-oxygen level-dependent signal changes of the brain over time, which are highly correlated across the subjects as well as representative of the data. The extracted components are used to identify networks of neural activity that are commonly activated across the subjects while carrying out a working memory task.

  5. 48 CFR 16.103 - Negotiating contract type.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... CONTRACTING METHODS AND CONTRACT TYPES TYPES OF CONTRACTS Selecting Contract Types 16.103 Negotiating contract... basic profit motive of business enterprise, shall be used when the risk involved is minimal or can be...) Contracts on a firm fixed-price basis other than those for major systems or research and development, and (3...

  6. The Neural Basis of Cognitive Control: Response Selection and Inhibition

    ERIC Educational Resources Information Center

    Goghari, Vina M.; MacDonald, Angus W., III

    2009-01-01

    The functional neuroanatomy of tasks that recruit different forms of response selection and inhibition has, to our knowledge, never been directly addressed in a single fMRI study using similar stimulus-response paradigms in which differences between scanning time and sequence, stimuli, and experimenter instructions were minimized. Twelve right-handed…

  7. Family Life Education Needs of Mentally Disabled Adolescents.

    ERIC Educational Resources Information Center

    Schultz, Jerelyn B.; Adams, Donna U.

    1987-01-01

    Administered 50 needs statements to 134 minimally and mildly mentally disabled adolescent students to identify their family life education needs as a basis for curriculum development. Identified six clusters or groups of family life education needs: Basic Nutrition, Teenage Pregnancy, Sex Education, Developmental Tasks of Adolescents, Marriage and…

  8. CONTINUOUS MICRO-SORTING OF COMPLEX WASTE PLASTICS PARTICLE MIXTURES VIA LIQUID-FLUIDIZED BED CLASSIFICATION (LFBC) FOR WASTE MINIMIZATION AND RECYCLING

    EPA Science Inventory

    A fundamental investigation is proposed to provide a technical basis for the development of a novel, liquid-fluidized bed classification (LFBC) technology for the continuous separation of complex waste plastic mixtures for in-process recycling and waste minimization. Although ...

  9. Charge and energy minimization in electrical/magnetic stimulation of nervous tissue

    NASA Astrophysics Data System (ADS)

    Jezernik, Sašo; Sinkjaer, Thomas; Morari, Manfred

    2010-08-01

    In this work we address the problem of stimulating nervous tissue with the minimal necessary energy at reduced/minimal charge. Charge minimization is related to a valid safety concern (avoidance and reduction of stimulation-induced tissue and electrode damage). Energy minimization plays a role in battery-driven electrical or magnetic stimulation systems (increased lifetime, repetition rates, reduction of power requirements, thermal management). Extensive new theoretical results are derived by employing an optimal control theory framework. These results include derivation of the optimal electrical stimulation waveform for a mixed energy/charge minimization problem, derivation of the charge-balanced energy-minimal electrical stimulation waveform, solutions of a pure charge minimization problem with and without a constraint on the stimulation amplitude, and derivation of the energy-minimal magnetic stimulation waveform. Depending on the set stimulus pulse duration, energy and charge reductions of up to 80% are deemed possible. Results are verified in simulations with an active, mammalian-like nerve fiber model.

  11. Evaluation of a School-Based Teen Obesity Prevention Minimal Intervention

    ERIC Educational Resources Information Center

    Abood, Doris A.; Black, David R.; Coster, Daniel C.

    2008-01-01

    Objective: A school-based nutrition education minimal intervention (MI) was evaluated. Design: The design was experimental, with random assignment at the school level. Setting: Seven schools were randomly assigned as experimental, and 7 as delayed-treatment. Participants: The experimental group included 551 teens, and the delayed treatment group…

  12. Introduction [Chapter 1

    Treesearch

    R. C. Musselman; D. G Fox; A. W. Schoettle; C. M. Regan

    1994-01-01

    Wilderness ecosystems in the United States are federally mandated and set aside by the Wilderness Act. They are managed to minimize human impact using methods that leave these systems, to the extent possible, in their natural state uninfluenced by manipulation or disruption by humans. Management often involves controlling or minimizing visual impact by enforcing strict...

  13. Majorization as a Tool for Optimizing a Class of Matrix Functions.

    ERIC Educational Resources Information Center

    Kiers, Henk A.

    1990-01-01

    General algorithms are presented that can be used for optimizing matrix trace functions subject to certain constraints on the parameters. The parameter set that minimizes the majorizing function also decreases the matrix trace function, providing a monotonically convergent algorithm for minimizing the matrix trace function iteratively. (SLD)
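The majorization idea behind such algorithms can be shown on a minimal instance (an assumed example, not one from the article): to minimize f(x) = Σ|x − aᵢ|, majorize each absolute value by a quadratic that touches it at the current iterate; the surrogate's minimizer is a closed-form weighted average, and each step decreases f monotonically:

```python
def mm_median(data, x0, iters=200, eps=1e-9):
    """Majorize-minimize for f(x) = sum |x - a_i|.
    |x - a| <= (x - a)^2 / (2|x_k - a|) + |x_k - a| / 2 with equality
    at x = x_k, so minimizing the quadratic surrogate gives a
    weighted-average update that never increases f."""
    x = x0
    for _ in range(iters):
        w = [1.0 / max(abs(x - a), eps) for a in data]  # surrogate weights
        x = sum(wi * a for wi, a in zip(w, data)) / sum(w)
    return x
```

The iterates descend toward the sample median, the minimizer of f.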

  14. Pricing health benefits: a cost-minimization approach.

    PubMed

    Miller, Nolan H

    2005-09-01

    We study the role of health benefits in an employer's compensation strategy, given the overall goal of minimizing total compensation cost (wages plus health-insurance cost). When employees' health status is private information, the employer's basic benefit package consists of a base wage and a moderate health plan, with a generous plan available for an additional charge. We show that in setting the charge for the generous plan, a cost-minimizing employer should act as a monopolist who sells "health plan upgrades" to its workers, and we discuss ways tax policy can encourage efficiency under cost-minimization and alternative pricing rules.

  15. Approximate solution of the p-median minimization problem

    NASA Astrophysics Data System (ADS)

    Il'ev, V. P.; Il'eva, S. D.; Navrotskaya, A. A.

    2016-09-01

    A version of the facility location problem (the well-known p-median minimization problem) and its generalization, the problem of minimizing a supermodular set function, are studied. These problems are NP-hard, and they are approximately solved by a gradient algorithm that is a discrete analog of the steepest descent algorithm. A priori bounds on the worst-case behavior of the gradient algorithm for the problems under consideration are obtained. As a consequence, a bound on the performance guarantee of the gradient algorithm for the p-median minimization problem is obtained in terms of the production and transportation cost matrix.
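The abstract describes the gradient algorithm only as a discrete steepest-descent analogue. A plausible sketch in that spirit (hypothetical, not necessarily the authors' exact scheme) is the greedy "drop" heuristic: start with every site open and repeatedly close the site whose removal increases the total assignment cost the least, until p sites remain:

```python
def greedy_p_median(cost, p):
    """Greedy 'drop' heuristic for p-median minimization.
    cost[i][j] = cost of serving client i from site j; each client is
    always assigned to its cheapest open site."""
    n_sites = len(cost[0])
    open_sites = set(range(n_sites))
    total = lambda sites: sum(min(row[j] for j in sites) for row in cost)
    while len(open_sites) > p:
        # Close the site whose removal hurts the objective the least.
        best = min(open_sites, key=lambda j: total(open_sites - {j}))
        open_sites.remove(best)
    return open_sites, total(open_sites)
```

On a small cost matrix this descent picks the pair of sites with the lowest total assignment cost.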

  16. Kohn-Sham potentials from electron densities using a matrix representation within finite atomic orbital basis sets

    NASA Astrophysics Data System (ADS)

    Zhang, Xing; Carter, Emily A.

    2018-01-01

    We revisit the static response function-based Kohn-Sham (KS) inversion procedure for determining the KS effective potential that corresponds to a given target electron density within finite atomic orbital basis sets. Instead of expanding the potential in an auxiliary basis set, we directly update the potential in its matrix representation. Through numerical examples, we show that the reconstructed density rapidly converges to the target density. Preliminary results are presented to illustrate the possibility of obtaining a local potential in real space from the optimized potential in its matrix representation. We have further applied this matrix-based KS inversion approach to density functional embedding theory. A proof-of-concept study of a solvated proton transfer reaction demonstrates the method's promise.

  17. Optimized System Identification

    NASA Technical Reports Server (NTRS)

    Juang, Jer-Nan; Longman, Richard W.

    1999-01-01

    In system identification, one usually cares most about finding a model whose outputs are as close as possible to the true system outputs when the same input is applied to both. However, most system identification algorithms do not minimize this output error. Often they minimize model equation error instead, as in typical least-squares fits using a finite-difference model, and it is seen here that this distinction is significant. Here, we develop a set of system identification algorithms that minimize output error for multi-input/multi-output and multi-input/single-output systems. This is done with sequential quadratic programming iterations on the nonlinear least-squares problems, with an eigendecomposition to handle indefinite second partials. This optimization minimizes a nonlinear function of many variables and hence can converge to local minima. To handle this problem, we start the iterations from the OKID (Observer/Kalman filter Identification) algorithm result. Not only has OKID proved very effective in practice; it also minimizes the output error of an observer that, as the data set grows large, converges to minimizing the criterion of interest here. Hence, it is a particularly good starting point for the nonlinear iterations. Examples show that the methods developed here eliminate the bias, often observed with other system identification methods, of either over-estimating or under-estimating the damping of vibration modes in lightly damped structures.
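The output-error idea can be sketched on a toy first-order model (an assumed example; the paper's SQP machinery for MIMO systems is far more elaborate): simulate the model's own output, propagate parameter sensitivities alongside it, and descend the true simulation-error cost rather than the one-step equation error:

```python
def simulate(a, b, u, n):
    """Simulate yhat[k+1] = a*yhat[k] + b*u[k] from yhat[0] = 0, together
    with the sensitivities d yhat/da and d yhat/db (exact gradients)."""
    y, sa, sb = [0.0], [0.0], [0.0]
    for k in range(n - 1):
        y.append(a * y[k] + b * u[k])
        sa.append(y[k] + a * sa[k])   # d yhat[k+1] / da
        sb.append(u[k] + a * sb[k])   # d yhat[k+1] / db
    return y, sa, sb

def fit_output_error(y_meas, u, a=0.5, b=0.0, lr=0.01, iters=500):
    """Gradient descent on the output (simulation) error
    J = sum (yhat[k] - y_meas[k])^2, using the sensitivity recursions."""
    n = len(y_meas)
    for _ in range(iters):
        yhat, sa, sb = simulate(a, b, u, n)
        resid = [yh - ym for yh, ym in zip(yhat, y_meas)]
        a -= lr * 2 * sum(r * s for r, s in zip(resid, sa))
        b -= lr * 2 * sum(r * s for r, s in zip(resid, sb))
    return a, b
```

Starting from a deliberately wrong model, the descent drives the simulated output toward the measured output.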

  18. Distance majorization and its applications.

    PubMed

    Chi, Eric C; Zhou, Hua; Lange, Kenneth

    2014-08-01

    The problem of minimizing a continuously differentiable convex function over an intersection of closed convex sets is ubiquitous in applied mathematics. It is particularly interesting when it is easy to project onto each separate set, but nontrivial to project onto their intersection. Algorithms based on Newton's method such as the interior point method are viable for small to medium-scale problems. However, modern applications in statistics, engineering, and machine learning are posing problems with potentially tens of thousands of parameters or more. We revisit this convex programming problem and propose an algorithm that scales well with dimensionality. Our proposal is an instance of a sequential unconstrained minimization technique and revolves around three ideas: the majorization-minimization principle, the classical penalty method for constrained optimization, and quasi-Newton acceleration of fixed-point algorithms. The performance of our distance majorization algorithms is illustrated in several applications.
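A minimal sketch of the core trick, under assumptions of my own choosing (two easy-to-project sets in the plane, f the squared distance to a target point): the squared distance to each set C is majorized by the squared distance to the projection of the current iterate onto C, so each update is an unconstrained least-squares step, and growing the penalty forces feasibility:

```python
import math

def proj_disk(x):
    """Projection onto the unit disk."""
    r = math.hypot(*x)
    return x if r <= 1 else (x[0] / r, x[1] / r)

def proj_halfplane(x, c=0.5):
    """Projection onto the half-plane {x0 >= c}."""
    return (max(x[0], c), x[1])

def project_intersection(z, iters=100):
    """Distance majorization: dist(x, C_i)^2 <= ||x - P_i(x_k)||^2 with
    equality at x_k, so minimizing
      0.5*||x - z||^2 + (rho/2) * sum_i ||x - P_i(x_k)||^2
    has the closed-form update below; rho grows to enforce feasibility."""
    x, rho = z, 1.0
    for _ in range(iters):
        p1, p2 = proj_disk(x), proj_halfplane(x)
        x = tuple((zi + rho * (a + b)) / (1 + 2 * rho)
                  for zi, a, b in zip(z, p1, p2))
        rho *= 1.2
    return x
```

For z = (2, 2), the iterates converge to (1/√2, 1/√2), the point of the disk-and-half-plane intersection closest to z.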

  19. 2016 American College of Rheumatology/European League Against Rheumatism criteria for minimal, moderate, and major clinical response in adult dermatomyositis and polymyositis: An International Myositis Assessment and Clinical Studies Group/Paediatric Rheumatology International Trials Organisation Collaborative Initiative.

    PubMed

    Aggarwal, Rohit; Rider, Lisa G; Ruperto, Nicolino; Bayat, Nastaran; Erman, Brian; Feldman, Brian M; Oddis, Chester V; Amato, Anthony A; Chinoy, Hector; Cooper, Robert G; Dastmalchi, Maryam; Fiorentino, David; Isenberg, David; Katz, James D; Mammen, Andrew; de Visser, Marianne; Ytterberg, Steven R; Lundberg, Ingrid E; Chung, Lorinda; Danko, Katalin; García-De la Torre, Ignacio; Song, Yeong Wook; Villa, Luca; Rinaldi, Mariangela; Rockette, Howard; Lachenbruch, Peter A; Miller, Frederick W; Vencovsky, Jiri

    2017-05-01

    To develop response criteria for adult dermatomyositis (DM) and polymyositis (PM). Expert surveys, logistic regression, and conjoint analysis were used to develop 287 definitions using core set measures. Myositis experts rated greater improvement among multiple pairwise scenarios in conjoint analysis surveys, where different levels of improvement in 2 core set measures were presented. The PAPRIKA (Potentially All Pairwise Rankings of All Possible Alternatives) method determined the relative weights of core set measures and conjoint analysis definitions. The performance characteristics of the definitions were evaluated on patient profiles using expert consensus (gold standard) and were validated using data from a clinical trial. The nominal group technique was used to reach consensus. Consensus was reached for a conjoint analysis-based continuous model using absolute per cent change in core set measures (physician, patient, and extramuscular global activity, muscle strength, Health Assessment Questionnaire, and muscle enzyme levels). A total improvement score (range 0-100), determined by summing scores for each core set measure, was based on improvement in and relative weight of each core set measure. Thresholds for minimal, moderate, and major improvement were ≥20, ≥40, and ≥60 points in the total improvement score. The same criteria were chosen for juvenile DM, with different improvement thresholds. Sensitivity and specificity in DM/PM patient cohorts were 85% and 92%, 90% and 96%, and 92% and 98% for minimal, moderate, and major improvement, respectively. Definitions were validated in the clinical trial analysis for differentiating the physician rating of improvement (p<0.001). The response criteria for adult DM/PM consisted of the conjoint analysis model based on absolute per cent change in 6 core set measures, with thresholds for minimal, moderate, and major improvement. Published by the BMJ Publishing Group Limited. 
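The published improvement thresholds map directly to a lookup; the sketch below encodes only the adult DM/PM cut points stated above (the "no response" label for scores below 20 is my wording, not part of the criteria):

```python
def response_category(total_improvement):
    """Map the ACR/EULAR total improvement score (0-100) for adult DM/PM
    onto the published thresholds: >=20 minimal, >=40 moderate,
    >=60 major. The 'no response' label is an assumption for scores
    below the minimal threshold."""
    if total_improvement >= 60:
        return "major"
    if total_improvement >= 40:
        return "moderate"
    if total_improvement >= 20:
        return "minimal"
    return "no response"
```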

  20. Assessing and minimizing contamination in time of flight based validation data

    NASA Astrophysics Data System (ADS)

    Lennox, Kristin P.; Rosenfield, Paul; Blair, Brenton; Kaplan, Alan; Ruz, Jaime; Glenn, Andrew; Wurtz, Ronald

    2017-10-01

    Time of flight experiments are the gold standard method for generating labeled training and testing data for the neutron/gamma pulse shape discrimination problem. As the popularity of supervised classification methods increases in this field, there will also be increasing reliance on time of flight data for algorithm development and evaluation. However, time of flight experiments are subject to various sources of contamination that lead to neutron and gamma pulses being mislabeled. Such labeling errors have a detrimental effect on classification algorithm training and testing, and should therefore be minimized. This paper presents a method for identifying minimally contaminated data sets from time of flight experiments and estimating the residual contamination rate. This method leverages statistical models describing neutron and gamma travel time distributions and is easily implemented using existing statistical software. The method produces a set of optimal intervals that balance the trade-off between interval size and nuisance particle contamination, and its use is demonstrated on a time of flight data set for Cf-252. The particular properties of the optimal intervals for the demonstration data are explored in detail.
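The trade-off the paper optimizes (interval size versus nuisance-particle contamination) can be sketched in a toy form, assuming Gaussian travel-time models and illustrative parameters of my own choosing rather than the paper's fitted distributions:

```python
import math

def norm_cdf(x, mu, sigma):
    """Normal CDF via the error function."""
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

def interval_stats(lo, hi, mu_n, sig_n, mu_g, sig_g, frac_g=0.5):
    """For a candidate neutron-labeling time window [lo, hi], return the
    fraction of neutrons captured and the expected gamma contamination
    rate among pulses falling inside the window."""
    p_n = norm_cdf(hi, mu_n, sig_n) - norm_cdf(lo, mu_n, sig_n)
    p_g = norm_cdf(hi, mu_g, sig_g) - norm_cdf(lo, mu_g, sig_g)
    captured = (1 - frac_g) * p_n + frac_g * p_g
    contamination = frac_g * p_g / captured if captured else 0.0
    return p_n, contamination
```

With gammas arriving early (e.g. 3 ns) and neutrons late (e.g. 30 ns), a window around the neutron peak captures most neutrons at negligible gamma contamination; widening it toward the gamma peak trades capture for purity.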
