Rossi, Tuomas P; Lehtola, Susi; Sakko, Arto; Puska, Martti J; Nieminen, Risto M
2015-03-01
We present an approach for generating local numerical basis sets of improving accuracy for first-principles nanoplasmonics simulations within time-dependent density functional theory. The method is demonstrated for copper, silver, and gold nanoparticles that are of experimental interest but computationally demanding due to the semi-core d-electrons that affect their plasmonic response. The basis sets are constructed by augmenting numerical atomic orbital basis sets by truncated Gaussian-type orbitals generated by the completeness-optimization scheme, which is applied to the photoabsorption spectra of homoatomic metal atom dimers. We obtain basis sets of improving accuracy up to the complete basis set limit and demonstrate that the performance of the basis sets transfers to simulations of larger nanoparticles and nanoalloys as well as to calculations with various exchange-correlation functionals. This work promotes the use of the local basis set approach of controllable accuracy in first-principles nanoplasmonics simulations and beyond. PMID:25747068
Property-optimized Gaussian basis sets for molecular response calculations
NASA Astrophysics Data System (ADS)
Rappoport, Dmitrij; Furche, Filipp
2010-10-01
With recent advances in electronic structure methods, first-principles calculations of electronic response properties, such as linear and nonlinear polarizabilities, have become possible for molecules with more than 100 atoms. Basis set incompleteness is typically the main source of error in such calculations since traditional diffuse augmented basis sets are too costly to use or suffer from near linear dependence. To address this problem, we construct the first comprehensive set of property-optimized augmented basis sets for elements H-Rn except lanthanides. The new basis sets build on the Karlsruhe segmented contracted basis sets of split-valence to quadruple-zeta valence quality and add a small number of moderately diffuse basis functions. The exponents are determined variationally by maximization of atomic Hartree-Fock polarizabilities using analytical derivative methods. The performance of the resulting basis sets is assessed using a set of 313 molecular static Hartree-Fock polarizabilities. The mean absolute basis set errors are 3.6%, 1.1%, and 0.3% for property-optimized basis sets of split-valence, triple-zeta, and quadruple-zeta valence quality, respectively. Density functional and second-order Møller-Plesset polarizabilities show similar basis set convergence. We demonstrate the efficiency of our basis sets by computing static polarizabilities of icosahedral fullerenes up to C720 using hybrid density functional theory.
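The response property targeted by these basis sets, the static polarizability, is defined as α = -∂²E/∂F² at zero field. The sketch below illustrates that definition with a finite-field central difference; the quadratic toy "energy" and all numerical values are invented for the example, standing in for an SCF energy evaluated at finite field strengths.

```python
# Central-difference estimate of a static dipole polarizability,
# alpha = -d^2 E / dF^2 at zero field -- the quantity diffuse
# augmentation functions are tuned to describe.  The quadratic "energy"
# is a toy model; e0 and alpha are invented numbers, not paper results.

def toy_energy(field, e0=-40.0, alpha=11.1):
    """Stand-in for an electronic energy at external field `field` (a.u.)."""
    return e0 - 0.5 * alpha * field ** 2

def finite_field_polarizability(energy, step=1e-3):
    """alpha ~= -(E(+h) - 2 E(0) + E(-h)) / h**2 (central difference)."""
    h = step
    return -(energy(h) - 2.0 * energy(0.0) + energy(-h)) / h ** 2

# for an exactly quadratic model the central difference recovers alpha
assert abs(finite_field_polarizability(toy_energy) - 11.1) < 1e-5
```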
Unambiguous optimization of effective potentials in finite basis sets.
Jacob, Christoph R
2011-12-28
The optimization of effective potentials is of interest in density-functional theory (DFT) in two closely related contexts. First, the evaluation of the functional derivative of orbital-dependent exchange-correlation functionals requires the application of optimized effective potential methods. Second, the optimization of the effective local potential that yields a given electron density is important both for the development of improved approximate functionals and for the practical application of embedding schemes based on DFT. However, in all cases this optimization turns into an ill-posed problem if a finite basis set is introduced for the Kohn-Sham orbitals. So far, this problem has not been solved satisfactorily. Here, a new approach to overcome the ill-posed nature of such finite-basis set methods is presented for the optimization of the effective local potential that yields a given electron density. This new scheme can be applied with orbital basis sets of reasonable size and makes it possible to vary the basis sets for the orbitals and for the potential independently, while providing an unambiguous potential that systematically approaches the numerical reference.
Gidofalvi, Gergely; Mazziotti, David A
2014-01-16
Molecule-optimized basis sets, based on approximate natural orbitals, are developed for accelerating the convergence of quantum calculations with strongly correlated (multireferenced) electrons. We use a low-cost approximate solution of the anti-Hermitian contracted Schrödinger equation (ACSE) for the one- and two-electron reduced density matrices (RDMs) to generate an approximate set of natural orbitals for strongly correlated quantum systems. The natural-orbital basis set is truncated to generate a molecule-optimized basis set whose rank matches that of a standard correlation-consistent basis set optimized for the atoms. We show that basis-set truncation by approximate natural orbitals can be viewed as a one-electron unitary transformation of the Hamiltonian operator and suggest an extension of approximate natural-orbital truncations through two-electron unitary transformations of the Hamiltonian operator, such as those employed in the solution of the ACSE. The molecule-optimized basis set from the ACSE improves the accuracy of the equivalent standard atom-optimized basis set at little additional computational cost. We illustrate the method with the potential energy curves of hydrogen fluoride and diatomic nitrogen. Relative to the hydrogen fluoride potential energy curve from the ACSE in a polarized triple-ζ basis set, the ACSE curve in a molecule-optimized basis set, equivalent in size to a polarized double-ζ basis, has a nonparallelity error of 0.0154 au, which is significantly better than the nonparallelity error of 0.0252 au from the polarized double-ζ basis set.
Geminal embedding scheme for optimal atomic basis set construction in correlated calculations
Sorella, S.; Devaux, N.; Dagrada, M.; Mazzola, G.; Casula, M.
2015-12-28
We introduce an efficient method to construct optimal and system adaptive basis sets for use in electronic structure and quantum Monte Carlo calculations. The method is based on an embedding scheme in which a reference atom is singled out from its environment, while the entire system (atom and environment) is described by a Slater determinant or its antisymmetrized geminal power (AGP) extension. The embedding procedure described here allows for the systematic and consistent contraction of the primitive basis set into geminal embedded orbitals (GEOs), with a dramatic reduction of the number of variational parameters necessary to represent the many-body wave function, for a chosen target accuracy. Within the variational Monte Carlo method, the Slater or AGP part is determined by a variational minimization of the energy of the whole system in presence of a flexible and accurate Jastrow factor, representing most of the dynamical electronic correlation. The resulting GEO basis set opens the way for a fully controlled optimization of many-body wave functions in electronic structure calculation of bulk materials, namely, containing a large number of electrons and atoms. We present applications on the water molecule, the volume collapse transition in cerium, and the high-pressure liquid hydrogen.
NASA Astrophysics Data System (ADS)
Gidopoulos, Nikitas I.; Lathiotakis, Nektarios N.
2013-10-01
The Comment by Friedrich does not dispute the central result of our paper [Phys. Rev. A 85, 052508 (2012)] that nonanalytic behavior is present in long-established mathematical pathologies arising in the solution of finite-basis optimized effective potential (OEP) equations. In the Comment, the terms "balancing of basis sets" and "basis-set convergence" imply a particular order of limits towards a large orbital basis set: the large-orbital-basis limit is always taken first, before the large-auxiliary-basis limit, until overall convergence is achieved, at a high computational cost. The authors claim that, on physical grounds, this order of limits is not only sufficient but also necessary in order to avoid the mathematical pathologies. In response to the Comment, we remark that our paper already states that the nonanalyticity trivially disappears with large orbital basis sets. We point out that the authors of the Comment give an incorrect proof of this statement. We also show that the order of limits towards convergence of the potential is immaterial. A recent paper by the authors of the Comment proposes a partial correction for the incomplete-orbital-basis error in the full-potential linearized augmented-plane-wave method. Similar to the correction developed in our paper, this correction also benefits from an effectively complete orbital basis, even though only a finite orbital basis is employed in the calculation. This shows that it is unnecessary to take, in practice, the limit of an infinite orbital basis in order to avoid mathematical pathologies in the OEP. Our paper is a significant contribution in that direction, with general applicability to any choice of basis sets. Finally, contrary to an allusion in the abstract and assertions in the main text of the Comment that unphysical oscillations of the OEP are supposedly attributed to the common energy denominator approximation, in fact, such
NASA Astrophysics Data System (ADS)
Evarestov, R. A.; Panin, A. I.; Bandura, A. V.; Losev, M. V.
2008-06-01
The results of LCAO DFT calculations of the lattice parameters, cohesive energy, and bulk modulus of the crystalline uranium nitrides UN, U2N3, and UN2 are presented and discussed. The LCAO computer codes Gaussian03 and Crystal06 are applied. The calculations use the relativistic small-core effective potential for the uranium atom by the Stuttgart-Cologne group (60 electrons in the core) and include optimization of the U-atom basis set. The Powell, Hooke-Jeeves, conjugate gradient, and Box methods are implemented in the authors' optimization package, which is external to the codes for molecular and periodic calculations. The basis set optimization in LCAO calculations improves the agreement of the lattice parameter and bulk modulus of the UN crystal with the experimental data; the change in the cohesive energy due to the optimization is small. Mixed metallic-covalent chemical bonding is found in the LCAO calculations of both the UN and U2N3 crystals, while the UN2 crystal is semiconducting in nature.
NASA Astrophysics Data System (ADS)
Rocca, Dario
2014-05-01
A new ab initio approach is introduced to compute the correlation energy within the adiabatic-connection fluctuation-dissipation theorem in the random phase approximation. First, an optimally small basis set to represent the response functions is obtained by diagonalizing an approximate dielectric matrix containing only the kinetic-energy contribution. Then, the Lanczos algorithm is used to compute the full dynamical dielectric matrix and the correlation energy. Convergence issues with respect to the number of empty states or the dimension of the basis set are avoided, and dynamical effects are easily taken into account. To demonstrate the accuracy and efficiency of this approach, binding curves are computed for three configurations of the benzene dimer: T-shaped, sandwich, and slipped parallel.
Feller, D; Schuchardt, Karen L.; Didier, Brett T.; Elsethagen, Todd; Sun, Lisong; Gurumoorthi, Vidhya; Chase, Jared; Li, Jun
The Basis Set Exchange (BSE) provides a web-based user interface for downloading and uploading Gaussian-type orbital (GTO) basis sets, including effective core potentials (ECPs), from the EMSL Basis Set Library. It provides an improved user interface and capabilities over its predecessor, the EMSL Basis Set Order Form, for exploring the contents of the EMSL Basis Set Library. The popular Basis Set Order Form and the underlying Basis Set Library were originally developed by Dr. David Feller and have been available from the EMSL webpages since 1994. BSE not only allows downloading of the more than 500 basis sets in various formats; it also allows users to annotate existing sets and to upload new sets. (Specialized Interface)
Peverati, Roberto; Baldridge, Kim K
2009-10-13
The implementation, optimization, and performance of DFT-D, including the effects of solvation, have been tested on applications of polar processes in solution, where dispersion and hydrogen bonding are known to be involved. Solvent effects are included using our ab initio continuum solvation strategy, COSab, a conductor-like continuum solvation model modified for ab initio use in the quantum chemistry program GAMESS. Structure and properties are investigated across various functionals to evaluate their ability to properly model dispersion and solvation effects. The commonly used S22 set, with accurate interaction energies of organic complexes, is used for parametrization of the dispersion parameters and relevant solvation parameters. Dunning's correlation consistent basis sets, cc-pVnZ (n = D, T), are used in the optimization, together with Grimme's B97-D exchange-correlation functional. Both water (ε = 78.4) and ether (ε = 4.33) environments are considered. Optimized semiempirical dispersion-correction parameters and solvent-extent radii are proposed for several functionals. We find that special parametrization of the semiempirical dispersion correction is not necessary when it is used in the combined DFT-D/COSab approach. The global performance is quite acceptable in terms of chemical accuracy and suggests that this approach is a reliable as well as economical method for evaluating solvent effects in systems with dispersive interactions. The resulting theory is applied to a group of push-pull pyrrole systems to illustrate the effects of donor/acceptor substitution and solvation on their conformational and energetic properties.
Goerigk, Lars; Collyer, Charles A; Reimers, Jeffrey R
2014-12-18
We demonstrate the importance of properly accounting for London dispersion and basis-set-superposition error (BSSE) in quantum-chemical optimizations of protein structures, factors that are often still neglected in contemporary applications. We optimize a portion of an ensemble of conformationally flexible lysozyme structures obtained from highly accurate X-ray crystallography data that serve as a reliable benchmark. We not only analyze root-mean-square deviations from the experimental Cartesian coordinates, but also, for the first time, demonstrate how London dispersion and BSSE influence crystallographic R factors. Our conclusions parallel recent recommendations for the optimization of small gas-phase peptide structures made by some of the present authors: Hartree-Fock theory extended with Grimme's recent dispersion and BSSE corrections (HF-D3-gCP) is superior to popular density functional theory (DFT) approaches. Not only are statistical errors on average lower with HF-D3-gCP, but also the convergence behavior is much better. In particular, we show that the BP86/6-31G* approach should not be relied upon as a black-box method, despite its widespread use, as its success is based on an unpredictable cancellation of errors. Using HF-D3-gCP is technically straightforward, and we therefore encourage users of quantum-chemical methods to adopt this approach in future applications.
NASA Astrophysics Data System (ADS)
Spackman, Peter R.; Karton, Amir
2015-05-01
Coupled cluster calculations with all single and double excitations (CCSD) converge exceedingly slowly with the size of the one-particle basis set. We assess the performance of a number of approaches for obtaining CCSD correlation energies close to the complete-basis-set limit in conjunction with relatively small DZ and TZ basis sets. These include global and system-dependent extrapolations based on the A + B/L^α two-point extrapolation formula, and the well-known additivity approach that uses an MP2-based basis-set-correction term. We show that the basis-set convergence rate can change dramatically between different systems (e.g., it is slower for molecules with polar bonds and/or second-row elements). The system-dependent basis-set extrapolation scheme, in which unique basis-set extrapolation exponents for each system are obtained from lower-cost MP2 calculations, significantly accelerates the basis-set convergence relative to the global extrapolations. Nevertheless, we find that the simple MP2-based basis-set additivity scheme outperforms the extrapolation approaches. For example, the following root-mean-square deviations are obtained for the 140 basis-set-limit CCSD atomization energies in the W4-11 database: 9.1 (global extrapolation), 3.7 (system-dependent extrapolation), and 2.4 (additivity scheme) kJ mol^-1. The CCSD energy in these approximations is obtained from basis sets of up to TZ quality, and the latter two approaches require additional MP2 calculations with basis sets of up to QZ quality. We also assess the performance of the basis-set extrapolations and additivity schemes for a set of 20 basis-set-limit CCSD atomization energies of larger molecules, including amino acids, DNA/RNA bases, aromatic compounds, and platonic hydrocarbon cages. We obtain the following RMSDs for the above methods: 10.2 (global extrapolation), 5.7 (system-dependent extrapolation), and 2.9 (additivity scheme) kJ mol^-1.
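The two-point extrapolation formula E(L) = E_CBS + B/L^α used here can be inverted in closed form from two cardinal numbers; the sketch below does that and also writes out the MP2-based additivity correction schematically. All energies are synthetic, chosen to follow the model exactly; they are not results from the paper.

```python
# Two-point complete-basis-set (CBS) extrapolation of a correlation
# energy using the model E(L) = E_CBS + B / L**alpha, where L is the
# basis-set cardinal number (DZ -> 2, TZ -> 3).  Energies are synthetic.

def cbs_extrapolate(l1, e1, l2, e2, alpha=3.0):
    """Solve the two-point model for E_CBS."""
    p1, p2 = l1 ** alpha, l2 ** alpha
    return (p2 * e2 - p1 * e1) / (p2 - p1)

def mp2_additivity(e_ccsd_small, e_mp2_large, e_mp2_small):
    """MP2-based additivity: E_CCSD/CBS ~ E_CCSD/TZ + (E_MP2/QZ - E_MP2/TZ)."""
    return e_ccsd_small + (e_mp2_large - e_mp2_small)

# Synthetic DZ/TZ correlation energies generated from E_CBS = -1.0, B = 0.4:
e_dz = -1.0 + 0.4 / 2 ** 3
e_tz = -1.0 + 0.4 / 3 ** 3
e_cbs = cbs_extrapolate(2, e_dz, 3, e_tz)
assert abs(e_cbs - (-1.0)) < 1e-10   # recovers the model CBS limit
```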
High quality Gaussian basis sets for fourth-row atoms
NASA Technical Reports Server (NTRS)
Partridge, Harry; Faegri, Knut, Jr.
1992-01-01
Energy-optimized Gaussian basis sets of triple-zeta quality for the atoms Rb-Xe have been derived. Two series of basis sets are developed: (24s 16p 10d) and (26s 16p 10d) sets, which were expanded to 13d and 19p functions as the 4d and 5p shells become occupied. For the atoms lighter than Cd, the (24s 16p 10d) sets with triple-zeta valence distributions are higher in energy than the corresponding double-zeta distribution. To ensure a triple-zeta distribution and a global energy minimum, the (26s 16p 10d) sets were derived. Total atomic energies from the largest basis sets are between 198 and 284 μE_H above the numerical Hartree-Fock energies.
Accurate basis set truncation for wavefunction embedding
NASA Astrophysics Data System (ADS)
Barnes, Taylor A.; Goodpaster, Jason D.; Manby, Frederick R.; Miller, Thomas F.
2013-07-01
Density functional theory (DFT) provides a formally exact framework for performing embedded subsystem electronic structure calculations, including DFT-in-DFT and wavefunction theory-in-DFT descriptions. In the interest of efficiency, it is desirable to truncate the atomic orbital basis set in which the subsystem calculation is performed, thus avoiding high-order scaling with respect to the size of the MO virtual space. In this study, we extend a recently introduced projection-based embedding method [F. R. Manby, M. Stella, J. D. Goodpaster, and T. F. Miller III, J. Chem. Theory Comput. 8, 2564 (2012); doi:10.1021/ct300544e] to allow for the systematic and accurate truncation of the embedded subsystem basis set. The approach is applied to both covalently and non-covalently bound test cases, including water clusters and polypeptide chains, and it is demonstrated that errors associated with basis set truncation are controllable to well within chemical accuracy. Furthermore, we show that this approach allows for switching between accurate projection-based embedding and DFT embedding with approximate kinetic energy (KE) functionals; in this sense, the approach provides a means of systematically improving upon the use of approximate KE functionals in DFT embedding.
Many-Body Basis Set Superposition Effect.
Ouyang, John F; Bettens, Ryan P A
2015-11-10
The basis set superposition effect (BSSE) arises in electronic structure calculations of molecular clusters when questions relating to interactions between monomers within the larger cluster are asked. The binding energy, or total energy, of the cluster may be broken down into many smaller subcluster calculations and the energies of these subsystems linearly combined to, hopefully, produce the desired quantity of interest. Unfortunately, BSSE can plague these smaller fragment calculations. In this work, we carefully examine the major sources of error associated with reproducing the binding energy and total energy of a molecular cluster. In order to do so, we decompose these energies in terms of a many-body expansion (MBE), where a "body" here refers to the monomers that make up the cluster. In our analysis, we found it necessary to introduce something we designate here as a many-ghost many-body expansion (MGMBE). The work presented here produces some surprising results, but perhaps the most significant of all is that BSSE effects up to the order of truncation in a MBE of the total energy cancel exactly. In the case of the binding energy, the only BSSE correction terms remaining arise from the removal of the one-body monomer total energies. Nevertheless, our earlier work indicated that BSSE effects continued to remain in the total energy of the cluster up to very high truncation order in the MBE. We show in this work that the vast majority of these high-order many-body effects arise from BSSE associated with the one-body monomer total energies. Also, we found that, remarkably, the complete basis set limit values for the three-body and four-body interactions differed very little from that at the MP2/aug-cc-pVDZ level for the respective subclusters embedded within a larger cluster. PMID:26574311
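The many-body expansion underlying this analysis can be illustrated on a toy cluster whose energy is strictly pairwise additive, so the 2-body truncation is exact by construction (and the model has no BSSE at all). Monomer and pair energies below are invented for the illustration.

```python
# Many-body expansion (MBE) of a cluster energy: the total energy is
# written as a sum of monomer energies plus 2-body, 3-body, ... corrections.
# The toy energy below is strictly pairwise additive, so the 2-body
# truncation reproduces it exactly.
from itertools import combinations

MONOMERS = [0.0, -1.0, -2.0]                            # toy monomer energies
PAIRS = {(0, 1): -0.30, (0, 2): -0.10, (1, 2): -0.05}   # toy pair interactions

def subsystem_energy(frag):
    """Energy of the sub-cluster containing the monomers in `frag`."""
    e = sum(MONOMERS[i] for i in frag)
    e += sum(v for (i, j), v in PAIRS.items() if i in frag and j in frag)
    return e

def mbe_2body(n):
    """1-body + 2-body truncation of the many-body expansion."""
    one = sum(subsystem_energy((i,)) for i in range(n))
    two = sum(subsystem_energy((i, j))
              - subsystem_energy((i,)) - subsystem_energy((j,))
              for i, j in combinations(range(n), 2))
    return one + two

# exact for a pairwise-additive model
assert abs(mbe_2body(3) - subsystem_energy((0, 1, 2))) < 1e-12
```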
Chopped random-basis quantum optimization
Caneva, Tommaso; Calarco, Tommaso; Montangero, Simone
2011-08-15
In this work, we describe in detail the chopped random basis (CRAB) optimal control technique recently introduced to optimize time-dependent density matrix renormalization group simulations [P. Doria, T. Calarco, and S. Montangero, Phys. Rev. Lett. 106, 190501 (2011)]. Here, we study the efficiency of this control technique in optimizing different quantum processes and we show that in the considered cases we obtain results equivalent to those obtained via different optimal control methods while using less resources. We propose the CRAB optimization as a general and versatile optimal control technique.
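The CRAB technique described above expands the correction to a guess pulse in a few randomized Fourier components and hands only the expansion coefficients to the optimizer. A minimal sketch of the pulse construction (not the optimization loop) follows; frequencies and coefficients are illustrative.

```python
# Chopped random basis (CRAB) parametrization of a control pulse:
# f(t) = f0(t) * (1 + sum_n a_n sin(w_n t) + b_n cos(w_n t)), with the
# frequencies w_n randomized around the principal harmonics 2*pi*k/T.
# Only (a_n, b_n) would be optimized; values here are illustrative.
import math
import random

def crab_pulse(t, f0, coeffs, freqs):
    """Evaluate the CRAB-corrected pulse at time t."""
    corr = sum(a * math.sin(w * t) + b * math.cos(w * t)
               for (a, b), w in zip(coeffs, freqs))
    return f0(t) * (1.0 + corr)

random.seed(1)
T = 1.0            # total pulse duration
n_modes = 3
freqs = [2.0 * math.pi * (k + 1 + random.uniform(-0.5, 0.5)) / T
         for k in range(n_modes)]
coeffs = [(0.10, -0.05)] * n_modes

def guess(t):
    """Guess pulse vanishing at t = 0 and t = T."""
    return math.sin(math.pi * t / T)

# multiplying the guess keeps the boundary values pinned (near) zero
assert abs(crab_pulse(0.0, guess, coeffs, freqs)) < 1e-12
assert abs(crab_pulse(T, guess, coeffs, freqs)) < 1e-9
```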
Gravitational Lens Modeling with Basis Sets
NASA Astrophysics Data System (ADS)
Birrer, Simon; Amara, Adam; Refregier, Alexandre
2015-11-01
We present a strong lensing modeling technique based on versatile basis sets for the lens and source planes. Our method uses high performance Monte Carlo algorithms, allows for an adaptive build up of complexity, and bridges the gap between parametric and pixel based reconstruction methods. We apply our method to a Hubble Space Telescope image of the strong lens system RX J1131-1231 and show that our method finds a reliable solution and is able to detect substructure in the lens and source planes simultaneously. Using mock data, we show that our method is sensitive to sub-clumps with masses four orders of magnitude smaller than the main lens, which corresponds to about 10^8 M⊙, without prior knowledge of the position and mass of the sub-clump. The modeling approach is flexible and maximizes automation to facilitate the analysis of the large number of strong lensing systems expected in upcoming wide field surveys. The resulting search for dark sub-clumps in these systems, without mass-to-light priors, offers promise for probing physics beyond the standard model in the dark matter sector.
Correlation consistent basis sets for the atoms In–Xe
Mahler, Andrew; Wilson, Angela K.
2015-02-28
In this work, the correlation consistent family of Gaussian basis sets has been expanded to include all-electron basis sets for In–Xe. The methodology for developing these basis sets is described, and several examples of the performance and utility of the new sets have been provided. Dissociation energies and bond lengths for both homonuclear and heteronuclear diatomics demonstrate the systematic convergence behavior with respect to increasing basis set quality expected by the family of correlation consistent basis sets in describing molecular properties. Comparison with recently developed correlation consistent sets designed for use with the Douglas-Kroll Hamiltonian is provided.
QUALITY: A program to assess basis set quality
NASA Astrophysics Data System (ADS)
Sordo, J. A.
1998-09-01
A program for detailed analysis of basis-set quality is presented. The information provided by a wide variety of (atomic and/or molecular) quality criteria is processed using a methodology that allows one to determine the most appropriate quality test for selecting a basis set to compute a given (atomic or molecular) property. Fuzzy set theory is used to choose the most adequate basis set for computing a set of properties simultaneously.
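The fuzzy-set selection idea can be sketched as follows: each basis set receives a membership value in [0, 1] per quality criterion, and the fuzzy intersection (the minimum over the requested properties) ranks the candidates. The basis-set names and scores below are hypothetical illustrations, not output of the QUALITY program.

```python
# Fuzzy-set selection of a basis set for several properties at once.
# Membership values (1 = best on that criterion) are invented for the
# illustration; "cost" rewards cheaper bases.

SCORES = {
    "small":  {"energy": 0.90, "dipole": 0.40, "polarizability": 0.20, "cost": 1.00},
    "medium": {"energy": 0.80, "dipole": 0.70, "polarizability": 0.60, "cost": 0.70},
    "large":  {"energy": 0.95, "dipole": 0.90, "polarizability": 0.85, "cost": 0.20},
}

def best_basis(scores, properties):
    """Basis whose worst membership over the requested properties is highest."""
    return max(scores, key=lambda b: min(scores[b][p] for p in properties))

print(best_basis(SCORES, ["energy", "cost"]))            # -> small
print(best_basis(SCORES, ["polarizability", "cost"]))    # -> medium
```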
Tests for Wavelets as a Basis Set
NASA Astrophysics Data System (ADS)
Baker, Thomas; Evenbly, Glen; White, Steven
A wavelet transformation is a special type of filter usually reserved for image processing and other applications. We develop metrics to evaluate wavelets for general problems on one-dimensional test systems. The goal is to eventually use a wavelet basis in electronic structure calculations. We compare a variety of orthogonal wavelets, such as coiflets, symlets, and Daubechies wavelets. We also evaluate a new type of orthogonal wavelet with dilation factor three that is both symmetric and compact in real space. This work was supported by the U.S. Department of Energy, Office of Science, Basic Energy Sciences under Award #DE-SC008696.
Basis Set Exchange: A Community Database for Computational Sciences
Schuchardt, Karen L.; Didier, Brett T.; Elsethagen, Todd O.; Sun, Lisong; Gurumoorthi, Vidhya; Chase, Jared M.; Li, Jun; Windus, Theresa L.
2007-05-01
Basis sets are one of the most important input data for computational models in the chemistry, materials, biology and other science domains that utilize computational quantum mechanics methods. Providing a shared, web accessible environment where researchers can not only download basis sets in their required format, but browse the data, contribute new basis sets, and ultimately curate and manage the data as a community will facilitate growth of this resource and encourage sharing both data and knowledge. We describe the Basis Set Exchange (BSE), a web portal that provides advanced browsing and download capabilities, facilities for contributing basis set data, and an environment that incorporates tools to foster development and interaction of communities. The BSE leverages and enables continued development of the basis set library originally assembled at the Environmental Molecular Sciences Laboratory.
Optimality criteria: A basis for multidisciplinary design optimization
NASA Astrophysics Data System (ADS)
Venkayya, V. B.
1989-01-01
This paper presents a generalization of what is frequently referred to in the literature as the optimality criteria approach in structural optimization. This generalization includes a unified presentation of the optimality conditions, the Lagrangian multipliers, and the resizing and scaling algorithms in terms of the sensitivity derivatives of the constraint and objective functions. A by-product of this generalization is the derivation of a set of simple nondimensional parameters that provides significant insight into the behavior of the structure as well as the optimization algorithm. A number of important issues, such as active and passive variables, constraints, and three types of linking, are discussed in the context of the present derivation of the optimality criteria approach. The formulation as presented in this paper brings multidisciplinary optimization within the purview of this extremely efficient optimality criteria approach.
Near Hartree-Fock quality GTO basis sets for the second-row atoms
NASA Technical Reports Server (NTRS)
Partridge, Harry
1987-01-01
Energy-optimized, near Hartree-Fock quality Gaussian basis sets ranging in size from (17s12p) to (20s15p) are presented for the ground states of the second-row atoms, and for Na(2P), Na(+), Na(-), Mg(3P), P(-), S(-), and Cl(-). In addition, optimized supplementary functions are given for the ground-state basis sets to describe the negative ions and the excited Na(2P) and Mg(3P) atomic states. The ratios of successive orbital exponents describing the inner part of the 1s and 2p orbitals are found to be nearly independent of both nuclear charge and basis set size. This provides a method of obtaining good starting estimates for other basis set optimizations.
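The near-constant ratio of successive exponents noted here is the observation behind even-tempered expansions, ζ_k = α β^k. A minimal sketch of generating such starting guesses follows; the α and β values are illustrative, not the paper's optimized parameters.

```python
# Even-tempered starting guesses for exponent optimization: a geometric
# progression zeta_k = alpha * beta**k, motivated by the near-constant
# ratio of successive exponents reported above.  alpha and beta here are
# illustrative values only.

def even_tempered(alpha, beta, n):
    """Return n exponents in a geometric progression (largest first for beta < 1)."""
    return [alpha * beta ** k for k in range(n)]

zetas = even_tempered(alpha=5.0e5, beta=0.35, n=6)
ratios = [zetas[k + 1] / zetas[k] for k in range(len(zetas) - 1)]
assert all(abs(r - 0.35) < 1e-12 for r in ratios)   # constant by construction
```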
The ORP basis set designed for optical rotation calculations.
Baranowska-Łączkowska, Angelika; Łączkowski, Krzysztof Z
2013-09-01
Details of generation of the optical rotation prediction (ORP) basis set developed for accurate optical rotation (OR) calculations are presented. Specific rotation calculations carried out at the density functional theory (DFT) level for model chiral methane molecule, fluorooxirane, methyloxirane, and dimethylmethylenecyclopropane reveal that the ORP set outperforms larger basis sets, among them the aug-cc-pVTZ basis set of Dunning (J. Chem. Phys. 1989, 90, 1007) and the aug-pc-2 basis set of Jensen (J. Chem. Phys. 2002, 117, 9234; J. Chem. Theory Comput. 2008, 4, 719). It is shown to be an attractive choice also in the case of larger systems, namely norbornanone, β-pinene, trans-pinane, and nopinone. The ORP basis set is further used in OR calculations for 24 other systems, and the results are compared to the aug-cc-pVDZ values. Whenever large discrepancies of results are observed, the ORP values are in an excellent agreement with the aug-cc-pVTZ results. The ORP basis set enables accurate specific rotation calculations at a reduced cost and thus can be recommended for routine DFT OR calculations, also for large and conformationally flexible molecules.
Hill, J Grant
2011-07-28
Auxiliary basis sets specifically matched to the correlation consistent cc-pVnZ-PP, cc-pwCVnZ-PP, aug-cc-pVnZ-PP, and aug-cc-pwCVnZ-PP orbital basis sets (used in conjunction with pseudopotentials) for the 5d transition metal elements Hf-Pt have been optimized for use in density fitting second-order Møller-Plesset perturbation theory and other correlated ab initio methods. Calculations of the second-order Møller-Plesset perturbation theory correlation energy, for a test set of small to medium sized molecules, indicate that the density fitting error when utilizing these sets is negligible, being three to four orders of magnitude smaller than the orbital basis set incompleteness error.
Hill, J. Grant E-mail: kipeters@wsu.edu; Peterson, Kirk A. E-mail: kipeters@wsu.edu
2014-09-07
New correlation consistent basis sets, cc-pVnZ-PP-F12 (n = D, T, Q), for all the post-d main group elements Ga–Rn have been optimized for use in explicitly correlated F12 calculations. The new sets, which include not only orbital basis sets but also the matching auxiliary sets required for density fitting both conventional and F12 integrals, are designed for correlation of valence sp, as well as the outer-core d electrons. The basis sets are constructed for use with the previously published small-core relativistic pseudopotentials of the Stuttgart-Cologne variety. Benchmark explicitly correlated coupled-cluster singles and doubles with perturbative triples [CCSD(T)-F12b] calculations of the spectroscopic properties of numerous diatomic molecules involving 4p, 5p, and 6p elements have been carried out and compared to the analogous conventional CCSD(T) results. In general the F12 results obtained with a n-zeta F12 basis set were comparable to conventional aug-cc-pVxZ-PP or aug-cc-pwCVxZ-PP basis set calculations obtained with x = n + 1 or even x = n + 2. The new sets used in CCSD(T)-F12b calculations are particularly efficient at accurately recovering the large correlation effects of the outer-core d electrons.
Point Set Denoising Using Bootstrap-Based Radial Basis Function.
Liew, Khang Jie; Ramli, Ahmad; Abd Majid, Ahmad
2016-01-01
This paper examines the application of a bootstrap test error estimation of radial basis functions, specifically thin-plate spline fitting, in surface smoothing. The presence of noisy data is a common issue of the point set model that is generated from 3D scanning devices, and hence, point set denoising is one of the main concerns in point set modelling. Bootstrap test error estimation, which is applied when searching for the smoothing parameters of radial basis functions, is revisited. The main contribution of this paper is a smoothing algorithm that relies on a bootstrap-based radial basis function. The proposed method incorporates a k-nearest neighbour search and then projects the point set to the approximated thin-plate spline surface. Therefore, the denoising process is achieved, and the features are well preserved. A comparison of the proposed method with other smoothing methods is also carried out in this study.
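The k-nearest-neighbour search step mentioned above can be sketched in a few lines. The brute-force `knn` helper and the tiny point cloud below are hypothetical illustrations, not the authors' implementation.

```python
# Brute-force k-nearest-neighbour search of the kind used to pick
# local neighbourhoods before projecting each point onto the fitted
# thin-plate spline surface. Points are made-up 3D samples, not
# scanner data.

def knn(points, query, k):
    """Return the k points closest to `query` by squared distance."""
    dist2 = lambda p: sum((a - b) ** 2 for a, b in zip(p, query))
    return sorted(points, key=dist2)[:k]

cloud = [(0, 0, 0), (1, 0, 0), (0, 2, 0), (5, 5, 5)]
print(knn(cloud, query=(0.1, 0.1, 0.0), k=2))  # [(0, 0, 0), (1, 0, 0)]
```

Real point-set models are large, so a k-d tree or grid index would replace the O(n log n) sort, but the neighbourhood logic is the same.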
Basis-set extensions for two-component spin-orbit treatments of heavy elements.
Armbruster, Markus K; Klopper, Wim; Weigend, Florian
2006-11-14
The accuracy of standard basis sets of quadruple-zeta and lower quality for the use in two-component self-consistent field procedures including spin-orbit coupling is investigated for the elements In-I and Au-At. Spin-orbit coupling leads to energetic and spatial splittings of inner shells, which are not described accurately with standard basis sets optimized for scalar relativistic calculations. This results in large errors in total atomic energies and significant errors in atomization energies of compounds containing these atoms. We show how these errors can be corrected by adding just a few steep sets of basis functions and demonstrate the quality of the resulting extended basis sets. PMID:17066175
Plumley, Joshua A; Dannenberg, J J
2011-06-01
We evaluate the performance of ten functionals (B3LYP, M05, M05-2X, M06, M06-2X, B2PLYP, B2PLYPD, X3LYP, B97D, and MPWB1K) in combination with 16 basis sets ranging in complexity from 6-31G(d) to aug-cc-pV5Z for the calculation of the H-bonded water dimer with the goal of defining which combinations of functionals and basis sets provide a combination of economy and accuracy for H-bonded systems. We have compared the results to the best non-density functional theory (non-DFT) molecular orbital (MO) calculations and to experimental results. Several of the smaller basis sets lead to qualitatively incorrect geometries when optimized on a normal potential energy surface (PES). This problem disappears when the optimization is performed on a counterpoise (CP) corrected PES. The calculated interaction energies (ΔEs) with the largest basis sets vary from -4.42 (B97D) to -5.19 (B2PLYPD) kcal/mol for the different functionals. Small basis sets generally predict stronger interactions than the large ones. We found that, because of error compensation, the smaller basis sets gave the best results (in comparison to experimental and high-level non-DFT MO calculations) when combined with a functional that predicts a weak interaction with the largest basis set. As many applications involve complex systems and require economical calculations, we suggest the following functional/basis set combinations in order of increasing complexity and cost: (1) D95(d,p) with B3LYP, B97D, M06, or MPWB1K; (2) 6-311G(d,p) with B3LYP; (3) D95++(d,p) with B3LYP, B97D, or MPWB1K; (4) 6-311++G(d,p) with B3LYP or B97D; and (5) aug-cc-pVDZ with M05-2X, M06-2X, or X3LYP.
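The counterpoise (CP) correction referred to above is simple arithmetic once the three energies are in hand: the monomer energies are recomputed in the full dimer basis so that basis set superposition error cancels. A sketch with made-up placeholder energies in kcal/mol, not results from the study:

```python
# Counterpoise-corrected interaction energy:
#   dE(CP) = E(AB) - E(A, AB basis) - E(B, AB basis)
# All energies below are hypothetical illustrations.

def cp_interaction_energy(e_dimer, e_a_dimer_basis, e_b_dimer_basis):
    """Interaction energy with monomers evaluated in the dimer basis."""
    return e_dimer - e_a_dimer_basis - e_b_dimer_basis

dE = cp_interaction_energy(-10.0, -2.5, -2.5)
print(dE)  # -5.0
```

Optimizing on the CP-corrected PES means applying this correction at every geometry during the optimization, which is what removes the qualitatively incorrect small-basis geometries noted above.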
Full Waveform Inversion with Optimal Basis Functions
NASA Astrophysics Data System (ADS)
Sun, Gang; Chang, Qianshun; Sheng, Ping
2003-03-01
Based on the approach suggested by Tarantola, and Gauthier et al., we show that the alternate use of the step (linear) function basis and the block function (quasi-δ function) basis can give accurate full waveform inversion results for the layered acoustic systems, starting from a uniform background. Our method is robust against additive white noise (up to 20% of the signal) and can resolve layers that are comparable to or smaller than a wavelength in thickness. The physical reason for the success of our approach is illustrated through a simple example.
Informatics-Based Energy Fitting Scheme for Correlation Energy at Complete Basis Set Limit.
Seino, Junji; Nakai, Hiromi
2016-09-30
Energy fitting schemes based on informatics techniques using hierarchical basis sets with small cardinal numbers were numerically investigated to estimate correlation energies at the complete basis set limits. Numerical validations confirmed that the conventional two-point extrapolation models can be unified into a simple formula with optimal parameters obtained by the same test sets. The extrapolation model was extended to two-point fitting models by a relaxation of the relationship between the extrapolation coefficients or a change of the fitting formula. Furthermore, n-scheme fitting models were developed by the combinations of results calculated at several theory levels and basis sets to compensate for the deficiencies in the fitting model at one level of theory. Systematic assessments on the Gaussian-3X and Gaussian-2 sets revealed that the fitting models drastically reduced errors with equal or smaller computational effort. © 2016 Wiley Periodicals, Inc. PMID:27454327
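For reference, a standard two-point extrapolation of the kind these fitting models unify assumes E_corr(X) = E_CBS + A*X**(-3) for cardinal number X, and solves for E_CBS from two calculations. The X^-3 form below is one common choice, not necessarily the parameterization of the paper, and the input energies are illustrative:

```python
# Two-point X^-3 extrapolation of correlation energies to the
# complete basis set (CBS) limit. Input energies (hartree) are
# hypothetical placeholders.

def cbs_two_point(e_x, x, e_y, y):
    """Solve E(X) = E_CBS + A*X**-3 for E_CBS given two points."""
    return (x**3 * e_x - y**3 * e_y) / (x**3 - y**3)

# e.g. triple-zeta (X = 3) and quadruple-zeta (Y = 4) energies
e_cbs = cbs_two_point(-0.300, 3, -0.320, 4)
# The extrapolated value lies below (is more negative than) the
# larger-basis energy, as expected for a convergent series.
```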
The Neural Basis of Optimism and Pessimism
2013-01-01
Our survival and wellness require a balance between optimism and pessimism. Undue pessimism makes life miserable; however, excessive optimism can lead to dangerously risky behaviors. A review and synthesis of the literature on the neurophysiology subserving these two worldviews suggests that optimism and pessimism are differentially associated with the two cerebral hemispheres. High self-esteem, a cheerful attitude that tends to look at the positive aspects of a given situation, as well as an optimistic belief in a bright future are associated with physiological activity in the left-hemisphere (LH). In contrast, a gloomy viewpoint, an inclination to focus on the negative part and exaggerate its significance, low self-esteem as well as a pessimistic view on what the future holds are interlinked with neurophysiological processes in the right-hemisphere (RH). This hemispheric asymmetry in mediating optimistic and pessimistic outlooks is rooted in several biological and functional differences between the two hemispheres. The RH mediation of a watchful and inhibitive mode weaves a sense of insecurity that generates and supports pessimistic thought patterns. Conversely, the LH mediation of an active mode and the positive feedback it receives through its motor dexterity breed a sense of confidence in one's ability to manage life's challenges, and optimism about the future. PMID:24167413
Scar Functions, Barriers for Chemical Reactivity, and Vibrational Basis Sets.
Revuelta, F; Vergini, E; Benito, R M; Borondo, F
2016-07-14
The performance of a recently proposed method to efficiently calculate scar functions is analyzed in problems of chemical interest. An application to the computation of wave functions associated with barriers relevant for the LiNC ⇄ LiCN isomerization reaction is presented as an illustration. These scar functions also constitute excellent elements for basis sets suitable for quantum calculation of vibrational energy levels. To illustrate their efficiency, a calculation of the LiNC/LiCN eigenfunctions is also presented.
Spackman, Peter R; Jayatilaka, Dylan; Karton, Amir
2016-09-14
We examine the basis set convergence of the CCSD(T) method for obtaining the structures of the 108 neutral first- and second-row species in the W4-11 database (with up to five non-hydrogen atoms). This set includes a total of 181 unique bonds: 75 H-X, 49 X-Y, 43 X=Y, and 14 X≡Y bonds (where X and Y are first- and second-row atoms). As reference values, geometries optimized at the CCSD(T)/aug'-cc-pV(6+d)Z level of theory are used. We consider the basis set convergence of the CCSD(T) method with the correlation consistent basis sets cc-pV(n+d)Z and aug'-cc-pV(n+d)Z (n = D, T, Q, 5) and the Weigend-Ahlrichs def2-nZVPP basis sets (n = T, Q). For each increase in the highest angular momentum present in the basis set, the root-mean-square deviation (RMSD) over the bond distances is decreased by a factor of ∼4. For example, the following RMSDs are obtained for the cc-pV(n+d)Z basis sets: 0.0196 (D), 0.0050 (T), 0.0015 (Q), and 0.0004 (5) Å. Similar results are obtained for the aug'-cc-pV(n+d)Z and def2-nZVPP basis sets. The double-zeta and triple-zeta quality basis sets systematically and significantly overestimate the bond distances. A simple and cost-effective way to improve the performance of these basis sets is to scale the bond distances by an empirical scaling factor of 0.9865 (cc-pV(D+d)Z) and 0.9969 (cc-pV(T+d)Z). This results in RMSDs of 0.0080 (scaled cc-pV(D+d)Z) and 0.0029 (scaled cc-pV(T+d)Z) Å. The basis set convergence of larger basis sets can be accelerated via standard basis-set extrapolations. In addition, the basis set convergence of explicitly correlated CCSD(T)-F12 calculations is investigated in conjunction with the cc-pVnZ-F12 basis sets (n = D, T). Typically, one "gains" two angular momenta in the explicitly correlated calculations. That is, the CCSD(T)-F12/cc-pVnZ-F12 level of theory shows similar performance to the CCSD(T)/cc-pV(n+2)Z level of theory. In particular, the following RMSDs are obtained for the cc-pVnZ-F12 basis sets 0.0019 (D
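The empirical bond-distance scaling described in the abstract above is a one-line correction followed by an RMSD check. The bond lengths below are invented for illustration, not W4-11 data; only the 0.9865 double-zeta scaling factor is taken from the abstract.

```python
# Scale double-zeta bond distances by an empirical factor and
# compare RMSDs before and after. Distances (angstrom) are
# hypothetical examples of systematic double-zeta overestimation.
import math

def rmsd(calc, ref):
    return math.sqrt(sum((c - r) ** 2 for c, r in zip(calc, ref)) / len(calc))

ref    = [1.100, 1.210, 1.350]          # reference bond distances
dz     = [1.115, 1.228, 1.368]          # overestimated double-zeta values
scaled = [d * 0.9865 for d in dz]       # factor from the abstract

print(rmsd(dz, ref), rmsd(scaled, ref))  # scaling reduces the RMSD
```

The scaling works because the double-zeta error is systematic (all bonds too long by a roughly constant fraction), so a single multiplicative factor removes most of it.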
Auxiliary Basis Sets for Density Fitting in Explicitly Correlated Calculations: The Atoms H-Ar.
Kritikou, Stella; Hill, J Grant
2015-11-10
Auxiliary basis sets specifically matched to the correlation consistent cc-pVnZ-F12 and cc-pCVnZ-F12 orbital basis sets for the elements H-Ar have been optimized at the density-fitted second-order Møller-Plesset perturbation theory level of theory for use in explicitly correlated (F12) methods, which utilize density fitting for the evaluation of two-electron integrals. Calculations of the correlation energy for a test set of small to medium sized molecules indicate that the density fitting error when using these auxiliary sets is 2 to 3 orders of magnitude smaller than the F12 orbital basis set incompleteness error. The error introduced by the use of these fitting sets within the resolution-of-the-identity approximation of the many-electron integrals arising in F12 theory has also been assessed and is demonstrated to be negligible and well-controlled. General guidelines are proposed for the optimization of density fitting auxiliary basis sets for use with F12 methods for other elements.
Spin-Polarized Nonadiabatic Dynamics with Local Basis Sets
NASA Astrophysics Data System (ADS)
Hoyt, Robert; Kolesov, Grigory; Tritsaris, Georgios; Granas, Oscar; Kaxiras, Efthimios
Accurate simulations of electron transfer at the solid-electrolyte interphase (SEI) are critical for understanding and predicting electrochemical reactions. Density-Functional Theory (DFT) has been widely applied to study the ground-state structure of novel materials for electrochemical energy storage and for adiabatic molecular dynamics. Unfortunately, many chemical reactions take place on femtosecond time scales where assuming adiabatic electron density propagation is invalid. To resolve this, we have developed the ability to perform time-dependent DFT calculations using nonadiabatic propagation of the electron density along with Ehrenfest dynamics for the ions to better capture the complex interactions that occur during surface-electrolyte electron transfer and chemical reactions. Spin polarization is also implemented to improve the accuracy of the simulations and their suitability for studying a wider range of chemical reactions. Local basis sets are implemented via linear combinations of atomic orbitals to reduce the size of the DFT basis for computational efficiency.
Coupled-cluster based basis sets for valence correlation calculations
NASA Astrophysics Data System (ADS)
Claudino, Daniel; Gargano, Ricardo; Bartlett, Rodney J.
2016-03-01
Novel basis sets are generated that target the description of valence correlation in atoms H through Ar. The new contraction coefficients are obtained according to the Atomic Natural Orbital (ANO) procedure from CCSD(T) (coupled-cluster singles and doubles with perturbative triples correction) density matrices starting from the primitive functions of Dunning et al. [J. Chem. Phys. 90, 1007 (1989); ibid. 98, 1358 (1993); ibid. 100, 2975 (1993)] (correlation consistent polarized valence X-tuple zeta, cc-pVXZ). The exponents of the primitive Gaussian functions are subject to uniform scaling in order to ensure satisfaction of the virial theorem for the corresponding atoms. These new sets, named ANO-VT-XZ (Atomic Natural Orbital Virial Theorem X-tuple Zeta), have the same number of contracted functions as their cc-pVXZ counterparts in each subshell. The performance of these basis sets is assessed by the evaluation of the contraction errors in four distinct computations: correlation energies in atoms, probing the density in different regions of space via ⟨r(n)⟩ (-3 ≤ n ≤ 3) in atoms, correlation energies in diatomic molecules, and the quality of fitting potential energy curves as measured by spectroscopic constants. All energy calculations with ANO-VT-QZ have contraction errors within "chemical accuracy" of 1 kcal/mol, which is not true for cc-pVQZ, suggesting some improvement compared to the correlation consistent series of Dunning and co-workers.
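The virial-theorem scaling used to build the ANO-VT-XZ sets rests on a textbook fact: under a uniform coordinate scale factor s (equivalently, multiplying all Gaussian exponents by s²), the energy transforms as E(s) = s²T + sV, and minimizing over s enforces the virial ratio -V/T = 2. A sketch with made-up T and V values, not the paper's atomic data:

```python
# Uniform virial-theorem scaling: given kinetic energy T and
# potential energy V of a trial wavefunction, the optimal scale
# factor s = -V / (2T) minimizes E(s) = s**2 * T + s * V and makes
# the rescaled energies satisfy -V/T = 2. T and V are placeholders.

def virial_scale(T, V):
    s = -V / (2.0 * T)
    T_s, V_s = s**2 * T, s * V
    return s, T_s, V_s

s, T_s, V_s = virial_scale(T=0.25, V=-1.0)   # hypothetical, unbalanced T and V
print(s, -V_s / T_s)  # 2.0 2.0
```

In the basis-set context, applying this s² factor to every primitive exponent is what "uniform scaling to satisfy the virial theorem" amounts to.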
Hellweg, Arnim; Rappoport, Dmitrij
2015-01-14
We report optimized auxiliary basis sets for use with the Karlsruhe segmented contracted basis sets including moderately diffuse basis functions (Rappoport and Furche, J. Chem. Phys., 2010, 133, 134105) in resolution-of-the-identity (RI) post-self-consistent field (post-SCF) computations for the elements H-Rn (except lanthanides). The errors of the RI approximation using optimized auxiliary basis sets are analyzed on a comprehensive test set of molecules containing the most common oxidation states of each element and do not exceed those of the corresponding unaugmented basis sets. During these studies an unsatisfactory performance of the def2-SVP and def2-QZVPP auxiliary basis sets for barium was found and improved sets are provided. We establish the versatility of the def2-SVPD, def2-TZVPPD, and def2-QZVPPD basis sets for RI-MP2 and RI-CC (coupled-cluster) energy and property calculations. The influence of diffuse basis functions on correlation energy, basis set superposition error, atomic electron affinity, dipole moments, and computational timings is evaluated at different levels of theory using benchmark sets and showcase examples.
Roscioni, Otello M; Lee, Edmond P F; Dyke, John M
2012-10-01
We present a set of effective core potential (ECP) basis sets for rhodium atoms that are of reasonable size for use in electronic structure calculations. In these ECP basis sets, the Los Alamos ECP is used to simulate the effect of the core electrons while an optimized set of Gaussian functions, which includes polarization and diffuse functions, is used to describe the valence electrons. These basis sets were optimized to reproduce the ionization energy and electron affinity of atomic rhodium. They were also tested by computing the electronic ground state geometry and harmonic frequencies of [Rh(CO)2(μ-Cl)]2, Rh(CO)2ClPy, and RhCO (neutral, and its positive and negative ions), as well as the enthalpy of the reaction of [Rh(CO)2(μ-Cl)]2 with pyridine (Py) to give Rh(CO)2ClPy, at different levels of theory. Good agreement with experimental values was obtained. Although the number of basis functions used in our ECP basis sets is smaller than those of other ECP basis sets of comparable quality, we show that the newly developed ECP basis sets provide the flexibility and precision required to reproduce a wide range of chemical and physical properties of rhodium compounds. Therefore, we recommend the use of these compact yet accurate ECP basis sets for electronic structure calculations on molecules involving rhodium atoms.
Tesch, Carmen M; de Vivie-Riedle, Regina
2004-12-22
The phase of quantum gates is one key issue for the implementation of quantum algorithms. In this paper we first investigate the phase evolution of global molecular quantum gates, which are realized by optimally shaped femtosecond laser pulses. The specific laser fields are calculated using the multitarget optimal control algorithm, our modification of optimal control theory relevant for applications in quantum computing. As qubit system we use vibrational modes of polyatomic molecules, here the two IR-active modes of acetylene. As an example, we present our results for a Pi gate, which shows a strong dependence on the phase, leading to a significant decrease in quantum yield. To correct for this unwanted behavior, we include pressure on the quantum phase in our multitarget approach. In addition, the accuracy of these phase-corrected global quantum gates is enhanced. Furthermore, we show that in our molecular approach phase-corrected quantum gates and basis set independence are directly linked. Basis set independence is another property highly required for the performance of quantum algorithms. By realizing the Deutsch-Jozsa algorithm in our two-qubit molecular model system, we demonstrate the good performance of our phase-corrected and basis-set-independent quantum gates.
Optimal Piecewise Linear Basis Functions in Two Dimensions
Brooks III, E D; Szoke, A
2009-01-26
We use a variational approach to optimize the center point coefficients associated with the piecewise linear basis functions introduced by Stone and Adams [1], for polygonal zones in two Cartesian dimensions. Our strategy provides optimal center point coefficients, as a function of the location of the center point, by minimizing the error induced when the basis function interpolation is used for the solution of the time independent diffusion equation within the polygonal zone. By using optimal center point coefficients, one expects to minimize the errors that occur when these basis functions are used to discretize diffusion equations, or transport equations in optically thick zones (where they approach the solution of the diffusion equation). Our optimal center point coefficients satisfy the requirements placed upon the basis functions for any location of the center point. We also find that the location of the center point can be optimized, but this requires numerical calculations. Curiously, the optimum center point location is independent of the values of the dependent variable on the corners only for quadrilaterals.
NASA Astrophysics Data System (ADS)
Abrams, Micah L.; Sherrill, C. David
2003-01-01
We compare several standard polarized double-zeta basis sets for use in full configuration interaction benchmark computations. The 6-31G**, DZP, cc-pVDZ, and Widmark-Malmqvist-Roos atomic natural orbital (ANO) basis sets are assessed on the basis of their ability to provide accurate full configuration interaction spectroscopic constants for several small molecules. Even though highly correlated methods work best with larger basis sets, predicted spectroscopic constants are in good agreement with experiment; bond lengths and harmonic vibrational frequencies have average absolute errors no larger than 0.017 Å and 1.6%, respectively, for all but the ANO basis. For the molecules considered, 6-31G** gives the smallest average errors, while the ANO basis set gives the largest. The use of variationally optimized basis sets and natural orbitals are also explored for improved benchmarking. Although optimized basis sets do not always improve predictions of molecular properties, taking a DZP-sized subset of the natural orbitals from a singles and doubles configuration interaction computation in a larger basis significantly improves results.
NASA Astrophysics Data System (ADS)
Mao, Yuezhi; Horn, Paul R.; Mardirossian, Narbe; Head-Gordon, Teresa; Skylaris, Chris-Kriton; Head-Gordon, Martin
2016-07-01
Recently developed density functionals have good accuracy for both thermochemistry (TC) and non-covalent interactions (NC) if very large atomic orbital basis sets are used. To approach the basis set limit with potentially lower computational cost, a new self-consistent field (SCF) scheme is presented that employs minimal adaptive basis (MAB) functions. The MAB functions are optimized on each atomic site by minimizing a surrogate function. High accuracy is obtained by applying a perturbative correction (PC) to the MAB calculation, similar to dual basis approaches. Compared to exact SCF results, using this MAB-SCF (PC) approach with the same large target basis set produces <0.15 kcal/mol root-mean-square deviations for most of the tested TC datasets, and <0.1 kcal/mol for most of the NC datasets. The performance of density functionals near the basis set limit can be even better reproduced. With further improvement to its implementation, MAB-SCF (PC) is a promising lower-cost substitute for conventional large-basis calculations as a method to approach the basis set limit of modern density functionals.
Basis set expansion for inverse problems in plasma diagnostic analysis.
Jones, B; Ruiz, C L
2013-07-01
A basis set expansion method [V. Dribinski, A. Ossadtchi, V. A. Mandelshtam, and H. Reisler, Rev. Sci. Instrum. 73, 2634 (2002)] is applied to recover physical information about plasma radiation sources from instrument data, which has been forward transformed due to the nature of the measurement technique. This method provides a general approach for inverse problems, and we discuss two specific examples relevant to diagnosing fast z pinches on the 20-25 MA Z machine [M. E. Savage, L. F. Bennett, D. E. Bliss, W. T. Clark, R. S. Coats, J. M. Elizondo, K. R. LeChien, H. C. Harjes, J. M. Lehr, J. E. Maenchen, D. H. McDaniel, M. F. Pasik, T. D. Pointon, A. C. Owen, D. B. Seidel, D. L. Smith, B. S. Stoltzfus, K. W. Struve, W. A. Stygar, L. K. Warne, J. R. Woodworth, C. W. Mendel, K. R. Prestwich, R. W. Shoup, D. L. Johnson, J. P. Corley, K. C. Hodge, T. C. Wagoner, and P. E. Wakeland, in Proceedings of the Pulsed Power Plasma Sciences Conference (IEEE, 2007), p. 979]. First, Abel inversion of time-gated, self-emission x-ray images from a wire array implosion is studied. Second, we present an approach for unfolding neutron time-of-flight measurements from a deuterium gas puff z pinch to recover information about emission time history and energy distribution. Through these examples, we discuss how noise in the measured data limits the practical resolution of the inversion, and how the method handles discontinuities in the source function and artifacts in the projected image. We add to the method a propagation of errors calculation for estimating uncertainties in the inverted solution.
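The basis set expansion approach cited here (Dribinski et al.) rests on choosing radial basis functions whose forward (Abel) transform is known in closed form, so the inverse problem reduces to fitting expansion coefficients. As an illustration of that one ingredient, unrelated to the authors' actual implementation, a Gaussian radial function has an analytic Abel transform that direct quadrature reproduces:

```python
import math

def gaussian(r, sigma=1.0):
    """Radial basis function f(r) = exp(-r^2 / sigma^2)."""
    return math.exp(-(r * r) / (sigma * sigma))

def abel_forward_numeric(y, sigma=1.0, upper=10.0, n=4000):
    """Forward Abel transform F(y) = 2 * Int_y^inf f(r) r dr / sqrt(r^2 - y^2).

    The substitution t = sqrt(r^2 - y^2) removes the integrable singularity,
    giving F(y) = 2 * Int_0^inf f(sqrt(t^2 + y^2)) dt, evaluated here by the
    composite midpoint rule.
    """
    h = upper / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        total += gaussian(math.sqrt(t * t + y * y), sigma)
    return 2.0 * total * h

def abel_forward_analytic(y, sigma=1.0):
    """Closed form for the Gaussian: F(y) = sigma * sqrt(pi) * exp(-y^2/sigma^2)."""
    return sigma * math.sqrt(math.pi) * math.exp(-(y * y) / (sigma * sigma))
```

Because each basis function projects analytically, only the linear expansion coefficients must be fit to the measured (forward-transformed) data.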
NASA Astrophysics Data System (ADS)
Nikolaev, A. V.; Lamoen, D.; Partoens, B.
2016-07-01
In order to increase the accuracy of the linearized augmented plane wave (LAPW) method, we present a new approach where the plane wave basis function is augmented by two different atomic radial components constructed at two different linearization energies corresponding to two different electron bands (or energy windows). We demonstrate that this case can be reduced to the standard treatment within the LAPW paradigm where the usual basis set is enriched by basis functions of the tight-binding type, which go to zero with zero derivative at the sphere boundary. We show that the task is closely related to the problem of extended core states, which is currently solved by applying the LAPW method with local orbitals (LAPW+LO). In comparison with LAPW+LO, the number of supplemented basis functions in our approach is doubled, which opens up a new channel for the extension of the LAPW and LAPW+LO basis sets. The appearance of new supplemented basis functions absent in the LAPW+LO treatment is closely related to the existence of the u̇_l component in the canonical LAPW method. We discuss properties of the additional tight-binding basis functions and apply the extended basis set to the computation of electron energy bands of lanthanum (face- and body-centered structures) and the hexagonal close-packed lattice of cadmium. We demonstrate that the new treatment gives lower total energies in comparison with both canonical LAPW and LAPW+LO, with the energy difference more pronounced for intermediate and poor LAPW basis sets.
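For context, the canonical LAPW basis function referred to above (including the u̇_l term) has the standard textbook form: a plane wave in the interstitial region, matched inside each muffin-tin sphere to atomic-like radial functions and their energy derivatives:

```latex
\chi_{\mathbf{k}+\mathbf{G}}(\mathbf{r}) =
\begin{cases}
e^{i(\mathbf{k}+\mathbf{G})\cdot\mathbf{r}}, & \mathbf{r} \in \text{interstitial}, \\[4pt]
\sum_{lm} \bigl[ A_{lm}\, u_l(r, E_l) + B_{lm}\, \dot{u}_l(r, E_l) \bigr] Y_{lm}(\hat{\mathbf{r}}), & \mathbf{r} \in \text{muffin-tin sphere},
\end{cases}
```

where the coefficients A_lm and B_lm are fixed by matching the value and radial derivative of the plane wave at the sphere boundary.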
Is there an optimal basis to maximise optical information transfer?
Chen, Mingzhou; Dholakia, Kishan; Mazilu, Michael
2016-01-01
We establish the concept of the density of the optical degrees of freedom that may be applied to any photonics-based system. As a key example of this versatile approach we explore information transfer using optical communication. We demonstrate experimentally, theoretically, and numerically that the use of a basis set with fields containing optical vortices does not increase the telecommunication capacity of an optical system. PMID:26976626
Determining an optimal set of research experiments
NASA Technical Reports Server (NTRS)
Adams, B. H.; Gearing, C. E.
1974-01-01
Description of a procedure for optimal selection of research experiments to be performed aboard the Space Shuttle. The procedure is designed to provide the study team with a credible approach to their task. The procedure is characterized as methodologically sound and based on assumptions which reasonably approximate the real conditions. The data-gathering techniques proposed are accepted by scientifically trained personnel.
Miliordos, Evangelos; Xantheas, Sotiris S.
2015-03-07
We report the variation of the binding energy of the Formic Acid Dimer with the size of the basis set at the Coupled Cluster with iterative Singles, Doubles and perturbatively connected Triple replacements [CCSD(T)] level of theory, estimate the Complete Basis Set (CBS) limit, and examine the validity of the Basis Set Superposition Error (BSSE)-correction for this quantity that was previously challenged by Kalescky, Kraka, and Cremer (KKC) [J. Chem. Phys. 140, 084315 (2014)]. Our results indicate that the BSSE correction, including terms that account for the substantial geometry change of the monomers due to the formation of two strong hydrogen bonds in the dimer, is indeed valid for obtaining accurate estimates for the binding energy of this system as it exhibits the expected decrease with increasing basis set size. We attribute the discrepancy between our current results and those of KKC to their use of a valence basis set in conjunction with the correlation of all electrons (i.e., including the 1s of C and O). We further show that the use of a core-valence set in conjunction with all electron correlation converges faster to the CBS limit as the BSSE correction is less than half that of the valence electron/valence basis set case. The uncorrected and BSSE-corrected binding energies were found to produce the same (within 0.1 kcal/mol) CBS limits. We obtain CCSD(T)/CBS best estimates for De = −16.1 ± 0.1 kcal/mol and for D0 = −14.3 ± 0.1 kcal/mol, the latter in excellent agreement with the experimental value of −14.22 ± 0.12 kcal/mol.
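The geometry-aware counterpoise scheme discussed above combines the standard Boys-Bernardi correction with monomer deformation terms. A minimal arithmetic sketch of that combination follows; the function and variable names are illustrative and the energies in the usage note are hypothetical, not values from the paper.

```python
def counterpoise_binding(e_dimer,
                         e_a_dimerbasis, e_b_dimerbasis,
                         e_a_relaxed, e_a_dimergeom,
                         e_b_relaxed, e_b_dimergeom):
    """BSSE-corrected binding energy including monomer deformation.

    e_dimer          : dimer energy at the dimer geometry
    e_x_dimerbasis   : monomer X at the dimer geometry, in the FULL dimer basis
    e_x_relaxed      : monomer X at its own relaxed geometry, monomer basis
    e_x_dimergeom    : monomer X frozen at the dimer geometry, monomer basis
    """
    # counterpoise-corrected interaction at fixed dimer geometry
    interaction = e_dimer - e_a_dimerbasis - e_b_dimerbasis
    # energetic cost of distorting each monomer into its dimer geometry
    deformation = (e_a_dimergeom - e_a_relaxed) + (e_b_dimergeom - e_b_relaxed)
    return interaction + deformation
```

With the hypothetical values e_dimer = −10.0, ghost-basis monomer energies −4.0 each, relaxed monomers −4.2 each, and distorted monomers −4.1 each (all kcal/mol), the corrected binding energy is −2.0 + 0.2 = −1.8 kcal/mol.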
The study of basis sets for the calculation of the structure and dynamics of the benzene-Kr complex
Shirkov, Leonid; Makarewicz, Jan
2015-05-28
An ab initio intermolecular potential energy surface (PES) has been constructed for the benzene-krypton (BKr) van der Waals (vdW) complex. The interaction energy has been calculated at the coupled cluster level of theory with single, double, and perturbatively included triple excitations using different basis sets. As a result, a few analytical PESs of the complex have been determined. They allowed a prediction of the complex structure and its vibrational vdW states. The vibrational energy level pattern exhibits a distinct polyad structure. Comparison of the equilibrium structure, the dipole moment, and vibrational levels of BKr with their experimental counterparts has allowed us to design an optimal basis set composed of a small Dunning’s basis set for the benzene monomer, a larger effective core potential adapted basis set for Kr and additional midbond functions. Such a basis set yields vibrational energy levels that agree very well with the experimental ones as well as with those calculated from the available empirical PES derived from the microwave spectra of the BKr complex. The basis proposed can be applied to larger complexes including Kr because of a reasonable computational cost and accurate results.
Generation of basis sets with high degree of fulfillment of the Hellmann-Feynman theorem.
Rico, J Fernández; López, R; Ema, I; Ramírez, G
2007-03-01
A direct relationship is established between the degree of fulfillment of the Hellmann-Feynman (electrostatic) theorem, measured as the difference between energy derivatives and electrostatic forces, and the stability of the basis set, measured from the indices that characterize the distance of the space generated by the basis functions to the space of their derivatives with respect to the nuclear coordinates. On the basis of this relationship, a criterion for obtaining basis sets of moderate size with a high degree of fulfillment of the theorem is proposed. As an illustrative application, previously reported Slater basis sets are extended by using this criterion. The resulting augmented basis sets are tested on several molecules, finding that the differences between energy gradient and electrostatic forces are reduced by at least one order of magnitude.
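For reference, the electrostatic (Hellmann-Feynman) theorem whose fulfillment is being measured states that, for an exact wavefunction, the force on nucleus A equals the classical electrostatic force exerted by the electron density and the other nuclei (standard form, atomic units):

```latex
\mathbf{F}_A = -\frac{\partial E}{\partial \mathbf{R}_A}
= Z_A \int \rho(\mathbf{r})\, \frac{\mathbf{r}-\mathbf{R}_A}{\lvert \mathbf{r}-\mathbf{R}_A \rvert^{3}}\, d\mathbf{r}
+ Z_A \sum_{B \neq A} Z_B\, \frac{\mathbf{R}_A-\mathbf{R}_B}{\lvert \mathbf{R}_A-\mathbf{R}_B \rvert^{3}}
```

In a finite basis the two sides differ, and it is exactly this gap that the stability indices in the abstract quantify.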
Derivation of a formula for the resonance integral for a nonorthogonal basis set
Yim, Yung-Chang; Eyring, Henry
1981-01-01
In a self-consistent field calculation, a formula for the off-diagonal matrix elements of the core Hamiltonian is derived for a nonorthogonal basis set by a polyatomic approach. A set of parameters is then introduced for the repulsion integral formula of Mataga-Nishimoto to fit the experimental data. The matrix elements computed for the nonorthogonal basis set in the π-electron approximation are transformed to those for an orthogonal basis set by the Löwdin symmetrical orthogonalization. PMID:16593009
Hill, J Grant
2013-09-30
Auxiliary basis sets (ABS) specifically matched to the cc-pwCVnZ-PP and aug-cc-pwCVnZ-PP orbital basis sets (OBS) have been developed and optimized for the 4d elements Y-Pd at the second-order Møller-Plesset perturbation theory level. Calculation of the core-valence electron correlation energies for small to medium sized transition metal complexes demonstrates that the error due to the use of these new sets in density fitting is three to four orders of magnitude smaller than that due to the OBS incompleteness, and hence is considered negligible. Utilizing the ABSs in the resolution-of-the-identity component of explicitly correlated calculations is also investigated, where it is shown that i-type functions are important to produce well-controlled errors in both integrals and correlation energy. Benchmarking at the explicitly correlated coupled cluster with single, double, and perturbative triple excitations level indicates impressive convergence with respect to basis set size for the spectroscopic constants of 4d monofluorides; explicitly correlated double-ζ calculations produce results close to conventional quadruple-ζ, and triple-ζ is within chemical accuracy of the complete basis set limit.
Dynamical basis sets for algebraic variational calculations in quantum-mechanical scattering theory
NASA Technical Reports Server (NTRS)
Sun, Yan; Kouri, Donald J.; Truhlar, Donald G.; Schwenke, David W.
1990-01-01
New basis sets are proposed for linear algebraic variational calculations of transition amplitudes in quantum-mechanical scattering problems. These basis sets are hybrids of those that yield the Kohn variational principle (KVP) and those that yield the generalized Newton variational principle (GNVP) when substituted in Schlessinger's stationary expression for the T operator. Trial calculations show that efficiencies almost as great as that of the GNVP and much greater than the KVP can be obtained, even for basis sets with the majority of the members independent of energy.
NASA Astrophysics Data System (ADS)
Richard, Ryan M.; Herbert, John M.
2013-06-01
Previous electronic structure studies that have relied on fragmentation have been primarily interested in those methods' abilities to replicate the supersystem energy (or a related energy difference) without recourse to the ability of those supersystem results to replicate experiment or high-accuracy benchmarks. Here we focus on replicating accurate ab initio benchmarks that are suitable for comparison to experimental data. In doing this, it becomes imperative that we correct our methods for basis set superposition errors (BSSE) in a computationally feasible way. This criterion leads us to develop a new method for BSSE correction, which we term the many-body counterpoise correction, or MBn for short. MBn is truncated at order n, in much the same manner as a normal many-body expansion, leading to a decrease in computational time. Furthermore, its formulation in terms of fragments makes it especially suitable for use with pre-existing fragment codes. A secondary focus of this study is directed at assessing fragment methods' abilities to extrapolate to the complete basis set (CBS) limit as well as compute approximate triples corrections. Ultimately, by analysis of (H2O)6 and (H2O)10F− systems, it is concluded that with large enough basis sets (triple- or quadruple-zeta) fragment-based methods can replicate high-level benchmarks in a fraction of the time.
NASA Astrophysics Data System (ADS)
Mizera, Mikołaj; Lewadowska, Kornelia; Talaczyńska, Alicja; Cielecka-Piontek, Judyta
2015-02-01
The work was aimed at investigating the influence of diffuse basis functions on the geometry optimization of the losartan molecule in its acid and salt forms. Spectroscopic properties of losartan potassium were also calculated and compared with experiment. The density functional theory method was used with several basis sets: 6-31G(d,p) and its diffuse variants 6-31G(d,p)+ and 6-31G(d,p)++. Applying diffuse basis functions in geometry optimization resulted in a significant change in total molecular energy. The total molecular energy of losartan potassium decreased by 112.91 kJ/mol and 114.32 kJ/mol for the 6-31G(d,p)+ and 6-31G(d,p)++ basis sets, respectively. Almost the same decrease was observed for losartan: 114.99 kJ/mol and 117.08 kJ/mol, respectively, for the 6-31G(d,p)+ and 6-31G(d,p)++ basis sets. Further investigation showed significant differences among the geometries of losartan potassium optimized with the investigated basis sets. Applying diffuse basis functions resulted in an average 1.29 Å difference in relative position between corresponding atoms of the three obtained geometries. A similar study of losartan resulted in an average displacement of only 0.22 Å. An extensive analysis of the geometry changes in molecules obtained with diffuse and non-diffuse basis functions was carried out in order to elucidate the observed changes. The analysis was supported by electrostatic potential maps and calculation of natural atomic charges. UV, FT-IR, and Raman spectra of losartan potassium were calculated and compared with experimental results. No crucial differences between Raman spectra obtained with different basis sets were observed. However, the FT-IR spectrum for the geometry of losartan potassium optimized with the 6-31G(d,p)++ basis set showed 40% better correlation with the experimental FT-IR spectrum than that calculated from the geometry optimized with the 6-31G(d,p) basis set. It is therefore highly advisable to optimize the geometry of molecules with ionic interactions using diffuse basis functions.
Pseudospectral sampling of Gaussian basis sets as a new avenue to high-dimensional quantum dynamics
NASA Astrophysics Data System (ADS)
Heaps, Charles
This thesis presents a novel approach to modeling quantum molecular dynamics (QMD). Theoretical approaches to QMD are essential to understanding and predicting chemical reactivity and spectroscopy. We implement a method based on a trajectory-guided basis set. In this case, the nuclei are propagated in time using classical mechanics. Each nuclear configuration corresponds to a basis function in the quantum mechanical expansion. Using the time-dependent configurations as a basis set, we are able to evolve in time using relatively little information at each time step. We use a basis set of moving frozen (time-independent width) Gaussian functions that are well known to provide a simple and efficient basis set for nuclear dynamics. We introduce a new perspective on trajectory-guided Gaussian basis sets based on existing numerical methods. The distinction is based on the Galerkin and collocation methods. In the former, the basis set is tested using the basis functions themselves, projecting the solution onto the functional space of the problem and requiring integration over all space. In the collocation method, the Dirac delta function tests the basis set, projecting the solution onto discrete points in space. This effectively reduces integral evaluation to function evaluation, a fundamental characteristic of pseudospectral methods. We adopt this idea for independent trajectory-guided Gaussian basis functions. We investigate a series of anharmonic vibrational models describing dynamics in up to six dimensions. The pseudospectral sampling is found to be as accurate as full integral evaluation; moreover, the pseudospectral method is fully general, whereas exact integration is possible only for very particular model potential energy surfaces. Nonadiabatic dynamics are also investigated in models of photodissociation and collinear triatomic vibronic coupling. Using Ehrenfest trajectories to guide the basis set on multiple surfaces, we observe convergence to exact results using hundreds of basis functions.
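The Galerkin/collocation distinction drawn in this abstract can be stated compactly. With a trial expansion ψ ≈ Σ_i c_i g_i, the two methods differ only in the choice of test functions (standard numerical-analysis formulation):

```latex
\text{Galerkin:}\quad \int g_j^{*}(x)\, \Bigl[ \psi(x) - \sum_i c_i\, g_i(x) \Bigr]\, dx = 0 \quad \forall j,
\qquad
\text{collocation:}\quad \psi(x_k) = \sum_i c_i\, g_i(x_k) \quad \forall k .
```

Collocation amounts to testing with δ(x − x_k), which is why it reduces integral evaluation to function evaluation at the chosen points.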
Haghdani, Shokouh; Åstrand, Per-Olof; Koch, Henrik
2016-02-01
We have calculated the electronic optical rotation of seven molecules using coupled cluster singles-doubles (CCSD) and the second-order approximation (CC2) employing the aug-cc-pVXZ (X = D, T, or Q) basis sets. We have also compared to time-dependent density functional theory (TDDFT) utilizing two functionals, B3LYP and CAM-B3LYP, and the same basis sets. Using relative and absolute error schemes, our calculations demonstrate that the CAM-B3LYP functional predicts optical rotation with the minimum deviations compared to CCSD at λ = 355 and 589.3 nm. Furthermore, our results illustrate that the aug-cc-pVDZ basis set provides optical rotation in good agreement with the larger basis sets for molecules not possessing small-angle optical rotation at λ = 589.3 nm. We have also performed several two-point inverse power extrapolations for the basis set convergence, i.e., OR(∞) + A·X^(-n), using the CC2 model at λ = 355 and 589.3 nm. Our results reveal that a two-point inverse power extrapolation with the aug-cc-pVTZ and aug-cc-pVQZ basis sets at n = 5 provides optical rotation deviations similar to those of aug-cc-pV5Z with respect to the basis set limit.
Adapting DFT+U for the Chemically Motivated Correction of Minimal Basis Set Incompleteness.
Kulik, Heather J; Seelam, Natasha; Mar, Brendan D; Martínez, Todd J
2016-07-28
Recent algorithmic and hardware advances have enabled the application of electronic structure methods to the study of large-scale systems such as proteins with O(10^3) atoms. Most such methods benefit greatly from the use of reduced basis sets to further enhance their speed, but truly minimal basis sets are well-known to suffer from incompleteness error that gives rise to incorrect descriptions of chemical bonding, preventing minimal basis set use in production calculations. We present a strategy for improving these well-known shortcomings in minimal basis sets by selectively tuning the energetics and bonding of nitrogen and oxygen atoms within proteins and small molecules to reproduce polarized double-ζ basis set geometries at minimal basis set cost. We borrow the well-known +U correction from the density functional theory community normally employed for self-interaction errors and demonstrate its power in the context of correcting basis set incompleteness within a formally self-interaction-free Hartree-Fock framework. We tune the Hubbard U parameters for nitrogen and oxygen atoms on small-molecule tautomers (e.g., cytosine), demonstrate the applicability of the approach on a number of amide-containing molecules (e.g., formamide, alanine tripeptide), and test our strategy on a 10 protein test set where anomalous proton transfer events are reduced by 90% from RHF/STO-3G to RHF/STO-3G+U, bringing the latter into quantitative agreement with RHF/6-31G* results. Although developed with the study of biological molecules in mind, this empirically tuned U approach shows promise as an alternative strategy for correction of basis set incompleteness errors.
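The +U correction borrowed here is, in its common rotationally invariant (Dudarev) form, an on-site energy penalty on fractional occupations of the targeted atomic orbitals (a standard DFT+U expression; this paper applies the idea within a Hartree-Fock framework for the N and O sites):

```latex
E_{+U} = \sum_{I,\sigma} \frac{U_I}{2}\, \mathrm{Tr}\!\left[ \mathbf{n}^{I\sigma} \left( \mathbf{1} - \mathbf{n}^{I\sigma} \right) \right]
```

where n^{Iσ} is the occupation matrix of site I for spin σ; the penalty vanishes for integer (0 or 1) occupations and so disfavors the spurious fractional bonding patterns of incomplete minimal bases.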
What is the most efficient way to reach the canonical MP2 basis set limit?
NASA Astrophysics Data System (ADS)
Liakos, Dimitrios G.; Izsák, Róbert; Valeev, Edward F.; Neese, Frank
2013-09-01
Various ways of reaching the complete basis set limit at the second-order Møller-Plesset perturbation theory (MP2) level are compared with respect to their cost-to-accuracy ratio. These include: (1) traditional MP2 calculations with correlation consistent basis sets of increasing size, with and without the resolution of identity for Coulomb and exchange (RIJK) or the combined RIJ and 'chain of spheres' (RIJCOSX) approximations; (2) basis set extrapolation obtained with the same MP2 variants; and (3) explicitly correlated F12-MP2 methods. The time required to solve the Hartree-Fock equations is part of the evaluation because the overall efficiency is of central interest in this work. Results were obtained for the ISO34, DC9 and S66 test sets and were analysed in terms of efficiency and accuracy for total energies, reaction energies and their effect on the basis set superposition error. Among the methods studied, the RIJK-MP2-F12 and RIJK-MP2-EP1 (where EP1 stands for 'Extrapolation Protocol 1' as explained in the text) methods perform outstandingly well. Although extrapolation is, in general, slightly faster than explicit correlation, it is found that for reaction energies, RIJK-MP2-F12 performs systematically better. This holds especially in combination with a triple zeta basis set, in which case it even outperforms the much more costly extrapolation involving quadruple- and quintuple-zeta correlation consistent basis sets.
On basis set superposition error corrected stabilization energies for large n-body clusters.
Walczak, Katarzyna; Friedrich, Joachim; Dolg, Michael
2011-10-01
In this contribution, we propose an approximate basis set superposition error (BSSE) correction scheme for the site-site function counterpoise and for the Valiron-Mayer function counterpoise correction of second order to account for the basis set superposition error in clusters with a large number of subunits. The accuracy of the proposed scheme has been investigated for a water cluster series at the CCSD(T), CCSD, MP2, and self-consistent field levels of theory using Dunning's correlation consistent basis sets. The BSSE corrected stabilization energies for a series of water clusters are presented. A study regarding the possible savings with respect to computational resources has been carried out as well as a monitoring of the basis set dependence of the approximate BSSE corrections.
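The function-counterpoise schemes discussed here generalize the basic Boys-Bernardi dimer correction, in which each monomer is re-evaluated in the full dimer basis (ghost functions on the partner site). As a minimal sketch of that underlying idea, with energies in hartree and names of our choosing:

```python
def counterpoise_interaction(e_dimer, e_a_in_dimer_basis, e_b_in_dimer_basis):
    """Boys-Bernardi counterpoise-corrected interaction energy:
    subtract monomer energies computed in the full dimer basis,
    so the basis-set superposition error largely cancels."""
    return e_dimer - e_a_in_dimer_basis - e_b_in_dimer_basis
```

The site-site and Valiron-Mayer schemes extend this bookkeeping to all subunits (or subunit pairs) of an n-body cluster, which is what makes approximations attractive for large n.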
A Basis Set for Peptides for the Variational Approach to Conformational Kinetics.
Vitalini, F; Noé, F; Keller, B G
2015-09-01
Although Markov state models have proven to be powerful tools in resolving the complex features of biomolecular kinetics, the discretization of the conformational space has been a bottleneck since the advent of the method. A recently introduced variational approach, which uses basis functions instead of crisp conformational states, opened up a route to construct kinetic models in which the discretization error can be controlled systematically. Here, we develop and test a basis set for peptides to be used in the variational approach. The basis set is constructed by combining local residue-centered kinetic modes that are obtained from kinetic models of terminally blocked amino acids. Using this basis set, we model the conformational kinetics of two hexapeptides with sequences VGLAPG and VGVAPG. Six basis functions are sufficient to represent the slow kinetic modes of these peptides. The basis set also allows for a direct interpretation of the slow kinetic modes without an additional clustering in the space of the dominant eigenvectors. Moreover, changes in the conformational kinetics due to the exchange of leucine in VGLAPG to valine in VGVAPG can be directly quantified by comparing histograms of the basis set expansion coefficients.
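The variational approach underlying this work estimates slow kinetic modes by solving a generalized eigenvalue problem built from time-lagged correlation matrices of the basis functions. A minimal sketch, assuming basis-function time series are available as arrays and using a symmetrized time-lagged correlation estimate (function name ours):

```python
import numpy as np
from scipy.linalg import eigh

def variational_modes(chi_t, chi_tau):
    """Slow kinetic modes from basis-function evaluations.
    chi_t, chi_tau: (n_frames, n_basis) arrays at times t and t + tau.
    Solves C(tau) a = lambda C(0) a; eigenvalues near 1 are slow modes."""
    n = len(chi_t)
    c0 = chi_t.T @ chi_t / n
    ct = 0.5 * (chi_t.T @ chi_tau + chi_tau.T @ chi_t) / n  # symmetrized
    lam, vec = eigh(ct, c0)
    order = np.argsort(lam)[::-1]      # slowest (largest lambda) first
    return lam[order], vec[:, order]
```

The expansion coefficients in `vec` are exactly the quantities whose histograms the abstract compares between VGLAPG and VGVAPG.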
NASA Astrophysics Data System (ADS)
Kupka, Teobald
2008-08-01
Based on B3LYP spin-spin coupling constants (SSCCs) of several molecules calculated with the cc-pVxZ, cc-pCVxZ, cc-pCVxZ-sd, and cc-pCVxZ-sd+t basis sets, a reasonably good fit, using a two-parameter formula, to the Kohn-Sham complete basis set (CBS) limit is shown. Improvement in the CBS values in going from cc-pVxZ to the most elaborate cc-pCVxZ-sd+t basis set family is observed: the standard deviation for all data drops from 33.7 to 23.1 Hz, and from 6.0 to 4.8 Hz after excluding the problematic 1J(F,H) and 1J(F,C). Calculating the 1J(O,H) of water using B3LYP/cc-pCVxZ and B3LYP/pcJ-n significantly improved the convergence of the FC term.
Peterson, Kirk A.; Figgen, Detlev; Goll, Erich; Stoll, Hermann; Dolg, Michael F.
2003-12-01
Series of correlation consistent basis sets have been developed for the post-d group 16-18 elements in conjunction with small-core relativistic pseudopotentials (PPs) of the energy-consistent variety. The latter were adjusted to multiconfiguration Dirac-Hartree-Fock data based on the Dirac-Coulomb-Breit Hamiltonian. The outer-core (n-1)spd shells are explicitly treated together with the nsp valence shell with these PPs. The accompanying cc-pVnZ-PP and aug-cc-pVnZ-PP basis sets range in size from DZ to 5Z quality and yield systematic convergence of both Hartree-Fock and correlated total energies. In addition to the calculation of atomic electron affinities and dipole polarizabilities of the rare gas atoms, numerous molecular benchmark calculations (HBr, HI, HAt, Br2, I2, At2, SiSe, SiTe, SiPo, KrH+, XeH+, and RnH+) are also reported at the coupled cluster level of theory. For the purposes of comparison, all-electron calculations using the Douglas-Kroll-Hess Hamiltonian have also been carried out for the halogen-containing molecules using basis sets of 5Z quality.
A novel Gaussian-Sinc mixed basis set for electronic structure calculations
NASA Astrophysics Data System (ADS)
Jerke, Jonathan L.; Lee, Young; Tymczak, C. J.
2015-08-01
A Gaussian-Sinc basis set methodology is presented for the calculation of the electronic structure of atoms and molecules at the Hartree-Fock level of theory. This methodology has several advantages over previous methods. The all-electron electronic structure in a Gaussian-Sinc mixed basis spans both the "localized" and "delocalized" regions. A basis set for each region is combined to make a new basis methodology—a lattice of orthonormal sinc functions is used to represent the "delocalized" regions and the atom-centered Gaussian functions are used to represent the "localized" regions to any desired accuracy. For this mixed basis, all the Coulomb integrals are definable and can be computed in a dimensional separated methodology. Additionally, the Sinc basis is translationally invariant, which allows for the Coulomb singularity to be placed anywhere including on lattice sites. Finally, boundary conditions are always satisfied with this basis. To demonstrate the utility of this method, we calculated the ground state Hartree-Fock energies for atoms up to neon, the diatomic systems H2, O2, and N2, and the multi-atom system benzene. Together, it is shown that the Gaussian-Sinc mixed basis set is a flexible and accurate method for solving the electronic structure of atomic and molecular species.
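The "delocalized" part of this mixed basis rests on the Whittaker cardinal (sinc) series: for a band-limited function, the expansion coefficients on a sinc lattice of spacing h are simply the function's samples at the lattice points. A small sketch of that property (function name ours; the paper's Coulomb-integral machinery is not shown):

```python
import numpy as np

def sinc_interpolate(samples, h, x):
    """Reconstruct an (effectively) band-limited function from its
    lattice samples f(k*h) using the orthonormal sinc basis;
    np.sinc is the normalized sinc, sin(pi t)/(pi t)."""
    k = np.arange(len(samples))
    return np.sum(samples * np.sinc((x - k * h) / h))
```

Because the lattice is translationally invariant, nothing in this expansion singles out any particular point, which is why Coulomb singularities can be placed anywhere, including on lattice sites.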
NASA Astrophysics Data System (ADS)
Haiduke, Roberto L. A.; Comar, Moacyr; da Silva, Albérico B. F.
2006-12-01
The prolapse-free relativistic adapted Gaussian basis sets (RAGBSs), developed by our research group on the basis of the four-component approach, are used for the first time in second-order Douglas-Kroll-Hess scalar relativistic calculations (DKH2) of simple diatomic molecules containing hydrogen and the halogens from fluorine up to iodine: HX and X2, where X = F, Cl, Br, and I. To this end, the RAGBSs were contracted with the general contraction scheme to triple-, quadruple-, and quintuple-zeta sets. Polarization functions were also added to the basis sets by optimization with the configuration interaction method including single and double excitations in the DKH2 framework, DKH2-CISD. The molecular properties were then calculated with the coupled cluster electronic correlation treatment and the DKH2 scalar relativistic method, DKH2-CCSD(T), and indicated that our RAGBSs should be contracted as quadruple-zeta basis sets. The results achieved with the DKH2-CCSD(T) calculations and the selected quadruple-zeta RAGBSs reproduce the experimental equilibrium distances, dissociation energies, and harmonic vibrational frequencies with root-mean-square (rms) errors of 0.015 Å, 3.6 kcal mol⁻¹, and 21.7 cm⁻¹, respectively.
Magnetic properties with multiwavelets and DFT: the complete basis set limit achieved.
Jensen, Stig Rune; Flå, Tor; Jonsson, Dan; Monstad, Rune Sørland; Ruud, Kenneth; Frediani, Luca
2016-08-01
Multiwavelets are emerging as an attractive alternative to traditional basis sets such as Gaussian-type orbitals and plane waves. One of their distinctive properties is the ability to reach the basis set limit (often a chimera for traditional approaches) reliably and consistently by fixing the desired precision ε. We present our multiwavelet implementation of the linear response formalism, applied to static magnetic properties, at the self-consistent field level of theory (both for Hartree-Fock and density functional theories). We demonstrate that the multiwavelets consistently improve the accuracy of the results when increasing the desired precision, yielding results that have four to five digits precision, thus providing a very useful benchmark which could otherwise only be estimated by extrapolation methods. Our results show that magnetizabilities obtained with the augmented quadruple-ζ basis (aug-cc-pCVQZ) are practically at the basis set limit, whereas absolute nuclear magnetic resonance shielding tensors are more challenging: even by making use of a standard extrapolation method, the accuracy is not substantially improved. In contrast, our results provide a benchmark that: (1) confirms the validity of the extrapolation ansatz; (2) can be used as a reference to achieve a property-specific extrapolation scheme, thus providing a means to obtain much better extrapolated results; (3) allows us to separate functional-specific errors from basis-set ones and thus to assess the level of cancellation between basis set and functional errors often exploited in density functional theory.
Multiple-Timestep ab Initio Molecular Dynamics Using an Atomic Basis Set Partitioning.
Steele, Ryan P
2015-12-17
This work describes an approach to accelerate ab initio Born-Oppenheimer molecular dynamics (MD) simulations by exploiting the inherent timescale separation between contributions from different atom-centered Gaussian basis sets. Several MD steps are propagated with a cost-efficient, low-level basis set, after which a dynamical correction accounts for large basis set relaxation effects in a time-reversible fashion. This multiple-timestep scheme is shown to generate valid MD trajectories, on the basis of rigorous testing for water clusters, the methanol dimer, an alanine polypeptide, protonated hydrazine, and the oxidized water dimer. This new approach generates observables that are consistent with those of target basis set trajectories, including MD-based vibrational spectra. This protocol is shown to be valid for Hartree-Fock, density functional theory, and second-order Møller-Plesset perturbation theory approaches. Recommended pairings include 6-31G as a low-level basis set for 6-31G** or 6-311G**, as well as cc-pVDZ as the subset for accurate dynamics with aug-cc-pVTZ. Demonstrated cost savings include factors of 2.6-7.3 on the systems tested and are expected to remain valid across system sizes.
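The time-reversible scheme described here has the same structure as the classic r-RESPA splitting: propagate many cheap inner steps with the low-level force, and apply the expensive correction (full-basis minus low-basis force) only at the outer timestep. A minimal one-particle sketch under that assumption (the paper's exact propagator may differ; names are ours):

```python
def respa_step(x, v, f_cheap, f_full, dt_outer, n_inner, mass=1.0):
    """One time-reversible multiple-timestep step: the correction force
    (f_full - f_cheap) is kicked at the outer timestep, while the cheap
    force is integrated with n_inner velocity-Verlet substeps."""
    dt = dt_outer / n_inner
    v = v + 0.5 * dt_outer * (f_full(x) - f_cheap(x)) / mass  # outer half-kick
    for _ in range(n_inner):                                  # cheap inner loop
        v = v + 0.5 * dt * f_cheap(x) / mass
        x = x + dt * v
        v = v + 0.5 * dt * f_cheap(x) / mass
    v = v + 0.5 * dt_outer * (f_full(x) - f_cheap(x)) / mass  # outer half-kick
    return x, v
```

The timestep separation pays off exactly when the low-level force (here, the small-basis gradient) captures the fast dynamics, leaving only a slowly varying basis-relaxation correction for the outer step.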
Atomization Energies of SO and SO2; Basis Set Extrapolation Revisited
NASA Technical Reports Server (NTRS)
Bauschlicher, Charles W., Jr.; Ricca, Alessandra; Arnold, James (Technical Monitor)
1998-01-01
The addition of tight functions to sulphur and extrapolation to the complete basis set limit are required to obtain accurate atomization energies. Six different extrapolation procedures are tried. The best atomization energies come from the series of basis sets that yield the most consistent results for all extrapolation techniques. In the variable alpha approach, alpha values larger than 4.5 or smaller than 3 appear to suggest that the extrapolation may not be reliable. It does not appear possible to determine a reliable basis set series using only the triple and quadruple zeta based sets. The scalar relativistic effects reduce the atomization energies of SO and SO2 by 0.34 and 0.81 kcal/mol, respectively, and clearly must be accounted for if a highly accurate atomization energy is to be computed. The magnitude of the core-valence (CV) contribution to the atomization energy is affected by missing diffuse valence functions. The CV contribution is much more stable if basis set superposition errors are accounted for. A similar study of SF, SF(+), and SF6 shows that the best family of basis sets varies with the nature of the S bonding.
Holden, Zachary C.; Richard, Ryan M.; Herbert, John M.
2013-12-28
An implementation of Ewald summation for use in mixed quantum mechanics/molecular mechanics (QM/MM) calculations is presented, which builds upon previous work by others that was limited to semi-empirical electronic structure for the QM region. Unlike previous work, our implementation describes the wave function's periodic images using “ChElPG” atomic charges, which are determined by fitting to the QM electrostatic potential evaluated on a real-space grid. This implementation is stable even for large Gaussian basis sets with diffuse exponents, and is thus appropriate when the QM region is described by a correlated wave function. Derivatives of the ChElPG charges with respect to the QM density matrix are a potentially serious bottleneck in this approach, so we introduce a ChElPG algorithm based on atom-centered Lebedev grids. The ChElPG charges thus obtained exhibit good rotational invariance even for sparse grids, enabling significant cost savings. Detailed analysis of the optimal choice of user-selected Ewald parameters, as well as timing breakdowns, is presented.
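ChElPG-style charges are obtained by a constrained least-squares fit: choose atomic point charges that best reproduce the QM electrostatic potential on a grid, subject to reproducing the total molecular charge. A self-contained sketch of that fit (function name and Lagrange-multiplier bookkeeping ours; atomic units assumed, and no Lebedev grid machinery shown):

```python
import numpy as np

def fit_esp_charges(grid_pts, atom_pts, v_esp, total_charge=0.0):
    """Least-squares point charges reproducing the electrostatic
    potential v_esp at grid_pts, constrained to the given total charge
    via a bordered (Lagrange-multiplier) linear system."""
    # d[i, j]: potential at grid point i from a unit charge on atom j
    d = 1.0 / np.linalg.norm(grid_pts[:, None, :] - atom_pts[None, :, :], axis=2)
    n = len(atom_pts)
    a = np.zeros((n + 1, n + 1))
    a[:n, :n] = d.T @ d
    a[:n, n] = 1.0   # constraint column: sum of charges
    a[n, :n] = 1.0
    b = np.zeros(n + 1)
    b[:n] = d.T @ v_esp
    b[n] = total_charge
    return np.linalg.solve(a, b)[:n]
```

The derivative bottleneck the abstract mentions arises because `v_esp` depends on the QM density matrix, so these fitted charges must be differentiated through this linear solve at every SCF step.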
NASA Astrophysics Data System (ADS)
Park, R.; Jo, D.; Kim, M.; Spracklen, D. V.; Hodzic, A.
2014-12-01
Organic aerosol (OA) constitutes a significant mass fraction (20-90%) of total dry fine aerosol in the atmosphere. However, global models of OA have shown large discrepancies when compared to observations because of their limited capability to simulate secondary OA (SOA). To reduce the discrepancies between observations and models, recent studies have shown that chemical aging reactions in the atmosphere are important because they can lead to decreases in organic volatility, resulting in increased SOA mass yields. To efficiently simulate chemical aging of SOA in the atmosphere, we implemented the volatility basis set approach in a global 3-D chemical transport model (GEOS-Chem). We present full-year simulations and their comparisons with multiple observations: the global aerosol mass spectrometer dataset, the Interagency Monitoring of Protected Visual Environments dataset from the United States, the European Monitoring and Evaluation Programme dataset, and water-soluble organic carbon observation data collected over East Asia. Using different input parameters in the model, we also explore the uncertainty of the SOA simulation, for which we use an observational constraint to find the optimized values with which the model reduces the discrepancy from the observations. Finally, we estimate the effect of OA on climate using our best simulation results.
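In the volatility basis set, organic mass is lumped into bins of saturation concentration C* (µg m⁻³), and the condensed fraction of each bin follows equilibrium absorptive partitioning: f_i = (1 + C*_i / C_OA)⁻¹. A minimal sketch of that partitioning step, taking the total OA concentration C_OA as given (in practice it is solved self-consistently; function name ours):

```python
def vbs_partition(c_star, mass_bins, c_oa):
    """Condensed (particle-phase) mass per volatility bin under
    equilibrium partitioning: fraction_i = 1 / (1 + C*_i / C_OA).
    c_star and mass_bins are per-bin lists; units must match c_oa."""
    return [m / (1.0 + ci / c_oa) for ci, m in zip(c_star, mass_bins)]
```

Chemical aging then shifts mass toward lower-C* bins, which by this formula raises the condensed fraction and hence the SOA yield.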
Gusso, Michele
2008-01-28
A detailed study on the accuracy attainable with numerical atomic orbitals in the context of pseudopotential first-principles density functional theory is presented. Dimers of first- and second-row elements are analyzed: bond lengths, atomization energies, and Kohn-Sham eigenvalue spectra obtained with localized orbitals and with plane-wave basis sets are compared. For each dimer, the cutoff radius, the shape, and the number of the atomic basis orbitals are varied in order to maximize the accuracy of the calculations. Optimized atomic orbitals are obtained following two routes: (i) maximization of the projection of plane wave results into atomic orbital basis sets and (ii) minimization of the total energy with respect to a set of primitive atomic orbitals as implemented in the OPENMX software package. It is found that by optimizing the numerical basis, chemical accuracy can be obtained even with a small set of orbitals.
Balanced Basis Sets in the Calculation of Potential Energy Curves for Diatomic Molecules.
NASA Astrophysics Data System (ADS)
Barclay, V. J.
"Balanced" basis sets, which describe the internuclear region as well as the nuclear region, are examined in the context of an ab initio selection-extrapolation configuration -interaction method (MRD-CI). The sets are balanced by adding bond functions (BF's), which are s, p and d-type orbitals at the bond mid-point, to atomic-centred molecular basis sets, which have double and triple sets of valence -shell orbitals (DZ and TZ) and one or two sets of polarization functions (PF's). Potential energy curves and spectroscopic constants were calculated for the ground states of the hydrides H _2, OH, NaH, MgH, MH, SiH, PH, SH, HCl, and for the ionized species OH^+ and OH^{++}, and for the A^3Sigma_{u}, w^3Delta_{u} and B^3Pi_{g} excited states of N_2. The basis sets containing bond functions gave curves and constants superior to the DZP and (where calculated) TZPP results, and of quality similar to large basis set calculations in the literature. The single and double ionization potentials of OH, and the term energies of the N_2 excited states had error at the atomic asymptotes for all basis sets. The dissociation energies of the ground states of ten first-row diatomics (C_2, N_2, O_2, F_2, CN, CO, CF, NO, NF, and FO) were studied using balanced basis sets. A correlation was found to exist between the actual bond order of a species, and the number and kinds of orbitals which comprise the optimum BF. For MRD-CI diatomic calculations, the following BF's should be added to a DZP basis set (sp) (for a bond order of 1); 2(sp) (B. O. 1.5); (spd) (B. O. 2); 3(sp) (B. O. 2.5); 2(spd) (B. O. 3). The prescribed BF basis method was tested on the 26 second-row congeners Si _2, P_2, S _2, Cl_2, SiP, SiS, SiCl, PS, PCl, and ClS, and mixed-row congeners SiN, SiO, SiF, PO, PF, SF, SiC, PN, SO, ClF, CP, CS, CCl, NS, NCl, and ClO. An average error of 6% and a maximum error of 10% relative to known experimental D_{e }'s was found: compared to an average error of 18% for TZPP calculations
Representability of Bloch states on Projector-augmented-wave (PAW) basis sets
NASA Astrophysics Data System (ADS)
Agapito, Luis; Ferretti, Andrea; Curtarolo, Stefano; Buongiorno Nardelli, Marco
2015-03-01
Design of small, yet `complete', localized basis sets is necessary for an efficient dual representation of Bloch states on both plane-wave and localized bases. Such simultaneous dual representation permits the development of faster, more accurate (beyond-DFT) electronic-structure methods for atomistic materials (e.g., the ACBN0 method) by benefiting from algorithms (real and reciprocal space) and hardware acceleration (e.g., GPUs) used in the quantum-chemistry and solid-state communities. Finding a `complete' atomic-orbital basis (partial waves) is also a requirement in the generation of robust and transferable PAW pseudopotentials. We have employed the atomic-orbital basis from available PAW data sets, which extends through most of the periodic table, and tested the representability of Bloch states on such a basis. Our results show that PAW data sets allow systematic and accurate representability of the PAW Bloch states, better than with traditional quantum-chemistry double-zeta- and double-zeta-polarized-quality basis sets.
Efficient calculation of integrals in mixed ramp-Gaussian basis sets
McKemmish, Laura K.
2015-04-07
Algorithms for the efficient calculation of two-electron integrals in the newly developed mixed ramp-Gaussian basis sets are presented, alongside a Fortran90 implementation of these algorithms, RAMPITUP. These new basis sets have significant potential to (1) give some speed-up (estimated at up to 20% for large molecules in fully optimised code) to general-purpose Hartree-Fock (HF) and density functional theory quantum chemistry calculations, replacing all-Gaussian basis sets, and (2) give very large speed-ups for calculations of core-dependent properties, such as electron density at the nucleus, NMR parameters, relativistic corrections, and total energies, replacing the current use of Slater basis functions or very large specialised all-Gaussian basis sets for these purposes. This initial implementation already demonstrates roughly 10% speed-ups in HF/R-31G calculations compared to HF/6-31G calculations for large linear molecules, demonstrating the promise of this methodology, particularly for the second application. As well as the reduction in the total primitive number in R-31G compared to 6-31G, this timing advantage can be attributed to the significant reduction in the number of mathematically complex intermediate integrals after modelling each ramp-Gaussian basis-function-pair as a sum of ramps on a single atomic centre.
Macedo, Luiz Guilherme M. de; Borin, Antonio Carlos; Silva, Alberico B.F. da
2007-11-15
Prolapse-free basis sets suitable for four-component relativistic quantum chemical calculations are presented for the superheavy elements up to Uuo (Z = 118), namely Rf, Db, Sg, Bh, Hs, Mt, Ds, Rg, Uub, Uut, Uuq, Uup, Uuh, Uus, and Uuo (Z = 104-118), as well as Lr (Z = 103). These basis sets were optimized by minimizing the absolute value of the difference between the Dirac-Fock-Roothaan total energy and the corresponding numerical value to within milli-Hartree accuracy, resulting in a good balance between cost and accuracy. Parameters for generating exponents and new numerical data for some superheavy elements are also presented.
Non-Euclidean basis function based level set segmentation with statistical shape prior.
Ruiz, Esmeralda; Reisert, Marco; Bai, Li
2013-01-01
We present a new framework for image segmentation with statistical-shape-model-enhanced level sets represented as a linear combination of non-Euclidean radial basis functions (RBFs). The shape prior for the level set is represented as a probabilistic map created from the training data and registered with the target image. The new framework has the following advantages: (1) the explicit RBF representation of the level set allows the level set evolution to be expressed as ordinary differential equations, and reinitialization is no longer required; (2) the non-Euclidean distance RBFs make it possible to incorporate image information into the basis functions, which results in more accurate and topologically more flexible solutions. Experimental results are presented to demonstrate the advantages of the method, as well as a critical analysis of level sets versus the combination of both methods.
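The core representation here is an implicit surface written as a weighted sum of RBFs, with the segmented contour being the zero level set. As an illustrative sketch only, using Euclidean Gaussian RBFs in place of the paper's non-Euclidean (image-informed) distance, with names of our choosing:

```python
import numpy as np

def rbf_level_set(centers, weights, pts, eps=1.0):
    """Level-set function phi(x) = sum_i w_i * exp(-(eps * ||x - c_i||)^2),
    evaluated at each row of pts; the contour is the set phi(x) = 0."""
    d = np.linalg.norm(pts[:, None, :] - centers[None, :, :], axis=2)
    return np.exp(-(eps * d) ** 2) @ weights
```

Because phi is smooth in the weights, evolving the contour reduces to integrating ordinary differential equations for `weights`, which is why no reinitialization of a distance function is needed.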
Energy-optimal path planning by stochastic dynamically orthogonal level-set optimization
NASA Astrophysics Data System (ADS)
Subramani, Deepak N.; Lermusiaux, Pierre F. J.
2016-04-01
A stochastic optimization methodology is formulated for computing energy-optimal paths from among time-optimal paths of autonomous vehicles navigating in a dynamic flow field. Based on partial differential equations, the methodology rigorously leverages the level-set equation that governs time-optimal reachability fronts for a given relative vehicle-speed function. To set up the energy optimization, the relative vehicle-speed and headings are considered to be stochastic and new stochastic Dynamically Orthogonal (DO) level-set equations are derived. Their solution provides the distribution of time-optimal reachability fronts and the corresponding distribution of time-optimal paths. An optimization is then performed on the vehicle's energy-time joint distribution to select the energy-optimal paths for each arrival time, among all stochastic time-optimal paths for that arrival time. Numerical schemes to solve the reduced stochastic DO level-set equations are obtained, and accuracy and efficiency considerations are discussed. These reduced equations are first shown to be efficient at solving the governing stochastic level-sets, in part by comparisons with direct Monte Carlo simulations. To validate the methodology and illustrate its accuracy, comparisons with semi-analytical energy-optimal path solutions are then completed. In particular, we consider the energy-optimal crossing of a canonical steady front and set up its semi-analytical solution using an energy-time nested nonlinear double-optimization scheme. We then showcase the inner workings and nuances of the energy-optimal path planning, considering different mission scenarios. Finally, we study and discuss results of energy-optimal missions in a wind-driven barotropic quasi-geostrophic double-gyre ocean circulation.
On the basis set convergence of electron-electron entanglement measures: helium-like systems.
Hofer, Thomas S
2013-01-01
A systematic investigation of three different electron-electron entanglement measures, namely the von Neumann, the linear, and the occupation number entropy at the full configuration interaction level has been performed for the four helium-like systems hydride, helium, Li(+) and Be(2+) using a large number of different basis sets. The convergence behavior of the resulting energies and entropies revealed that the latter do not, in general, show the expected strictly monotonic increase upon enlargement of the one-electron basis. Overall, the three different entanglement measures show good agreement among each other, the largest deviations being observed for small basis sets. The data clearly demonstrates that it is important to consider the nature of the chemical system when investigating entanglement phenomena in the framework of Gaussian-type basis sets: while in the case of hydride the use of augmentation functions is crucial, the application of core functions greatly improves the accuracy in the case of cationic systems such as Li(+) and Be(2+). In addition, numerical derivatives of the entanglement measures with respect to the nuclear charge have been determined, which proved to be a very sensitive probe of the convergence, leading to qualitatively wrong results (i.e., the wrong sign) if too small basis sets are used.
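Two of the measures compared in this study have simple closed forms in terms of the eigenvalues (natural occupation numbers) of the one-particle reduced density matrix: the von Neumann entropy S = −Σ p ln p and the linear entropy S_L = 1 − Σ p². A sketch under the assumption that the spectrum is normalized to one (conventions and prefactors vary between papers; function name ours):

```python
import numpy as np

def entanglement_entropies(occ):
    """Von Neumann and linear entropy from the (normalized) eigenvalue
    spectrum of a reduced density matrix."""
    p = np.asarray(occ, dtype=float)
    p = p / p.sum()
    nz = p[p > 0]                        # 0 * log(0) is taken as 0
    s_vn = -np.sum(nz * np.log(nz))
    s_lin = 1.0 - np.sum(p ** 2)
    return s_vn, s_lin
```

A pure (single-determinant-like) spectrum gives zero for both measures, while increasingly mixed spectra drive both entropies up, which is the behavior tracked against basis set size in the abstract.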
Predicting Pt-195 NMR chemical shift using new relativistic all-electron basis set.
Paschoal, D; Guerra, C Fonseca; de Oliveira, M A L; Ramalho, T C; Dos Santos, H F
2016-10-01
Predicting NMR properties is a valuable tool to assist the experimentalists in the characterization of molecular structure. For heavy metals, such as Pt-195, only a few computational protocols are available. In the present contribution, all-electron Gaussian basis sets, suitable to calculate the Pt-195 NMR chemical shift, are presented for Pt and all elements commonly found as Pt-ligands. The new basis sets, identified as NMR-DKH, were partially contracted as a triple-zeta doubly polarized scheme with all coefficients obtained from a Douglas-Kroll-Hess (DKH) second-order scalar relativistic calculation. The Pt-195 chemical shift was predicted through empirical models fitted to reproduce experimental data for a set of 183 Pt(II) complexes whose chemical shifts range from -1000 to -6000 ppm. Furthermore, the models were validated using a new set of 75 Pt(II) complexes not included in the descriptive set. The models were constructed using a non-relativistic Hamiltonian at the density functional theory (DFT-PBEPBE) level with the NMR-DKH basis set for all atoms. For the best model, the mean absolute deviation (MAD) and the mean relative deviation (MRD) were 150 ppm and 6%, respectively, for the validation set (75 Pt-complexes), and 168 ppm (MAD) and 5% (MRD) for all 258 Pt(II) complexes. These results were comparable with relativistic DFT calculation, 200 ppm (MAD) and 6% (MRD). © 2016 Wiley Periodicals, Inc.
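The MAD and MRD figures quoted here are the standard error statistics for calculated-versus-experimental values. For concreteness, a small sketch of how they are computed (function name ours; MRD reported as a percentage of the experimental value, consistent with the abstract):

```python
import numpy as np

def mad_mrd(calc, expt):
    """Mean absolute deviation (same units as the data) and mean
    relative deviation (percent) between calculated and experimental
    values."""
    calc = np.asarray(calc, dtype=float)
    expt = np.asarray(expt, dtype=float)
    mad = np.mean(np.abs(calc - expt))
    mrd = 100.0 * np.mean(np.abs((calc - expt) / expt))
    return mad, mrd
```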
Miliordos, Evangelos; Xantheas, Sotiris S.
2015-03-07
We report the variation of the binding energy of the formic acid dimer at the CCSD(T)/complete basis set (CBS) limit and examine the validity of the BSSE correction, previously challenged by Kalescky, Kraka, and Cremer [J. Chem. Phys. 140 (2014) 084315]. Our best estimate of D0 = 14.3±0.1 kcal/mol is in excellent agreement with the experimental value of 14.22±0.12 kcal/mol. The BSSE correction is indeed valid for this system, since it exhibits the expected behavior of decreasing with increasing basis set size, and its inclusion produces the same limit (within 0.1 kcal/mol) as the one obtained from extrapolation of the uncorrected binding energy. This work was supported by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences, Division of Chemical Sciences, Geosciences and Biosciences. Pacific Northwest National Laboratory (PNNL) is a multiprogram national laboratory operated for DOE by Battelle. A portion of this research was performed using the Molecular Science Computing Facility (MSCF) in EMSL, a national scientific user facility sponsored by the Department of Energy's Office of Biological and Environmental Research and located at PNNL.
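The CBS limit referred to here is commonly estimated with a two-point inverse-cube extrapolation of correlation energies, and the BSSE correction with the Boys-Bernardi counterpoise recipe. A sketch under those standard assumptions (the abstract does not specify the exact extrapolation formula used, and the energies below are invented):

```python
def cbs_two_point(e_small, e_large, n_small, n_large):
    """Two-point 1/n^3 extrapolation of correlation energies:
    E_CBS = (n^3 E_n - m^3 E_m) / (n^3 - m^3) for cardinal numbers m < n."""
    m3, n3 = n_small ** 3, n_large ** 3
    return (n3 * e_large - m3 * e_small) / (n3 - m3)

def counterpoise_interaction(e_dimer, e_a_in_dimer_basis, e_b_in_dimer_basis):
    """BSSE-corrected interaction energy: both monomers are evaluated in the
    full dimer basis (ghost functions on the partner's atomic sites)."""
    return e_dimer - e_a_in_dimer_basis - e_b_in_dimer_basis

# Hypothetical correlation energies (hartree) at triple- and quadruple-zeta:
print(round(cbs_two_point(-0.420, -0.440, 3, 4), 6))  # -0.454595
```

The consistency check described in the abstract amounts to verifying that the counterpoise-corrected and uncorrected binding energies extrapolate to the same CBS value.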
Molecular Dipole Moments within the Incremental Scheme Using the Domain-Specific Basis-Set Approach.
Fiedler, Benjamin; Coriani, Sonia; Friedrich, Joachim
2016-07-12
We present the first implementation of the fully automated incremental scheme for CCSD unrelaxed dipole moments using the domain-specific basis-set approach. Truncation parameters are varied, and the accuracy of the method is statistically analyzed for a test set of 20 molecules. The local approximations introduce small errors at second order and negligible ones at third order. For a third-order incremental CCSD expansion with a CC2 error correction, a cc-pVDZ/SV domain-specific basis set (tmain = 3.5 Bohr), and the truncation parameter f = 30 Bohr, we obtain a mean error of 0.00 mau (-0.20 mau) and a standard deviation of 1.95 mau (2.17 mau) for the total dipole moments (Cartesian components of the dipole vectors). By analyzing incremental CCSD energies, we demonstrate that the MP2 and CC2 error correction schemes are an exclusive correction for the domain-specific basis-set error. Our implementation of the incremental scheme provides fully automated computations of highly accurate dipole moments at reduced computational cost and is fully parallelized in terms of the calculation of the increments. Therefore, one can utilize the incremental scheme, on the same hardware, to extend the basis set in comparison to standard CCSD and thus obtain a better total accuracy. PMID:27300371
How to spoil a good basis set for Rayleigh-Ritz calculations
Pupyshev, Vladimir I.; Montgomery, H. E. Jr.
2013-08-15
For model quantum mechanical systems such as the harmonic oscillator and a particle in an impenetrable box, we consider the set of exact discrete spectrum functions and define the modified basis set by subtracting the ground state wavefunction, with some real weights, from all the other wavefunctions. It is demonstrated that the modified set of functions is complete in the space of square integrable functions if and only if the series of the squared weights diverges. A similar, but nonequivalent, criterion is derived for the convergence of Rayleigh-Ritz ground state energy calculations to the exact ground state energy as the basis set is extended. Some numerical illustrations are provided which demonstrate a wide variety of possible situations for model systems.
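The completeness criterion summarized above admits a compact statement. A sketch in the abstract's terms, with the notation (psi_n for the exact eigenfunctions, c_n for the real weights) assumed rather than taken from the paper:

```latex
% Modified basis obtained by subtracting the ground state with real weights:
\varphi_n \;=\; \psi_n - c_n\,\psi_0 , \qquad n = 1, 2, \dots
% Criterion stated in the abstract: the set \{\varphi_n\} is complete in
% L^2 if and only if the series of squared weights diverges,
\sum_{n=1}^{\infty} c_n^{2} \;=\; \infty .
```

Intuitively, if the squared weights are summable, the component of the ground state removed from the span can never be recovered by any finite or infinite combination of the modified functions.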
Matus, Myrna H; Garza, Jorge; Galván, Marcelo
2004-06-01
In order to study the Kohn-Sham frontier molecular orbital energies in the complete basis set limit, a comparative study between localized functions and plane waves, obtained with the local density approximation exchange-correlation functional, is made. The analyzed systems are ethylene and butadiene, since they are theoretically and experimentally well characterized. The localized basis sets used are those developed by Dunning. For the plane-wave method, the pseudopotential approximation is employed. The results obtained with the localized basis sets suggest that it is possible to estimate the orbital energies in the complete basis set limit when the basis set size is large. It is shown that the frontier molecular orbital energies and the energy gaps obtained with plane waves are similar to those obtained with a large localized basis set, once the size of the supercell and the plane-wave expansion have been appropriately calibrated.
A Novel Gaussian-Sinc mixed Basis Set for Electronic Structure calculations
NASA Astrophysics Data System (ADS)
Jerke, Jonathan; Lee, Young; Tymczak, C. J.
2015-03-01
A Gaussian-Sinc mixed basis set for the computation of the electronic structure of atoms and molecules is presented. Excellent basis functions are known for the ``core'' and ``valence'' regions separately, such as Gaussians for the ``core'' wave functions and plane waves for the ``valence'' wave functions, but as yet no method is known that can accurately deal with both regimes in a single basis. A Gaussian-Sinc mixed basis can do both. This method resolves several issues: i) the Sinc basis spans the same space as the plane-wave basis, yet is semi-local enough to define all interaction elements, including exchange; ii) the Gaussians span the spherically symmetric core states and can be mixed with the Sinc functions in a computationally efficient methodology; iii) together, the mixed basis set is a flexible, computationally efficient, and highly accurate method for solving atomic and molecular problems. This methodology has been implemented at the Hartree-Fock level of theory in ultra-strong magnetic fields. To demonstrate the utility of this new method, we calculated the ground state Hartree-Fock energies to five-digit accuracy in ultra-strong magnetic fields for helium to neon, molecular hydrogen, water, carbon dioxide, and benzene. Welch Foundation (Grant J-1675), the ARO (Grant W911Nf-13-1-0162), the Texas Southern University High Performance Computing Center (http:/hpcc.tsu.edu/; Grant PHY-1126251) and NSF-CREST CRCN project (Grant HRD-1137732).
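The ``semi-local'' character of Sinc functions mentioned in point i) comes from their cardinal property on a uniform grid: each basis function equals 1 at its own grid point and 0 at every other. A minimal illustration (generic cardinal sinc on a uniform grid, not the authors' implementation):

```python
import math

def sinc_basis(x, k, h):
    """Cardinal sinc function centered on the grid point x_k = k*h:
    phi_k(x) = sin(pi(x - kh)/h) / (pi(x - kh)/h), with phi_k(kh) = 1."""
    t = math.pi * (x - k * h) / h
    return 1.0 if abs(t) < 1e-12 else math.sin(t) / t

h = 0.5  # uniform grid spacing
# Cardinal property: phi_k(x_j) = delta_jk, i.e. 1 on its own grid point
# and 0 (to machine precision) on every other grid point.
on_site = sinc_basis(1 * h, 1, h)
off_site = sinc_basis(2 * h, 1, h)
print(on_site, abs(off_site) < 1e-12)  # 1.0 True
```

This grid-diagonal behavior is what lets Sinc bases evaluate local interaction elements cheaply while still spanning the same space as plane waves.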
Correlation consistent basis sets for actinides. I. The Th and U atoms
Peterson, Kirk A.
2015-02-21
New correlation consistent basis sets based on both pseudopotential (PP) and all-electron Douglas-Kroll-Hess (DKH) Hamiltonians have been developed from double- to quadruple-zeta quality for the actinide atoms thorium and uranium. Sets for valence electron correlation (5f6s6p6d), cc-pVnZ-PP and cc-pVnZ-DK3, as well as outer-core correlation (valence + 5s5p5d), cc-pwCVnZ-PP and cc-pwCVnZ-DK3, are reported (n = D, T, Q). The -PP sets are constructed in conjunction with small-core, 60-electron PPs, while the -DK3 sets utilized the 3rd-order Douglas-Kroll-Hess scalar relativistic Hamiltonian. Both series of basis sets show systematic convergence towards the complete basis set limit, both at the Hartree-Fock and correlated levels of theory, making them amenable to standard basis set extrapolation techniques. To assess the utility of the new basis sets, extensive coupled cluster composite thermochemistry calculations of ThFn (n = 2-4), ThO2, and UFn (n = 4-6) have been carried out. After accurately accounting for valence and outer-core correlation, spin-orbit coupling, and even Lamb shift effects, the final 298 K atomization enthalpies of ThF4, ThF3, ThF2, and ThO2 are all within their experimental uncertainties. Bond dissociation energies of ThF4 and ThF3, as well as UF6 and UF5, were similarly accurate. The derived enthalpies of formation for these species also showed a very satisfactory agreement with experiment, demonstrating that the new basis sets allow for the use of accurate composite schemes just as in molecular systems composed only of lighter atoms. The differences between the PP and DK3 approaches were found to increase with the change in formal oxidation state on the actinide atom, approaching 5-6 kcal/mol for the atomization enthalpies of ThF4 and ThO2. The DKH3 atomization energy of ThO2 was calculated to be smaller than the DKH2
Training set optimization under population structure in genomic selection
Technology Transfer Automated Retrieval System (TEKTRAN)
The optimization of the training set (TRS) in genomic selection (GS) has received much interest in both animal and plant breeding, because it is critical to the accuracy of the prediction models. In this study, five different TRS sampling algorithms, stratified sampling, mean of the Coefficient of D...
Level set based structural topology optimization for minimizing frequency response
NASA Astrophysics Data System (ADS)
Shu, Lei; Wang, Michael Yu; Fang, Zongde; Ma, Zhengdong; Wei, Peng
2011-11-01
For the purpose of structural vibration reduction, a structural topology optimization for minimizing frequency response is proposed based on the level set method. The objective of the present study is to minimize the frequency response at specified points or surfaces on the structure, for a given excitation frequency or frequency range, subject to a given amount of material over the admissible design domain. The sensitivity analysis with respect to the structural boundaries is carried out, while the extended finite element method (X-FEM) is employed for solving the state equation and the adjoint equation. The optimal structure, with smooth boundaries, is obtained by level set evolution with an advection velocity derived from the sensitivity analysis and the optimization algorithm. A number of numerical examples, in both two dimensions (2D) and three dimensions (3D), are presented to demonstrate the feasibility and effectiveness of the proposed approach.
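In the level set method the structural boundary is carried implicitly as the zero contour of a scalar function, so geometric quantities such as the amount of material follow directly from the sign of that function. A minimal illustration of the representation only (a circle on a uniform grid, not the frequency-response problem above):

```python
import math

def phi_circle(x, y, r=1.0):
    """Level set function for a circle of radius r: negative inside
    (material), positive outside (void); phi = 0 is the boundary."""
    return math.hypot(x, y) - r

# Estimate the material area by counting grid cells whose center lies
# inside the zero level set (phi <= 0).
n, half = 200, 1.5
dx = 2.0 * half / n
area = sum(dx * dx
           for i in range(n)
           for j in range(n)
           if phi_circle(-half + (i + 0.5) * dx, -half + (j + 0.5) * dx) <= 0.0)
print(round(area, 2))  # close to pi for r = 1
```

In an actual optimization the level set function is advected with a velocity field from the sensitivity analysis, moving the zero contour toward a better design while the material constraint is monitored exactly this way.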
Level-Set Topology Optimization with Aeroelastic Constraints
NASA Technical Reports Server (NTRS)
Dunning, Peter D.; Stanford, Bret K.; Kim, H. Alicia
2015-01-01
Level-set topology optimization is used to design a wing considering skin buckling under static aeroelastic trim loading, as well as dynamic aeroelastic stability (flutter). The level-set function is defined over the entire 3D volume of a transport aircraft wing box. Therefore, the approach is not limited by any predefined structure and can explore novel configurations. The Sequential Linear Programming (SLP) level-set method is used to solve the constrained optimization problems. The proposed method is demonstrated using three problems with mass, linear buckling and flutter objective and/or constraints. A constraint aggregation method is used to handle multiple buckling constraints in the wing skins. A continuous flutter constraint formulation is used to handle difficulties arising from discontinuities in the design space caused by a switching of the critical flutter mode.
Teodoro, Tiago Quevedo; da Silva, Albérico Borges Ferreira; Haiduke, Roberto Luiz Andrade
2014-09-01
This study reports a new relativistic prolapse-free Gaussian basis set series of quadruple-ζ quality, RPF-4Z, and an augmented version that includes extra diffuse functions, aug-RPF-4Z, for all the s- and p-block elements. The relativistic adapted Gaussian basis sets (RAGBSs), which are free of variational prolapse, were used as the starting primitive sets. Exponents of correlating/polarization functions were taken from a polynomial version of the generator coordinate Dirac-Fock (p-GCDF) method, in which the previously optimized RAGBS parameters are applied. By using this procedure we aimed to reduce the computational demand of these sets in comparison with fully optimized ones. The effect of these basis set increments on the correlation energy was evaluated by atomic multireference configuration interaction calculations with single and double excitations out of the valence shell. Finally, atomic and molecular calculations of fundamental properties (bond lengths, vibrational frequencies, dipole moments, and electron affinities) corroborate the quadruple-ζ quality of these new sets, which are also about half as time-consuming as the corresponding Dyall v4z sets. These (aug-)RPF-4Z sets are available in ready-to-use format as Supporting Information files and can also be found at http://basis-sets.iqsc.usp.br/ . PMID:26588525
Parameterization of Model Validating Sets for Uncertainty Bound Optimizations. Revised
NASA Technical Reports Server (NTRS)
Lim, K. B.; Giesy, D. P.
2000-01-01
Given measurement data, a nominal model and a linear fractional transformation uncertainty structure with an allowance on unknown but bounded exogenous disturbances, easily computable tests for the existence of a model validating uncertainty set are given. Under mild conditions, these tests are necessary and sufficient for the case of complex, nonrepeated, block-diagonal structure. For the more general case which includes repeated and/or real scalar uncertainties, the tests are only necessary but become sufficient if a collinearity condition is also satisfied. With the satisfaction of these tests, it is shown that a parameterization of all model validating sets of plant models is possible. The new parameterization is used as a basis for a systematic way to construct or perform uncertainty tradeoff with model validating uncertainty sets which have specific linear fractional transformation structure for use in robust control design and analysis. An illustrative example which includes a comparison of candidate model validating sets is given.
Finite set control transcription for optimal control applications
NASA Astrophysics Data System (ADS)
Stanton, Stuart Andrew
An enhanced optimization method rooted in direct collocation is formulated to treat the finite set optimal control problem. This is motivated by applications in which a hybrid dynamical system is subject to ordinary differential continuity constraints, but the control variables are contained within finite spaces. The resulting solutions display control discontinuities as variables switch from one feasible value to another. Solutions derived are characterized as optimal switching schedules between feasible control values. The methodology allows control switches to be determined over a continuous spectrum, overcoming many of the limitations associated with discretized solutions. Implementation details are presented, and several applications demonstrate the method's utility and capability. Simple applications highlight the effectiveness of the methodology, while complicated dynamic systems showcase its relevance. A key example considers the challenges associated with libration point formations. Extensions are proposed for broader classes of hybrid systems.
Relativistic correlating basis sets for lanthanide atoms from Ce to Lu.
Sekiya, Masahiro; Noro, Takeshi; Miyoshi, Eisaku; Osanai, You; Koga, Toshikatsu
2006-03-01
Contracted Gaussian-type function (CGTF) sets for the description of the 4f subshell correlation and of the 6s and 5d subshell correlation are developed for lanthanide atoms from Ce to Yb. Also prepared are basis sets for the 5d orbitals, which are vacant in the ground states of most lanthanide atoms but are essential in molecular environments. In addition, correlating CGTF sets for the 4f subshell correlation are supplemented for the Lu atom. A segmented contraction scheme is employed for their compactness and efficiency. Contraction coefficients and exponents are determined by minimizing the deviation from accurate natural orbitals generated from configuration interaction calculations that include relativistic effects through the third-order Douglas-Kroll approximation. All-electron and model core potential calculations with the present correlating sets are performed on the ground state of the diatomic CeO molecule. The calculated spectroscopic constants are in good agreement with experimental values. PMID:16419148
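A contracted Gaussian-type function of the kind these sets are built from is simply a fixed linear combination of normalized primitives. A sketch for an s-type CGTF with invented exponents and contraction coefficients (not values from these lanthanide sets):

```python
import math

def contracted_gaussian(r, exponents, coefficients):
    """s-type contracted GTF: chi(r) = sum_i c_i N_i exp(-alpha_i r^2),
    with each primitive normalized as N_i = (2 alpha_i / pi)^(3/4)."""
    return sum(c * (2.0 * a / math.pi) ** 0.75 * math.exp(-a * r * r)
               for a, c in zip(exponents, coefficients))

# Invented segmented contraction: three primitives enter one contraction only,
# which is what keeps segmented schemes compact and cheap to evaluate.
alphas = [38.42, 5.778, 1.242]
coeffs = [0.0238, 0.1546, 0.4696]
for r in (0.0, 0.5, 1.5):
    print(r, round(contracted_gaussian(r, alphas, coeffs), 6))
```

In a segmented scheme such as the one described above, the contraction coefficients and exponents are the quantities fitted against accurate natural orbitals.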
Method and Basis Set Analysis of Oxorhenium(V) Complexes for Theoretical Calculations
Demoin, Dustin Wayne; Li, Yawen; Jurisson, Silvia S.; Deakyne, Carol A.
2012-01-01
A variety of method and basis set combinations has been evaluated for monooxorhenium(V) complexes with N, O, P, S, Cl, and Se donor atoms. The geometries and energies obtained are compared to both high-level computations and literature structures. These calculations show that the PBE0 method outperforms the B3LYP method with respect to both structure and energetics. The combination of 6-31G** basis set on the nonmetal atoms and LANL2TZ effective core potential on the rhenium center gives reliable equilibrium structures with minimal computational resources for both model and literature compounds. Single-point energy calculations at the PBE0/LANL2TZ,6-311+G* level of theory are recommended for energetics. PMID:23087847
Choi, Sunghwan; Hong, Kwangwoo; Kim, Jaewook; Kim, Woo Youn
2015-03-07
We developed a self-consistent field program based on Kohn-Sham density functional theory using Lagrange-sinc functions as a basis set and examined its numerical accuracy for atoms and molecules through comparison with the results of Gaussian basis sets. The result of the Kohn-Sham inversion formula from the Lagrange-sinc basis set indicates that the pseudopotential method is essential for cost-effective calculations. The Lagrange-sinc basis set shows faster convergence of the kinetic and correlation energies of benzene as its size increases than the finite difference method does, though both share the same uniform grid. Using a scaling factor smaller than or equal to 0.226 bohr and pseudopotentials with nonlinear core correction, its accuracy for the atomization energies of the G2-1 set is comparable to all-electron complete basis set limits (mean absolute deviation ≤1 kcal/mol). The same basis set also shows small mean absolute deviations in the ionization energies, electron affinities, and static polarizabilities of atoms in the G2-1 set. In particular, the Lagrange-sinc basis set shows high accuracy with rapid convergence in describing density or orbital changes by an external electric field. Moreover, the Lagrange-sinc basis set can readily improve its accuracy toward a complete basis set limit by simply decreasing the scaling factor regardless of systems.
Relativistic correlating basis sets for actinide atoms from 90Th to 103Lr.
Noro, Takeshi; Sekiya, Masahiro; Osanai, You; Koga, Toshikatsu; Matsuyama, Hisashi
2007-12-01
For 14 actinide atoms from (90)Th to (103)Lr, contracted Gaussian-type function sets are developed for the description of correlations of the 5f, 6d, and 7s electrons. Basis sets for the 6d orbitals are also prepared, since the orbitals are important in molecular environments despite their vacancy in the ground state of some actinides. A segmented contraction scheme is employed for the compactness and efficiency. Contraction coefficients and exponents are so determined as to minimize the deviation from accurate natural orbitals of the lowest term arising from the 5f(n-1)6d(1)7s(2) configuration. The spin-free relativistic effects are considered through the third-order Douglas-Kroll approximation. To test the present correlating sets, all-electron calculations are performed on the ground state of (90)ThO molecule. The calculated spectroscopic constants are in excellent agreement with experimental values.
XFEM schemes for level set based structural optimization
NASA Astrophysics Data System (ADS)
Li, Li; Wang, Michael Yu; Wei, Peng
2012-12-01
In this paper, several extended finite element method (XFEM) schemes for level-set-based structural optimization are proposed. First, two-dimensional (2D) and three-dimensional (3D) XFEM schemes with a partition integral method are developed, and numerical examples are employed to evaluate their accuracy; these indicate that accurate analysis results can be obtained on the structural boundary. Furthermore, methods for improving the computational accuracy and efficiency of XFEM are studied, including an XFEM integral scheme without quadrature sub-cells and a higher-order element XFEM scheme. Numerical examples show that the XFEM scheme without quadrature sub-cells can yield similar structural-analysis accuracy while markedly reducing the time cost, and that higher-order XFEM elements can improve the computational accuracy of the structural analysis in the boundary elements, at increased time cost. Therefore, the balance between finite-element system scale and element order needs to be considered. Finally, the reliability and advantages of the proposed XFEM schemes are illustrated with several 2D and 3D mean compliance minimization examples that are widely used in the recent literature on structural topology optimization. All numerical results demonstrate that the proposed XFEM is a promising structural analysis approach for structural optimization with the level set method.
Ghost transmission: How large basis sets can make electron transport calculations worse
Herrmann, Carmen; Solomon, Gemma C.; Subotnik, Joseph E.; Mujica, Vladimiro; Ratner, Mark A.
2010-01-01
The Landauer approach has proven to be an invaluable tool for calculating the electron transport properties of single molecules, especially when combined with a nonequilibrium Green’s function approach and Kohn–Sham density functional theory. However, when using large nonorthogonal atom-centered basis sets, such as those common in quantum chemistry, one can find erroneous results if the Landauer approach is applied blindly. In fact, basis sets of triple-zeta quality or higher sometimes result in an artificially high transmission and possibly even qualitatively wrong conclusions regarding chemical trends. In these cases, transport persists when molecular atoms are replaced by basis functions alone (“ghost atoms”). The occurrence of such ghost transmission is correlated with low-energy virtual molecular orbitals of the central subsystem and may be interpreted as a biased and thus inaccurate description of vacuum transmission. An approximate practical correction scheme is to calculate the ghost transmission and subtract it from the full transmission. As a further consequence of this study, it is recommended that sensitive molecules be used for parameter studies, in particular those whose transmission functions show antiresonance features such as benzene-based systems connected to the electrodes in meta positions and other low-conducting systems such as alkanes and silanes.
Variational Trajectory Optimization Tool Set: Technical description and user's manual
NASA Technical Reports Server (NTRS)
Bless, Robert R.; Queen, Eric M.; Cavanaugh, Michael D.; Wetzel, Todd A.; Moerder, Daniel D.
1993-01-01
The algorithms that comprise the Variational Trajectory Optimization Tool Set (VTOTS) package are briefly described. The VTOTS is a software package for solving nonlinear constrained optimal control problems from a wide range of engineering and scientific disciplines. The VTOTS package was specifically designed to minimize the amount of user programming; in fact, for problems that may be expressed in terms of analytical functions, the user needs only to define the problem in terms of symbolic variables. This version of the VTOTS does not support tabular data; thus, problems must be expressed in terms of analytical functions. The VTOTS package consists of two methods for solving nonlinear optimal control problems: a time-domain finite-element algorithm and a multiple shooting algorithm. These two algorithms, under the VTOTS package, may be run independently or jointly. The finite-element algorithm generates approximate solutions, whereas the shooting algorithm provides a more accurate solution to the optimization problem. A user's manual, some examples with results, and a brief description of the individual subroutines are included.
Computation of two-electron screened Coulomb potential integrals in Hylleraas basis sets
NASA Astrophysics Data System (ADS)
Jiao, Li Guang; Ho, Yew Kam
2015-03-01
The Gegenbauer expansion and Taylor expansion methods are developed to accurately and efficiently calculate the two-electron screened Coulomb potential integrals in Hylleraas basis sets. The combination of these two methods covers the entire parameter space of the integrals, including arbitrary total angular momenta, two-electron configurations, powers of inter-electronic coordinate, and complex screening parameters. Numerical examples are given and comparisons with other computational methods in some restricted situations are made. The present methods can be easily applied to calculate the bound and resonant states of two-electron atoms or exotic three-body systems embedded in the screening environment by using the Hylleraas or Hylleraas-CI basis functions.
An Optimal Set of Flesh Points on Tongue and Lips for Speech-Movement Classification
Samal, Ashok; Rong, Panying; Green, Jordan R.
2016-01-01
Purpose: The authors sought to determine an optimal set of flesh points on the tongue and lips for classifying speech movements. Method: The authors used electromagnetic articulographs (Carstens AG500 and NDI Wave) to record tongue and lip movements from 13 healthy talkers who articulated 8 vowels, 11 consonants, a phonetically balanced set of words, and a set of short phrases during the recording. We used a machine-learning classifier (support-vector machine) to classify the speech stimuli on the basis of articulatory movements. We then compared classification accuracies of the flesh-point combinations to determine an optimal set of sensors. Results: When data from the 4 sensors (T1: the vicinity between the tongue tip and tongue blade; T4: the tongue-body back; UL: the upper lip; and LL: the lower lip) were combined, phoneme and word classifications were most accurate and were comparable with the full set (including T2: the tongue-body front; and T3: the tongue-body front). Conclusion: We identified a 4-sensor set (T1, T4, UL, LL) that yielded a classification accuracy (91%–95%) equivalent to that using all 6 sensors. These findings provide an empirical basis for selecting sensors and their locations for scientific and emerging clinical applications that incorporate articulatory movements. PMID:26564030
Grimme, Stefan; Brandenburg, Jan Gerit; Bannwarth, Christoph; Hansen, Andreas
2015-08-01
A density functional theory (DFT) based composite electronic structure approach is proposed to efficiently compute structures and interaction energies in large chemical systems. It is based on the well-known and numerically robust Perdew-Burke-Ernzerhof (PBE) generalized-gradient-approximation in a modified global hybrid functional with a relatively large amount of non-local Fock-exchange. The orbitals are expanded in Ahlrichs-type valence-double zeta atomic orbital (AO) Gaussian basis sets, which are available for many elements. In order to correct for the basis set superposition error (BSSE) and to account for the important long-range London dispersion effects, our well-established atom-pairwise potentials are used. In the design of the new method, particular attention has been paid to an accurate description of structural parameters in various covalent and non-covalent bonding situations as well as in periodic systems. Together with the recently proposed three-fold corrected (3c) Hartree-Fock method, the new composite scheme (termed PBEh-3c) represents the next member in a hierarchy of "low-cost" electronic structure approaches. They are mainly free of BSSE and account for most interactions in a physically sound and asymptotically correct manner. PBEh-3c yields good results for thermochemical properties in the huge GMTKN30 energy database. Furthermore, the method shows excellent performance for non-covalent interaction energies in small and large complexes. For evaluating its performance on equilibrium structures, a new compilation of standard test sets is suggested. These consist of small (light) molecules, partially flexible, medium-sized organic molecules, molecules comprising heavy main group elements, larger systems with long bonds, 3d-transition metal systems, non-covalently bound complexes (S22 and S66×8 sets), and peptide conformations. For these sets, overall deviations from accurate reference data are smaller than for various other tested DFT methods.
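The atom-pairwise dispersion potentials mentioned above have the generic form of damped -C6/R^6 sums. A schematic sketch with a zero-damping-style switching function and invented parameters (not the actual dispersion parameterization used in PBEh-3c):

```python
def pairwise_dispersion(pairs, s6=1.0, alpha=14.0):
    """Schematic atom-pairwise London dispersion correction:
    E_disp = -s6 * sum_{A<B} f(R_AB) * C6_AB / R_AB^6, with the
    zero-damping-style switch f(R) = 1 / (1 + 6 (R0/R)^alpha) turning
    each term off at short range, where the density functional already
    describes the interaction.  pairs: (R_AB, C6_AB, R0_AB) tuples in
    consistent atomic units (invented values below)."""
    energy = 0.0
    for r, c6, r0 in pairs:
        f = 1.0 / (1.0 + 6.0 * (r0 / r) ** alpha)
        energy -= s6 * f * c6 / r ** 6
    return energy

# Hypothetical pair list (distance, C6 coefficient, cutoff radius):
pairs = [(5.6, 24.3, 2.9), (7.1, 24.3, 2.9), (6.3, 12.8, 2.7)]
print(pairwise_dispersion(pairs))  # small negative (attractive) energy
```

Because the correction is a simple sum over atom pairs, its cost is negligible next to the SCF itself, which is what makes such "3c"-style composite schemes cheap.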
NASA Astrophysics Data System (ADS)
Grimme, Stefan; Brandenburg, Jan Gerit; Bannwarth, Christoph; Hansen, Andreas
2015-08-01
A density functional theory (DFT) based composite electronic structure approach is proposed to efficiently compute structures and interaction energies in large chemical systems. It is based on the well-known and numerically robust Perdew-Burke-Ernzerhoff (PBE) generalized-gradient-approximation in a modified global hybrid functional with a relatively large amount of non-local Fock-exchange. The orbitals are expanded in Ahlrichs-type valence-double zeta atomic orbital (AO) Gaussian basis sets, which are available for many elements. In order to correct for the basis set superposition error (BSSE) and to account for the important long-range London dispersion effects, our well-established atom-pairwise potentials are used. In the design of the new method, particular attention has been paid to an accurate description of structural parameters in various covalent and non-covalent bonding situations as well as in periodic systems. Together with the recently proposed three-fold corrected (3c) Hartree-Fock method, the new composite scheme (termed PBEh-3c) represents the next member in a hierarchy of "low-cost" electronic structure approaches. They are mainly free of BSSE and account for most interactions in a physically sound and asymptotically correct manner. PBEh-3c yields good results for thermochemical properties in the huge GMTKN30 energy database. Furthermore, the method shows excellent performance for non-covalent interaction energies in small and large complexes. For evaluating its performance on equilibrium structures, a new compilation of standard test sets is suggested. These consist of small (light) molecules, partially flexible, medium-sized organic molecules, molecules comprising heavy main group elements, larger systems with long bonds, 3d-transition metal systems, non-covalently bound complexes (S22 and S66×8 sets), and peptide conformations. For these sets, overall deviations from accurate reference data are smaller than for various other tested DFT methods
Grimme, Stefan Brandenburg, Jan Gerit; Bannwarth, Christoph; Hansen, Andreas
2015-08-07
A density functional theory (DFT) based composite electronic structure approach is proposed to efficiently compute structures and interaction energies in large chemical systems. It is based on the well-known and numerically robust Perdew-Burke-Ernzerhof (PBE) generalized-gradient approximation in a modified global hybrid functional with a relatively large amount of non-local Fock exchange. The orbitals are expanded in Ahlrichs-type valence double-zeta atomic orbital (AO) Gaussian basis sets, which are available for many elements. In order to correct for the basis set superposition error (BSSE) and to account for the important long-range London dispersion effects, our well-established atom-pairwise potentials are used. In the design of the new method, particular attention has been paid to an accurate description of structural parameters in various covalent and non-covalent bonding situations as well as in periodic systems. Together with the recently proposed three-fold corrected (3c) Hartree-Fock method, the new composite scheme (termed PBEh-3c) represents the next member in a hierarchy of “low-cost” electronic structure approaches. They are mainly free of BSSE and account for most interactions in a physically sound and asymptotically correct manner. PBEh-3c yields good results for thermochemical properties in the huge GMTKN30 energy database. Furthermore, the method shows excellent performance for non-covalent interaction energies in small and large complexes. For evaluating its performance on equilibrium structures, a new compilation of standard test sets is suggested. These consist of small (light) molecules, partially flexible, medium-sized organic molecules, molecules comprising heavy main group elements, larger systems with long bonds, 3d-transition metal systems, non-covalently bound complexes (S22 and S66×8 sets), and peptide conformations. For these sets, overall deviations from accurate reference data are smaller than for various other tested DFT methods.
Asynchronous parallel generating set search for linearly-constrained optimization.
Lewis, Robert Michael; Griffin, Joshua D.; Kolda, Tamara Gibson
2006-08-01
Generating set search (GSS) is a family of direct search methods that encompasses generalized pattern search and related methods. We describe an algorithm for asynchronous linearly-constrained GSS, which has some complexities that make it different from both the asynchronous bound-constrained case as well as the synchronous linearly-constrained case. The algorithm has been implemented in the APPSPACK software framework and we present results from an extensive numerical study using CUTEr test problems. We discuss the results, both positive and negative, and conclude that GSS is a reliable method for solving small- to medium-sized linearly-constrained optimization problems without derivatives.
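The GSS family described above can be illustrated by its simplest member, compass search: poll along the positive and negative coordinate directions, and halve the step when no poll point improves. This is a generic sketch, not the APPSPACK implementation; it handles neither constraints nor asynchrony.

```python
def compass_search(f, x0, step=1.0, tol=1e-6, max_iter=10000):
    """Minimal generating-set (compass) search: poll along the +/- coordinate
    directions and halve the step when no poll point improves."""
    x = list(x0)
    fx = f(x)
    it = 0
    while step > tol and it < max_iter:
        improved = False
        for i in range(len(x)):
            for s in (step, -step):
                y = list(x)
                y[i] += s
                fy = f(y)
                if fy < fx:  # accept the first improving poll point
                    x, fx = y, fy
                    improved = True
                    break
            if improved:
                break
        if not improved:
            step *= 0.5  # contraction: no poll direction improved
        it += 1
    return x, fx

# Minimize a smooth quadratic without any derivative information.
xmin, fmin = compass_search(lambda v: (v[0] - 1.0) ** 2 + (v[1] + 2.0) ** 2,
                            [0.0, 0.0])
```

The same polling skeleton generalizes to arbitrary generating sets, and to linear constraints by restricting the poll directions to the feasible cone.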
NASA Astrophysics Data System (ADS)
Woody, M. C.; Arunachalam, S.; Binkowski, F.; West, J.; Jathar, S.; Robinson, A. L.
2012-12-01
Regional air quality studies aimed at quantifying the impacts of aviation emissions to PM2.5 have generally predicted relatively low contributions from organic aerosols. However, recent sampling and smog chamber experiments have suggested that organic aerosols comprise a significant fraction of total PM2.5 formed from aircraft emissions. In this study, the results of aircraft-specific sampling and smog chamber experiments are incorporated into a regional chemical transport model with the volatility basis set and used to predict organic aerosol contributions from aircraft emissions. Contributions of aircraft emissions to primary organic aerosols (POA), secondary organic aerosols (SOA) formed from traditional precursors (e.g. aromatics and long-chain alkanes), and non-traditional SOA formed from unidentified precursors previously unaccounted for in air quality models are modeled using the volatility basis set approach in CMAQ v4.7.1. The model includes oxidation reactions of traditional SOA (both biogenic and anthropogenic) and non-traditional SOA precursors (specific to aircraft emissions) with OH to produce products of lower volatility. Non-traditional SOA yields and precursor emission estimates for idle and non-idle aircraft activities are based on sampling and smog chamber experiments. This model predicts the organic aerosol and total PM2.5 concentrations formed from aircraft emissions due to landing and takeoff activities at the Hartsfield-Jackson International Airport in Atlanta during January and July, 2002. Overall model results are compared against monitoring data in the region to determine the impacts of using the volatility basis set on CMAQ model performance.
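The volatility basis set treats organic mass in logarithmically spaced saturation-concentration bins that partition between gas and particle phases. Below is a minimal sketch of the standard equilibrium partitioning calculation, with illustrative bin values; it is not the CMAQ implementation.

```python
def vbs_partition(c_tot, c_star, c_oa_init=1.0, tol=1e-10, max_iter=1000):
    """Equilibrium absorptive partitioning over volatility bins (ug/m3).
    c_tot[i]: total organic mass in bin i; c_star[i]: its saturation
    concentration.  Particle-phase fraction: xi_i = 1 / (1 + C*_i / C_OA),
    with C_OA = sum_i xi_i * c_tot[i] solved by fixed-point iteration."""
    c_oa = c_oa_init
    for _ in range(max_iter):
        xi = [1.0 / (1.0 + cs / c_oa) for cs in c_star]
        c_oa_new = sum(x * c for x, c in zip(xi, c_tot))
        converged = abs(c_oa_new - c_oa) < tol
        c_oa = c_oa_new
        if converged:
            break
    return [x * c for x, c in zip(xi, c_tot)], c_oa

# Four bins spanning C* = 0.1 ... 100 ug/m3 (illustrative masses).
particle, c_oa = vbs_partition([1.0, 2.0, 4.0, 8.0], [0.1, 1.0, 10.0, 100.0])
```

Oxidation ("aging") reactions like those in the abstract are modeled by moving mass to lower-C* bins between partitioning steps.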
Optimal Sets of Projections of High-Dimensional Data.
Lehmann, Dirk J; Theisel, Holger
2016-01-01
Finding good projections of n-dimensional datasets into a 2D visualization domain is one of the most important problems in Information Visualization. Users are interested in getting maximal insight into the data by exploring a minimal number of projections. However, if the number is too small or improper projections are used, then important data patterns might be overlooked. We propose a data-driven approach to find minimal sets of projections that uniquely show certain data patterns. For this we introduce a dissimilarity measure of data projections that discards affine transformations of projections and prevents repetitions of the same data patterns. Based on this, we provide complete data tours of at most n/2 projections. Furthermore, we propose optimal paths of projection matrices for an interactive data exploration. We illustrate our technique with a set of state-of-the-art real high-dimensional benchmark datasets.
Imputing gene expression from optimally reduced probe sets
Donner, Yoni; Feng, Ting; Benoist, Christophe; Koller, Daphne
2012-01-01
Measuring complete gene expression profiles for a large number of experiments is costly. We propose an approach in which a small subset of probes is selected based on a preliminary set of full expression profiles. In subsequent experiments, only the subset is measured, and the missing values are imputed. We develop several algorithms to simultaneously select probes and impute missing values, and demonstrate that these probe selection for imputation (PSI) algorithms can successfully reconstruct missing gene expression values in a wide variety of applications, as evaluated using multiple metrics of biological importance. We analyze the performance of PSI methods under varying conditions, provide guidelines for choosing the optimal method based on the experimental setting, and indicate how to estimate imputation accuracy. Finally, we apply our approach to a large-scale study of immune system variation. PMID:23064520
Correlation consistent basis sets for lanthanides: The atoms La-Lu
NASA Astrophysics Data System (ADS)
Lu, Qing; Peterson, Kirk A.
2016-08-01
Using the 3rd-order Douglas-Kroll-Hess (DKH3) Hamiltonian, all-electron correlation consistent basis sets of double-, triple-, and quadruple-zeta quality have been developed for the lanthanide elements La through Lu. Basis sets designed for the recovery of valence correlation (defined here as 4f5s5p5d6s), cc-pVnZ-DK3, and outer-core correlation (valence + 4s4p4d), cc-pwCVnZ-DK3, are reported (n = D, T, and Q). Systematic convergence of both Hartree-Fock and correlation energies towards their respective complete basis set (CBS) limits are observed. Benchmark calculations of the first three ionization potentials (IPs) of La through Lu are reported at the DKH3 coupled cluster singles and doubles with perturbative triples, CCSD(T), level of theory, including effects of correlation down through the 4s electrons. Spin-orbit coupling is treated at the 2-component HF level. After extrapolation to the CBS limit, the average errors with respect to experiment were just 0.52, 1.14, and 4.24 kcal/mol for the 1st, 2nd, and 3rd IPs, respectively, compared to the average experimental uncertainties of 0.03, 1.78, and 2.65 kcal/mol, respectively. The new basis sets are also used in CCSD(T) benchmark calculations of the equilibrium geometries, atomization energies, and heats of formation for Gd2, GdF, and GdF3. Except for the equilibrium geometry and harmonic frequency of GdF, which are accurately known from experiment, all other calculated quantities represent significant improvements compared to the existing experimental quantities. With estimated uncertainties of about ±3 kcal/mol, the 0 K atomization energies (298 K heats of formation) are calculated to be (all in kcal/mol): 33.2 (160.1) for Gd2, 151.7 (-36.6) for GdF, and 447.1 (-295.2) for GdF3.
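The CBS extrapolations mentioned here are typically two-point 1/n³ fits of the correlation energy against the basis-set cardinal number n. The sketch below uses the common Helgaker-style variant with illustrative energies, not values from the paper.

```python
def cbs_extrapolate(e_small, e_large, n_small, n_large):
    """Two-point 1/n^3 extrapolation: assuming E(n) = E_CBS + A / n^3,
    eliminate A between the two cardinal numbers n_small and n_large."""
    a, b = n_large ** 3, n_small ** 3
    return (a * e_large - b * e_small) / (a - b)

# Hypothetical triple-zeta (n=3) and quadruple-zeta (n=4) correlation
# energies in hartree; the CBS estimate lies below both.
e_cbs = cbs_extrapolate(-0.250, -0.260, 3, 4)
```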
NASA Astrophysics Data System (ADS)
Martins, L. S. C.; Jorge, F. E.; Machado, S. F.
2015-11-01
An all-electron contracted Gaussian basis set of triple zeta valence quality plus polarisation functions (TZP) for the elements Cs, Ba, La, and from Hf to Rn is presented. A Douglas-Kroll-Hess (DKH) basis set for the fifth-row elements is also reported. We have recontracted the original TZP basis set, i.e., the values of the contraction coefficients are re-optimised using the second-order DKH Hamiltonian. By addition of diffuse functions (s, p, d, f, and g symmetries), which are optimised for the anion ground states, an augmented TZP basis set is constructed. Using the B3LYP hybrid functional, the performance of the TZP-DKH basis set is assessed for predicting atomic ionisation energy as well as spectroscopic constants of some compounds. Despite its compact size, this set demonstrates consistent, efficient, and reliable performance and will be especially useful in calculations of molecular properties that require explicit treatment of the core electrons.
Optimal sets of measurement data for profile reconstruction in scatterometry
NASA Astrophysics Data System (ADS)
Gross, H.; Rathsfeld, A.; Scholze, F.; Bär, M.; Dersch, U.
2007-06-01
We discuss numerical algorithms for the determination of periodic surface structures from light diffraction patterns. With decreasing feature sizes of lithography masks, increasing demands on metrology techniques arise. Scatterometry as a non-imaging indirect optical method is applied to simple periodic line structures in order to determine parameters like side-wall angles, heights, top and bottom widths and to evaluate the quality of the manufacturing process. The numerical simulation of diffraction is based on the finite element solution of the Helmholtz equation. The inverse problem seeks to reconstruct the grating geometry from measured diffraction patterns. Restricting the class of gratings and the set of measurements, this inverse problem can be reformulated as a non-linear operator equation in Euclidean spaces. The operator maps the grating parameters to special efficiencies of diffracted plane wave modes. We employ a Gauss-Newton type iterative method to solve this operator equation. The reconstruction properties and the convergence of the algorithm, however, are controlled by the local conditioning of the non-linear mapping. To improve reconstruction and convergence, we determine optimal sets of efficiencies optimizing the condition numbers of the corresponding Jacobians. Numerical examples are presented for "chrome on glass" masks under the wavelength 632.8 nm and for EUV masks.
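A Gauss-Newton iteration of the kind used here solves the normal equations JᵀJ Δx = −Jᵀr at each step. The scatterometry operator itself requires the finite element forward solver, so the sketch below uses a toy one-parameter exponential model instead; names and data are illustrative only.

```python
import math

def gauss_newton_1d(ts, ys, k0, tol=1e-12, max_iter=100):
    """Gauss-Newton for a one-parameter least-squares fit of y = exp(k*t):
    each step solves (J^T J) dk = -J^T r, the scalar normal equation."""
    k = k0
    for _ in range(max_iter):
        r = [math.exp(k * t) - y for t, y in zip(ts, ys)]   # residuals
        j = [t * math.exp(k * t) for t in ts]               # Jacobian column
        jtj = sum(ji * ji for ji in j)
        jtr = sum(ji * ri for ji, ri in zip(j, r))
        dk = -jtr / jtj
        k += dk
        if abs(dk) < tol:
            break
    return k

# Noise-free data generated with k = 0.5; the iteration should recover it.
ts = [0.0, 1.0, 2.0, 3.0]
ys = [math.exp(0.5 * t) for t in ts]
k_fit = gauss_newton_1d(ts, ys, k0=0.2)
```

The conditioning issue the abstract raises corresponds to jtj approaching singularity; in the multi-parameter case, the condition number of JᵀJ governs how measurement noise amplifies into parameter error.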
Chacon-Madrid, Heber J; Murphy, Benjamin N; Pandis, Spyros N; Donahue, Neil M
2012-10-16
We use a two-dimensional volatility basis set (2D-VBS) box model to simulate secondary organic aerosol (SOA) mass yields of linear oxygenated molecules: n-tridecanal, 2- and 7-tridecanone, 2- and 7-tridecanol, and n-pentadecane. A hybrid model with explicit, a priori treatment of the first-generation products for each precursor molecule, followed by a generic 2D-VBS mechanism for later-generation chemistry, results in excellent model-measurement agreement. This strongly confirms that the 2D-VBS mechanism is a predictive tool for SOA modeling but also suggests that certain important first-generation products for major primary SOA precursors should be treated explicitly for optimal SOA predictions.
Ideal basis sets for the Dirac Coulomb problem: Eigenvalue bounds and convergence proofs
NASA Astrophysics Data System (ADS)
Munger, Charles Thomas
2007-02-01
Basis sets are developed for the Dirac Coulomb Hamiltonian for which the resulting numerical eigenvalues and eigenfunctions are proved mathematically to have all the following properties: to converge to the exact eigenfunctions and eigenvalues, with necessary and sufficient conditions for convergence being known; to have neither missing nor spurious states; to maintain the Coulomb symmetries between eigenvalues and eigenfunctions of the opposite sign of the Dirac quantum number κ; to have positive eigenvalues bounded from below by the corresponding exact eigenvalues; and to have negative eigenvalues bounded from above by -mc2. All these properties are maintained using functions that may be analytic or nonanalytic (e.g., Slater functions or splines); that match the noninteger power dependence of the exact eigenfunctions at the origin, or that do not; or that extend to +∞ as do the exact eigenfunctions, or that vanish outside a cavity of large radius R (convergence then occurring after a second limit, R →∞). The same basis sets can be used without modification for potentials other than the Coulomb, such as the potential of a finite distribution of nuclear charge, or a screened Coulomb potential; the error in a numerical eigenvalue is shown to be second order in the departure of the potential from the Coulomb. In certain bases of Sturmian functions the numerical eigenvalues can be related to the zeros of the Pollaczek polynomials.
Many-body calculations of molecular electric polarizabilities in asymptotically complete basis sets
NASA Astrophysics Data System (ADS)
Monten, Ruben; Hajgató, Balázs; Deleuze, Michael S.
2011-10-01
The static dipole polarizabilities of Ne, CO, N2, F2, HF, H2O, HCN, and C2H2 (acetylene) have been determined close to the Full-CI limit along with an asymptotically complete basis set (CBS), according to the principles of a Focal Point Analysis. For this purpose the results of Finite Field calculations up to the level of Coupled Cluster theory including Single, Double, Triple, Quadruple and perturbative Pentuple excitations [CCSDTQ(P)] were used, in conjunction with suited extrapolations of energies obtained using augmented and doubly-augmented Dunning's correlation consistent polarized valence basis sets of improving quality. The polarizability characteristics of C2H4 (ethylene) and C2H6 (ethane) have been determined on the same grounds at the CCSDTQ level in the CBS limit. Comparison is made with results obtained using lower levels in electronic correlation, or taking into account the relaxation of the molecular structure due to an adiabatic polarization process. Vibrational corrections to electronic polarizabilities have been empirically estimated according to Born-Oppenheimer Molecular Dynamical simulations employing Density Functional Theory. Confrontation with experiment ultimately indicates relative accuracies of the order of 1 to 2%.
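Finite Field calculations of the kind cited obtain the static polarizability as the negative second derivative of the energy with respect to the applied field, usually via a central difference. A sketch with a hypothetical model energy (the constants are illustrative, not from the paper):

```python
def finite_field_alpha(e_plus, e_zero, e_minus, field):
    """Static polarizability from a central second difference:
    alpha = -(E(+F) - 2 E(0) + E(-F)) / F^2."""
    return -(e_plus - 2.0 * e_zero + e_minus) / field ** 2

def model_energy(f, e0=-128.5, alpha=2.66):
    """Hypothetical field-dependent energy E(F) = E0 - alpha * F^2 / 2."""
    return e0 - 0.5 * alpha * f * f

F = 0.002  # finite field strength in atomic units
alpha_num = finite_field_alpha(model_energy(F), model_energy(0.0),
                               model_energy(-F), F)
```

In practice the field must be small enough that hyperpolarizability terms are negligible, yet large enough that the energy differences survive the convergence noise of the underlying calculations.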
Analytic basis set for high-Z atomic QED calculations: Heavy He-like ions
Hylton, D.J.; Snyderman, N.J.
1997-04-01
A relativistic Sturmian analytic basis set representation for the Coulomb-Dirac Green function, previously studied by Zapryagaev, Manakov, and Pal'chikov [Opt. Spectrosc. 52, 248 (1982)], is investigated for application to high-Z atomic QED calculations. This pseudoeigenfunction representation follows from exact identities starting from the Whittaker function representation. It eliminates the radial ordering problem of that representation, and so is particularly useful for numerical calculation of the perturbation theory Feynman diagrams with more than one electron Green function. While the Green function represents discrete bound states, and both positive and negative energy continuum states, the Sturmian (bound-state-like) form for the pseudoeigenfunctions makes it possible to more analytically calculate matrix elements for full photon exchange, reducing numerical problems for high photon frequency. For He-like Fm (Z=100) we calculate the perturbation theory equivalent of the Dirac-Fock-Breit ground-state energy, agreeing well with the Grant code and with the numerical B-spline basis set approach results of Blundell, Mohr, Johnson, and Sapirstein [Phys. Rev. A 48, 2615 (1993)]. Preliminary results on the relativistic and QED correlation are also reported.
Nonlinearly-constrained optimization using asynchronous parallel generating set search.
Griffin, Joshua D.; Kolda, Tamara Gibson
2007-05-01
Many optimization problems in computational science and engineering (CS&E) are characterized by expensive objective and/or constraint function evaluations paired with a lack of derivative information. Direct search methods such as generating set search (GSS) are well understood and efficient for derivative-free optimization of unconstrained and linearly-constrained problems. This paper addresses the more difficult problem of general nonlinear programming where derivatives for objective or constraint functions are unavailable, which is the case for many CS&E applications. We focus on penalty methods that use GSS to solve the linearly-constrained problems, comparing different penalty functions. A classical choice for penalizing constraint violations is ℓ₂², the squared ℓ₂ norm, which has advantages for derivative-based optimization methods. In our numerical tests, however, we show that exact penalty functions based on the ℓ₁, ℓ₂, and ℓ∞ norms converge to good approximate solutions more quickly and thus are attractive alternatives. Unfortunately, exact penalty functions are nondifferentiable and consequently introduce theoretical problems that degrade the final solution accuracy, so we also consider smoothed variants. Smoothed-exact penalty functions are theoretically attractive because they retain the differentiability of the original problem. Numerically, they are a compromise between exact and ℓ₂², i.e., they converge to a good solution somewhat quickly without sacrificing much solution accuracy. Moreover, the smoothing is parameterized and can potentially be adjusted to balance the two considerations. Since many CS&E optimization problems are characterized by expensive function evaluations, reducing the number of function evaluations is paramount, and the results of this paper show that exact and smoothed-exact penalty functions are well-suited to this task.
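The penalty functions being compared are easy to state directly. The sketch below shows the squared ℓ₂ penalty, the exact ℓ₁ and ℓ∞ penalties, and one common smoothing of ℓ₁; the specific smoothed form used in the paper may differ, and the violation values are hypothetical.

```python
def l22_penalty(viol, rho):
    """Classical smooth penalty: rho * sum of squared violations."""
    return rho * sum(v * v for v in viol)

def l1_penalty(viol, rho):
    """Exact penalty (nondifferentiable at zero violation)."""
    return rho * sum(abs(v) for v in viol)

def linf_penalty(viol, rho):
    """Exact penalty on the single worst violation."""
    return rho * max(abs(v) for v in viol)

def smoothed_l1_penalty(viol, rho, eps):
    """One common smoothing of l1: sqrt(v^2 + eps^2) - eps is
    differentiable everywhere and tends to |v| as eps -> 0."""
    return rho * sum((v * v + eps * eps) ** 0.5 - eps for v in viol)

viol = [0.3, -0.1, 0.0]  # hypothetical constraint violations c_i(x)
p22 = l22_penalty(viol, 1.0)
p1 = l1_penalty(viol, 1.0)
pinf = linf_penalty(viol, 1.0)
ps = smoothed_l1_penalty(viol, 1.0, 0.01)
```

Exactness matters because an exact penalty recovers the constrained minimizer at a finite penalty weight rho, whereas ℓ₂² requires rho → ∞ and so can become ill-conditioned.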
Green's function multiple-scattering theory with a truncated basis set: An augmented-KKR formalism
Alam, Aftab; Khan, Suffian N.; Smirnov, A. V.; Nicholson, D. M.; Johnson, Duane D.
2014-11-04
Korringa-Kohn-Rostoker (KKR) Green's function, multiple-scattering theory is an efficient site-centered, electronic-structure technique for addressing an assembly of N scatterers. Wave-functions are expanded in a spherical-wave basis on each scattering center and indexed up to a maximum orbital and azimuthal number L_{max} = (l,m)_{max}, while scattering matrices, which determine spectral properties, are truncated at L_{tr} = (l,m)_{tr} where phase shifts δl>l_{tr} are negligible. Historically, L_{max} is set equal to L_{tr}, which is correct for large enough L_{max} but not computationally expedient; a better procedure retains higher-order (free-electron and single-site) contributions for L_{max} > L_{tr} with δl>l_{tr} set to zero [Zhang and Butler, Phys. Rev. B 46, 7433]. We present a numerically efficient and accurate augmented-KKR Green's function formalism that solves the KKR equations by exact matrix inversion [R^{3} process with rank N(l_{tr} + 1)^{2}] and includes higher-L contributions via linear algebra [R^{2} process with rank N(l_{max} +1)^{2}]. The augmented-KKR approach yields properly normalized wave-functions, numerically cheaper basis-set convergence, and a total charge density and electron count that agrees with Lloyd's formula. We apply our formalism to fcc Cu, bcc Fe and L1_{0} CoPt, and present the numerical results for accuracy and for the convergence of the total energies, Fermi energies, and magnetic moments versus L_{max} for a given L_{tr}.
Nada, R.; Nicholas, J.B.; McCarthy, M.I.; Hess, A.C.
1996-11-15
Silica sodalite is an ideal model system to establish baseline computer requirements of ab initio periodic Hartree-Fock (PHF) calculations of zeolites. In this article, the authors investigate the effect of various basis sets on the structural and electronic properties of bulk silica sodalite. They also study the interaction of He, Ne, and Ar with the sodalite cage. This work shows that basis-set superposition errors (BSSE) in calculations using STO-3G and 6-21G(*) basis sets are as large as the interaction energies, leading to poor confidence in the results. To cure this problem, the authors present high-quality basis sets for Si, O, He, Ne, and Ar, optimized for use with PHF methods, and demonstrate that the new basis set greatly reduces BSSE. The theoretical barriers for transfer of the rare gases between sodalite cages are 5.6, 13.2, and 62.1 kcal/mol for He, Ne, and Ar.
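BSSE is commonly quantified with the Boys-Bernardi counterpoise scheme: each monomer energy is recomputed in the full dimer basis, and the artificial stabilization gained from the partner's ghost functions is the BSSE. A sketch of the bookkeeping, with hypothetical energies that are not from this paper:

```python
def counterpoise_interaction(e_dimer_ab, e_a_in_ab, e_b_in_ab):
    """Counterpoise-corrected interaction energy: monomer energies are
    evaluated in the full dimer basis (monomer + ghost functions), which
    removes the basis-set superposition error from the difference."""
    return e_dimer_ab - e_a_in_ab - e_b_in_ab

def bsse_estimate(e_a_in_ab, e_a_in_a, e_b_in_ab, e_b_in_b):
    """BSSE = artificial stabilization of each monomer by the partner's
    ghost basis (negative when the monomer basis is incomplete)."""
    return (e_a_in_ab - e_a_in_a) + (e_b_in_ab - e_b_in_b)

# Illustrative energies in hartree (hypothetical, not from the paper).
e_ab = -152.1000
e_a_in_ab, e_a_in_a = -76.0450, -76.0430
e_b_in_ab, e_b_in_b = -76.0465, -76.0440
e_int_cp = counterpoise_interaction(e_ab, e_a_in_ab, e_b_in_ab)
bsse = bsse_estimate(e_a_in_ab, e_a_in_a, e_b_in_ab, e_b_in_b)
```

When, as reported here, the BSSE is comparable to the interaction energy itself, the uncorrected binding is unreliable; the cure in the paper is a better-optimized basis rather than the correction alone.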
TiCl, TiH and TiH+ Bond Energies, a Test of a Correlation Consistent Ti Basis Set
NASA Technical Reports Server (NTRS)
Bauschlicher, Charles W., Jr.; Arnold, James O. (Technical Monitor)
1999-01-01
Correlation consistent basis sets are developed for the Ti atom. The polarization functions are optimized for the average of the 3F and 5F states. One series of correlation consistent basis sets is for 3d and 4s correlation, while the second series includes 3s and 3p correlation as well as 3d and 4s correlation. These basis sets are tested using the Ti 3F-5F separation and the dissociation energies of TiCl X4Phi, TiH X4Phi, and TiH(+) X3Phi. The CCSD(T) complete basis set limit values are determined by extrapolation. The Douglas-Kroll approach is used to compute the scalar relativistic effect. Spin-orbit effects are taken from experiment and/or computed at the CASSCF level. The Ti 3F-5F separation is in excellent agreement with experiment, while the TiCl, TiH, and TiH(+) bond energies are in good agreement with experiment. Extrapolation with the valence basis set is consistent with other atoms, while including 3s and 3p correlation appears to make extrapolation.
Analysis of cornea curvature using radial basis functions - Part II: Fitting to data-set.
Griffiths, G W; Płociniczak, Ł; Schiesser, W E
2016-10-01
In Part I we discussed the solution of corneal curvature using a 2D meshless method based on radial basis functions (RBFs). In Part II we use these methods to fit a full nonlinear thin membrane model to a measured data-set in order to generate a topological mathematical description of the cornea. In addition, we show how these results can lead to estimations for corneal radius of curvature and certain physical properties of the cornea; namely, tension and elasticity coefficient. Again all calculations and graphics generation were performed using the R language programming environment. The model describes corneal topology extremely well, and the estimated properties fall well within the expected range of values. The method is straightforward to implement and offers scope for further analysis using more detailed 3D models that include corneal thickness. PMID:27570056
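An RBF interpolant of the kind used in this work takes the form s(x) = Σⱼ wⱼ φ(‖x − xⱼ‖), with the weights obtained from a linear system over the data sites. The paper fits 2D corneal data in R; the sketch below is a self-contained 1D Python analogue with a Gaussian kernel and made-up sample points.

```python
import math

def rbf_fit(centers, values, shape=1.0):
    """Fit a 1D RBF interpolant s(x) = sum_j w_j * phi(|x - x_j|) with a
    Gaussian kernel phi(r) = exp(-(shape*r)^2).  The symmetric system
    A w = y is solved by Gaussian elimination with partial pivoting."""
    n = len(centers)

    def phi(r):
        return math.exp(-(shape * r) ** 2)

    A = [[phi(abs(centers[i] - centers[j])) for j in range(n)] for i in range(n)]
    b = list(values)
    for k in range(n):                      # forward elimination
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    w = [0.0] * n
    for i in reversed(range(n)):            # back substitution
        w[i] = (b[i] - sum(A[i][j] * w[j] for j in range(i + 1, n))) / A[i][i]
    return lambda x: sum(wj * phi(abs(x - cj)) for wj, cj in zip(w, centers))

# Hypothetical 1D "elevation profile"; the interpolant reproduces the data.
xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [0.0, 0.4, 1.0, 0.4, 0.1]
s = rbf_fit(xs, ys)
```

The shape parameter trades accuracy against conditioning of A; production fits (as in the paper's meshless solver) typically use a dedicated linear-algebra library rather than the naive solver shown here.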
NASA Astrophysics Data System (ADS)
Rohmer, Jeremy
2016-04-01
Predicting the temporal evolution of landslides is typically supported by numerical modelling. Dynamic sensitivity analysis aims at assessing the influence of the landslide properties on the time-dependent predictions (e.g., time series of landslide displacements). Yet two major difficulties arise: 1. Global sensitivity analysis requires running the landslide model a large number of times (> 1000), which may become impracticable when the landslide model has a high computation time cost (> several hours); 2. Landslide model outputs are not scalar, but functions of time, i.e. they are n-dimensional vectors with n usually ranging from 100 to 1000. In this article, I explore the use of a basis set expansion, such as principal component analysis, to reduce the output dimensionality to a few components, each of them being interpreted as a dominant mode of variation in the overall structure of the temporal evolution. The computationally intensive calculation of the Sobol' indices for each of these components is then achieved through meta-modelling, i.e. by replacing the landslide model by a "costless-to-evaluate" approximation (e.g., a projection pursuit regression model). The methodology combining "basis set expansion - meta-model - Sobol' indices" is then applied to the La Frasse landslide to investigate the dynamic sensitivity analysis of the surface horizontal displacements to the slip surface properties during the pore pressure changes. I show how to extract information on the sensitivity of each main mode of temporal behaviour using a limited number (a few tens) of long running simulations. In particular, I identify the parameters, which trigger the occurrence of a turning point marking a shift between a regime of low values of landslide displacements and one of high values.
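The basis set expansion step here is a principal component analysis of the ensemble of output time series: each simulation is projected onto a few dominant modes of variation, and sensitivity analysis is then run on the scalar scores. A minimal sketch using power iteration on a toy rank-one ensemble (illustrative data, not the landslide model):

```python
def leading_mode(series, iters=200):
    """Dominant mode of variation (first principal component) of an
    ensemble of time series, by power iteration on X^T X, where the rows
    of X are the mean-centred series."""
    n, t = len(series), len(series[0])
    mean = [sum(s[k] for s in series) / n for k in range(t)]
    X = [[s[k] - mean[k] for k in range(t)] for s in series]
    v = [1.0] * t
    for _ in range(iters):
        proj = [sum(x[k] * v[k] for k in range(t)) for x in X]            # X v
        w = [sum(proj[i] * X[i][k] for i in range(n)) for k in range(t)]  # X^T (X v)
        norm = sum(wk * wk for wk in w) ** 0.5
        v = [wk / norm for wk in w]
    scores = [sum(x[k] * v[k] for k in range(t)) for x in X]
    return mean, v, scores

# Toy ensemble: four scaled copies of one displacement curve, so the
# variation is exactly rank one and each series is mean + score * mode.
base = [0.0, 0.1, 0.3, 0.6, 1.0]
series = [[a * bk for bk in base] for a in (0.5, 1.0, 1.5, 2.0)]
mean, mode, scores = leading_mode(series)
```

In the article's workflow, a meta-model is then trained to predict each score from the slip surface properties, and Sobol' indices are computed on the meta-model instead of the expensive simulator.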
NASA Astrophysics Data System (ADS)
Hsu, Po Jen; Lai, S. K.; Rapallo, Arnaldo
2014-03-01
Improved basis sets for the study of polymer dynamics by means of the diffusion theory, and tests on a melt of cis-1,4-polyisoprene decamers, and a toluene solution of a 71-mer syndiotactic trans-1,2-polypentadiene were presented recently [R. Gaspari and A. Rapallo, J. Chem. Phys. 128, 244109 (2008)]. The proposed hybrid basis approach (HBA) combined two techniques, the long time sorting procedure and the maximum correlation approximation. The HBA takes advantage of the strength of these two techniques, and its basis sets proved to be very effective and computationally convenient in describing both local and global dynamics in cases of flexible synthetic polymers where the repeating unit is a unique type of monomer. The question then arises if the same efficacy continues when the HBA is applied to polymers of different monomers, variable local stiffness along the chain and with longer persistence length, which have different local and global dynamical properties against the above-mentioned systems. Important examples of this kind of molecular chains are the proteins, so that a fragment of the protein transthyretin is chosen as the system of the present study. This peptide corresponds to a sequence that is structured in β-sheets of the protein and is located on the surface of the channel with thyroxin. The protein transthyretin forms amyloid fibrils in vivo, whereas the peptide fragment has been shown [C. P. Jaroniec, C. E. MacPhee, N. S. Astrof, C. M. Dobson, and R. G. Griffin, Proc. Natl. Acad. Sci. U.S.A. 99, 16748 (2002)] to form amyloid fibrils in vitro in extended β-sheet conformations. For these reasons the latter is given considerable attention in the literature and studied also as an isolated fragment in water solution where both experimental and theoretical efforts have indicated the propensity of the system to form β turns or α helices, but is otherwise predominantly unstructured. Differing from previous computational studies that employed implicit
Optimal gate-width setting for passive neutron multiplicity counting
Croft, Stephen; Evans, Louise G; Schear, Melissa A
2010-01-01
When setting up a passive neutron coincidence counter it is natural to ask what coincidence gate settings should be used to optimize the counting precision. If the gate width is too short then signal is lost and the precision is compromised because in a given period only a few coincidence events will be observed. On the other hand, if the gate is too long the signal will be maximized but it will also be compromised by the high level of random pile-up or Accidental coincidence events which must be subtracted. In the case of shift register electronics connected to an assay chamber with an exponential die-away profile, operating in the regime where the Accidentals rate dominates the Reals coincidence rate but where dead-time is not a concern, simple arguments allow one to show that the relative precision on the net Reals rate is minimized when the coincidence gate is set to about 1.2 times the 1/e die-away time of the system. In this work we show that, under the same assumptions, the relative precision on the Triples rate is also at a minimum when the relative precision of the Doubles (or Reals) rate is at a minimum. Although the analysis is straightforward, to our knowledge such a discussion has not been documented in the literature before. Actual measurement systems do not always behave in the ideal way we choose to model them. Fortunately, however, the variation in the relative precision as a function of gate width is rather flat for traditional safeguards counters, and so the performance is somewhat forgiving of the exact choice. The derivation further serves to delineate the important parameters which determine the relative counting precision of the Doubles and Triples rates under the regime considered. To illustrate the similarities and differences we consider the relative standard deviation that might be anticipated for a passive correlation count of an axial section of a spent nuclear fuel assembly under practically achievable conditions.
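Under the idealized assumptions stated above (exponential die-away, Accidentals-dominated background), the relative precision of the Reals rate scales as sqrt(G)/(1 - exp(-G/τ)) for gate width G and 1/e die-away time τ. A short numerical sketch (illustrative only, not taken from the paper) locates the minimum of this expression near 1.26τ, consistent with the "about 1.2 times" rule of thumb:

```python
import math

def reals_rel_precision(x):
    # x = G / tau (gate width in units of the 1/e die-away time).
    # In the Accidentals-dominated regime the Accidentals variance grows
    # like G while the Reals signal saturates as (1 - exp(-G/tau)), so
    # sigma(R)/R is proportional to sqrt(G) / (1 - exp(-G/tau)).
    return math.sqrt(x) / (1.0 - math.exp(-x))

# Scan gate widths from 0.01 to 5 die-away times and locate the minimum.
xs = [0.01 * i for i in range(1, 501)]
best = min(xs, key=reals_rel_precision)
print(f"optimal gate width ~ {best:.2f} die-away times")  # ~1.26
```

The flatness of the curve around the minimum is easy to confirm from the same function, which is the "forgiving" behaviour the abstract notes for traditional safeguards counters.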
Kaprálová-Žďánská, Petra Ruth; Šmydke, Jan; Civiš, Svatopluk
2013-09-14
Recently optimized exponentially tempered Gaussian basis sets [P. R. Kapralova-Zdanska and J. Smydke, J. Chem. Phys. 138, 024105 (2013)] are employed in quantitative simulations of helium absorption cross-sections and two-photon excitation yields of doubly excited resonances. Linearly polarized half-infinite and Gaussian laser pulses at wavelengths 38–58 nm and large intensities up to 100 TW/cm² are considered. Emphasis is placed on convergence of the results with respect to the quality of the Gaussian basis sets (typically limited by the number of partial waves, density, and spatial extent of the basis functions) as well as to the quality of the basis set of field-free states (typically limited by the maximum rotational quantum number and maximum excitation of the lower electron). Particular attention is paid to stability of the results with respect to varying the complex scaling parameter. Moreover, the study of the dynamics is preceded by a thorough check of helium energies and oscillator strengths as they are obtained with the exponentially tempered Gaussian basis sets, which are also compared with as yet unpublished emission wavelengths measured in electric discharge experiments.
A mathematical basis for the design and design optimization of adaptive trusses in precision control
NASA Technical Reports Server (NTRS)
Das, S. K.; Utku, S.; Chen, G.-S.; Wada, B. K.
1991-01-01
A mathematical basis for the optimal design of adaptive trusses to be used in supporting precision equipment is provided. The general theory of adaptive structures is introduced, and the global optimization problem of placing a limited number, q, of actuators, so as to maximally achieve precision control and provide prestress, is stated. Two serialized optimization problems, namely, optimal actuator placement for prestress and optimal actuator placement for precision control, are addressed. In the case of prestressing, the computation of a 'desired' prestress is discussed, the interaction between actuators and redundants in conveying the prestress is shown in its mathematical form, and a methodology for arriving at the optimal placement of actuators and additional redundants is discussed. With regard to precision control, an optimal placement scheme (for q actuators) for maximum 'authority' over the precision points is suggested. The results of the two serialized optimization problems are combined to give a suboptimal solution to the global optimization problem. A method for improving this suboptimal actuator placement scheme by iteration is presented.
Parallel axes gear set optimization in two-parameter space
NASA Astrophysics Data System (ADS)
Theberge, Y.; Cardou, A.; Cloutier, L.
1991-05-01
This paper presents a method for optimal spur and helical gear transmission design that may be used in a computer aided design (CAD) approach. The design objective is generally taken as obtaining the most compact set for a given power input and gear ratio. A mixed design procedure is employed which relies both on heuristic considerations and computer capabilities. Strength and kinematic constraints are considered in order to define the domain of feasible designs. Constraints allowed include: pinion tooth bending strength, gear tooth bending strength, surface stress (resistance to pitting), scoring resistance, pinion involute interference, gear involute interference, minimum pinion tooth thickness, minimum gear tooth thickness, and profile or transverse contact ratio. A computer program was developed which allows the user to input the problem parameters, to select the calculation procedure, to see constraint curves in graphic display, to have an objective function level curve drawn through the design space, to point at a feasible design point and to have constraint values calculated at that point. The user can also modify some of the parameters during the design process.
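The mixed heuristic/graphical procedure described above amounts to scanning a two-parameter design space against a set of constraint curves and minimizing a compactness objective over the feasible region. The sketch below is a toy version of that workflow; the constraint expressions, limits, and the "volume" objective are hypothetical placeholders, not the paper's gear-rating formulas:

```python
# Brute-force scan of a two-parameter gear design space (module m, face
# width b) against simplified, hypothetical feasibility constraints.
feasible = []
for m_mod in [i * 0.25 for i in range(4, 25)]:      # module, mm
    for b in range(10, 61, 2):                       # face width, mm
        bending_ok = m_mod * b >= 60.0               # tooth-bending proxy
        pitting_ok = m_mod ** 2 * b >= 140.0         # surface-stress proxy
        ratio_ok = b / m_mod <= 16.0                 # slenderness limit
        if bending_ok and pitting_ok and ratio_ok:
            volume = m_mod ** 2 * b                  # compactness objective
            feasible.append((volume, m_mod, b))

# The most compact feasible design is the minimum of the objective.
volume, m_best, b_best = min(feasible)
print(f"most compact feasible design: module={m_best} mm, width={b_best} mm")
```

In the CAD tool the abstract describes, the same feasible region would instead be delimited by the strength and kinematic constraint curves drawn in graphic display, with the designer pointing at candidate designs interactively.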
Optimality Conditions in Differentiable Vector Optimization via Second-Order Tangent Sets
Jimenez, Bienvenido; Novo, Vicente
2004-03-15
We provide second-order necessary and sufficient conditions for a point to be an efficient element of a set with respect to a cone in a normed space, so that there is only a small gap between necessary and sufficient conditions. To this aim, we use the common second-order tangent set and the asymptotic second-order cone utilized by Penot. As an application we establish second-order necessary conditions for a point to be a solution of a vector optimization problem with an arbitrary feasible set and a twice Frechet differentiable objective function between two normed spaces. We also establish second-order sufficient conditions when the initial space is finite-dimensional so that there is no gap with necessary conditions. Lagrange multiplier rules are also given.
Dharmarajan, Venkatasubramanian; Lee, Jeong-Heon; Patel, Anamika; Skalnik, David G.; Cosgrove, Michael S.
2012-01-01
Translocations and amplifications of the mixed lineage leukemia-1 (MLL1) gene are associated with aggressive myeloid and lymphocytic leukemias in humans. MLL1 is a member of the SET1 family of histone H3 lysine 4 (H3K4) methyltransferases, which are required for transcription of genes involved in hematopoiesis and development. MLL1 associates with a subcomplex containing WDR5, RbBP5, Ash2L, and DPY-30 (WRAD), which together form the MLL1 core complex that is required for sequential mono- and dimethylation of H3K4. We previously demonstrated that WDR5 binds the conserved WDR5 interaction (Win) motif of MLL1 in vitro, an interaction that is required for the H3K4 dimethylation activity of the MLL1 core complex. In this investigation, we demonstrate that arginine 3765 of the MLL1 Win motif is required to co-immunoprecipitate WRAD from mammalian cells, suggesting that the WDR5-Win motif interaction is important for the assembly of the MLL1 core complex in vivo. We also demonstrate that peptides that mimic SET1 family Win motif sequences inhibit H3K4 dimethylation by the MLL1 core complex with varying degrees of efficiency. To understand the structural basis for these differences, we determined structures of WDR5 bound to six different naturally occurring Win motif sequences at resolutions ranging from 1.9 to 1.2 Å. Our results reveal that binding energy differences result from interactions between non-conserved residues C-terminal to the Win motif and to a lesser extent from subtle variation of residues within the Win motif. These results highlight a new class of methylation inhibitors that may be useful for the treatment of MLL1-related malignancies. PMID:22665483
Dharmarajan, Venkatasubramanian; Lee, Jeong-Heon; Patel, Anamika; Skalnik, David G; Cosgrove, Michael S
2012-08-10
Translocations and amplifications of the mixed lineage leukemia-1 (MLL1) gene are associated with aggressive myeloid and lymphocytic leukemias in humans. MLL1 is a member of the SET1 family of histone H3 lysine 4 (H3K4) methyltransferases, which are required for transcription of genes involved in hematopoiesis and development. MLL1 associates with a subcomplex containing WDR5, RbBP5, Ash2L, and DPY-30 (WRAD), which together form the MLL1 core complex that is required for sequential mono- and dimethylation of H3K4. We previously demonstrated that WDR5 binds the conserved WDR5 interaction (Win) motif of MLL1 in vitro, an interaction that is required for the H3K4 dimethylation activity of the MLL1 core complex. In this investigation, we demonstrate that arginine 3765 of the MLL1 Win motif is required to co-immunoprecipitate WRAD from mammalian cells, suggesting that the WDR5-Win motif interaction is important for the assembly of the MLL1 core complex in vivo. We also demonstrate that peptides that mimic SET1 family Win motif sequences inhibit H3K4 dimethylation by the MLL1 core complex with varying degrees of efficiency. To understand the structural basis for these differences, we determined structures of WDR5 bound to six different naturally occurring Win motif sequences at resolutions ranging from 1.9 to 1.2 Å. Our results reveal that binding energy differences result from interactions between non-conserved residues C-terminal to the Win motif and to a lesser extent from subtle variation of residues within the Win motif. These results highlight a new class of methylation inhibitors that may be useful for the treatment of MLL1-related malignancies. PMID:22665483
Rank-order-selective neurons form a temporal basis set for the generation of motor sequences
Salinas, Emilio
2009-01-01
Many behaviors are composed of a series of elementary motor actions that must occur in a specific order, but the neuronal mechanisms by which such motor sequences are generated are poorly understood. In particular, if a sequence consists of a few motor actions, a primate can learn to replicate it from memory after practicing it for just a few trials. How do the motor and premotor areas of the brain assemble motor sequences so fast? The network model presented here reveals part of the solution to this problem. The model is based on experiments showing that, during the performance of motor sequences, some cortical neurons are always activated at specific times, regardless of which motor action is being executed. In the model, a population of such rank-order-selective (ROS) cells drives a layer of downstream motor neurons so that these generate specific movements at different times in different sequences. A key ingredient of the model is that the amplitude of the ROS responses must be modulated by sequence identity. Because of this modulation, which is consistent with experimental reports, the network is able not only to produce multiple sequences accurately but also to learn a new sequence with minimal changes in connectivity. The ROS neurons modulated by sequence identity thus serve as a basis set for constructing arbitrary sequences of motor responses downstream. The underlying mechanism is analogous to the mechanism described in parietal areas for generating coordinate transformations in the spatial domain. PMID:19357265
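A minimal numerical sketch of the idea (a toy construction, not the paper's network model): Gaussian "ROS" responses peak at fixed rank times regardless of the action, and sequence identity selects the readout weights that map ranks to downstream motor units, so the same temporal basis generates different orderings:

```python
import numpy as np

t = np.linspace(0.0, 1.0, 200)           # normalized trial time
ranks = [0.2, 0.5, 0.8]                  # activation times of 3 ROS cells

# Each ROS cell fires around a fixed rank time, independent of the action.
basis = np.array([np.exp(-((t - c) / 0.07) ** 2) for c in ranks])

# Sequence identity modulates the readout: rows say which motor unit each
# rank drives for hypothetical sequences "ABC" and "CBA".
readout_ABC = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]])  # unit <- rank
readout_CBA = np.array([[0, 0, 1], [0, 1, 0], [1, 0, 0]])

motor_ABC = readout_ABC @ basis          # unit i peaks at its rank time
motor_CBA = readout_CBA @ basis

# In sequence ABC motor unit 0 peaks first; in CBA it peaks last.
assert t[np.argmax(motor_ABC[0])] < t[np.argmax(motor_ABC[2])]
assert t[np.argmax(motor_CBA[0])] > t[np.argmax(motor_CBA[2])]
```

Learning a new sequence in this picture only requires changing the readout weights, not the temporal basis, which is why acquisition can be fast.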
Optimization of massive countermeasure design in complex rockfall settings
NASA Astrophysics Data System (ADS)
Agliardi, Federico; Crosta, Giovanni B.
2015-04-01
Rockfall protection is a major need in areas overhung by subvertical rockwalls with complex 3D morphology and little or no talus to provide natural rockfall attenuation. The design of massive embankments, usually required to ensure such protection, is particularly difficult in complex rockfall settings, due to: widespread occurrence of rockfall sources; difficult characterization of size distribution and location of unstable volumes; variability of failure mechanisms; spatial scattering of rockfall trajectories; high expected kinetic energies. Moreover, rockwalls in complex lithological and structural settings are often prone to mass falls related to rock mass sector collapses. All these issues may hamper a safe application of classic embankment analysis approaches, using empirical rules or 2D-based height/energy statistics, and point to the need of integrated analyses of rock slope instability and rockfall runout in 3D. We explore the potential of combining advanced rock mass characterisation techniques and 3D rockfall modelling to support challenging countermeasure design at a site near Lecco (Southern Alps, Italy). Here subvertical cliffs up to 600 m high rise above a narrow (< 150 m) strip of flat land along the Como Lake shore. Rock is thickly bedded limestone (Dolomia Principale Fm) involved in an ENE-trending, S-verging kilometre-scale anticline fold. The spatial variability of bedding attitude and fracture intensity is strongly controlled by the geological structure, with individual block sizes varying in the range 0.2-15 m³. This results in spatially variable rockfall susceptibility and mechanisms, from single block falls to mass falls. Several rockfall events that occurred between 1981 and 2010 motivated the design of slope benching and a massive embankment. To support reliable design verification and optimization we performed a 3D assessment of both rock slope instability and rockfall runout. We characterised fracture patterns and rock mass quality associated
The Scientific Basis of Uncertainty Factors Used in Setting Occupational Exposure Limits.
Dankovic, D A; Naumann, B D; Maier, A; Dourson, M L; Levy, L S
2015-01-01
The uncertainty factor concept is integrated into health risk assessments for all aspects of public health practice, including by most organizations that derive occupational exposure limits. The use of uncertainty factors is predicated on the assumption that a sufficient reduction in exposure from those at the boundary for the onset of adverse effects will yield a safe exposure level for at least the great majority of the exposed population, including vulnerable subgroups. There are differences in the application of the uncertainty factor approach among groups that conduct occupational assessments; however, there are common areas of uncertainty which are considered by all or nearly all occupational exposure limit-setting organizations. Five key uncertainties that are often examined include interspecies variability in response when extrapolating from animal studies to humans, response variability in humans, uncertainty in estimating a no-effect level from a dose where effects were observed, extrapolation from shorter duration studies to a full life-time exposure, and other insufficiencies in the overall health effects database indicating that the most sensitive adverse effect may not have been evaluated. In addition, a modifying factor is used by some organizations to account for other remaining uncertainties-typically related to exposure scenarios or accounting for the interplay among the five areas noted above. Consideration of uncertainties in occupational exposure limit derivation is a systematic process whereby the factors applied are not arbitrary, although they are mathematically imprecise. As the scientific basis for uncertainty factor application has improved, default uncertainty factors are now used only in the absence of chemical-specific data, and the trend is to replace them with chemical-specific adjustment factors whenever possible. The increased application of scientific data in the development of uncertainty factors for individual chemicals also has
The Scientific Basis of Uncertainty Factors Used in Setting Occupational Exposure Limits
Dankovic, D. A.; Naumann, B. D.; Maier, A.; Dourson, M. L.; Levy, L. S.
2015-01-01
The uncertainty factor concept is integrated into health risk assessments for all aspects of public health practice, including by most organizations that derive occupational exposure limits. The use of uncertainty factors is predicated on the assumption that a sufficient reduction in exposure from those at the boundary for the onset of adverse effects will yield a safe exposure level for at least the great majority of the exposed population, including vulnerable subgroups. There are differences in the application of the uncertainty factor approach among groups that conduct occupational assessments; however, there are common areas of uncertainty which are considered by all or nearly all occupational exposure limit-setting organizations. Five key uncertainties that are often examined include interspecies variability in response when extrapolating from animal studies to humans, response variability in humans, uncertainty in estimating a no-effect level from a dose where effects were observed, extrapolation from shorter duration studies to a full life-time exposure, and other insufficiencies in the overall health effects database indicating that the most sensitive adverse effect may not have been evaluated. In addition, a modifying factor is used by some organizations to account for other remaining uncertainties—typically related to exposure scenarios or accounting for the interplay among the five areas noted above. Consideration of uncertainties in occupational exposure limit derivation is a systematic process whereby the factors applied are not arbitrary, although they are mathematically imprecise. As the scientific basis for uncertainty factor application has improved, default uncertainty factors are now used only in the absence of chemical-specific data, and the trend is to replace them with chemical-specific adjustment factors whenever possible. The increased application of scientific data in the development of uncertainty factors for individual chemicals also
Gonzalez, J; Rojas, I; Ortega, J; Pomares, H; Fernandez, F J; Diaz, A F
2003-01-01
This paper presents a multiobjective evolutionary algorithm to optimize radial basis function neural networks (RBFNNs) in order to approximate target functions from a set of input-output pairs. The procedure allows the application of heuristics to improve the solution of the problem at hand by including some new genetic operators in the evolutionary process. These new operators are based on two well-known matrix transformations: singular value decomposition (SVD) and orthogonal least squares (OLS), which have been used to define new mutation operators that produce local or global modifications in the radial basis functions (RBFs) of the networks (the individuals in the population in the evolutionary procedure). After analyzing the efficiency of the different operators, we have shown that the global mutation operators yield an improved procedure to adjust the parameters of the RBFNNs.
Optimization of global model composed of radial basis functions using the term-ranking approach
Cai, Peng; Tao, Chao; Liu, Xiao-Jun
2014-03-15
A term-ranking method is put forward to optimize the global model composed of radial basis functions to improve the predictability of the model. The effectiveness of the proposed method is examined by numerical simulation and experimental data. Numerical simulations indicate that this method can significantly lengthen the prediction time and decrease the Bayesian information criterion of the model. The application to real voice signal shows that the optimized global model can capture more predictable component in chaos-like voice data and simultaneously reduce the predictable component (periodic pitch) in the residual signal.
Fast Electron Correlation Methods for Molecular Clusters without Basis Set Superposition Errors
Kamiya, Muneaki; Hirata, So; Valiev, Marat
2008-02-19
Two critical extensions to our fast, accurate, and easy-to-implement binary or ternary interaction method for weakly interacting molecular clusters [Hirata et al. Mol. Phys. 103, 2255 (2005)] have been proposed, implemented, and applied to water hexamers, hydrogen fluoride chains and rings, and neutral and zwitterionic glycine–water clusters, with excellent results in an initial performance assessment. Our original method included up to two- or three-body Coulomb, exchange, and correlation energies exactly and higher-order Coulomb energies in the dipole–dipole approximation. In this work, the dipole moments are replaced by atom-centered point charges determined so that they reproduce the electrostatic potentials of the cluster subunits as closely as possible and also self-consistently with one another in the cluster environment. They have been shown to lead to dramatic improvement in the description of short-range electrostatic potentials not only of large, charge-separated subunits like zwitterionic glycine but also of small subunits. Furthermore, basis set superposition errors (BSSE), known to plague direct evaluation of weak interactions, have been eliminated by combining the Valiron–Mayer function counterpoise (VMFC) correction with our binary or ternary interaction method in an economical fashion (quadratic scaling n² with respect to the number of subunits n when n is small and linear scaling when n is large). A new variant of VMFC has also been proposed in which three-body and all higher-order Coulomb effects on BSSE are estimated approximately. The BSSE-corrected ternary interaction method with atom-centered point charges reproduces the VMFC-corrected results of conventional electron correlation calculations within 0.1 kcal/mol. The proposed method is significantly more accurate and also more efficient than conventional correlation methods uncorrected for BSSE.
Chen, S; Wu, Y; Luk, B L
1999-01-01
The paper presents a two-level learning method for radial basis function (RBF) networks. A regularized orthogonal least squares (ROLS) algorithm is employed at the lower level to construct RBF networks while the two key learning parameters, the regularization parameter and the RBF width, are optimized using a genetic algorithm (GA) at the upper level. Nonlinear time series modeling and prediction is used as an example to demonstrate the effectiveness of this hierarchical learning approach.
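The two-level structure described above can be sketched numerically. In this toy version (not the paper's implementation) the lower level fits RBF weights by plain regularized least squares, standing in for the paper's regularized orthogonal least squares construction, and the upper level searches the two key hyperparameters by a coarse grid scan, standing in for the genetic algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D time-series-style regression task.
x = np.linspace(-3, 3, 80)
y = np.sin(2 * x) + 0.1 * rng.standard_normal(x.size)
x_tr, y_tr, x_va, y_va = x[::2], y[::2], x[1::2], y[1::2]
centers = x_tr  # one candidate RBF per training point

def design(xq, width):
    # Gaussian RBF design matrix.
    return np.exp(-((xq[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))

def fit_rls(width, lam):
    # Lower level: regularized least squares for the RBF weights.
    P = design(x_tr, width)
    return np.linalg.solve(P.T @ P + lam * np.eye(P.shape[1]), P.T @ y_tr)

def val_err(params):
    width, lam = params
    w = fit_rls(width, lam)
    return np.mean((design(x_va, width) @ w - y_va) ** 2)

# Upper level: search the regularization parameter and RBF width.
grid = [(w, l) for w in (0.1, 0.3, 0.5, 1.0) for l in (1e-6, 1e-3, 1e-1)]
width_best, lam_best = min(grid, key=val_err)
print("best width/lambda:", width_best, lam_best,
      "val MSE:", round(val_err((width_best, lam_best)), 4))
```

The hierarchical point is that the expensive inner fit is repeated for each candidate hyperparameter pair, which is exactly the role the GA plays in the paper.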
Basis set dependence using DFT/B3LYP calculations to model the Raman spectrum of thymine.
Bielecki, Jakub; Lipiec, Ewelina
2016-02-01
Raman spectroscopy (including surface enhanced Raman spectroscopy (SERS) and tip enhanced Raman spectroscopy (TERS)) is a highly promising experimental method for investigations of biomolecule damage induced by ionizing radiation. However, proper interpretation of changes in experimental spectra for complex systems is often difficult or impossible, thus Raman spectra calculations based on density functional theory (DFT) provide an invaluable tool as an additional layer of understanding of the underlying processes. There are many works that address the problem of basis set dependence for energy and bond length considerations; nevertheless, there is still a lack of consistent research on the influence of the basis set on Raman spectral intensities for biomolecules. This study fills this gap by investigating the influence of basis set choice on the interpretation of Raman spectra of the thymine molecule calculated using the DFT/B3LYP framework and comparing these results with experimental spectra. Among 19 selected Pople's basis sets, the best agreement was achieved using the 6-31[Formula: see text](d,p), 6-31[Formula: see text](d,p) and 6-11[Formula: see text]G(d,p) sets. Adding diffuse functions or polarized functions to a small basis set, or using a medium or large basis set without diffuse or polarized functions, is not sufficient to reproduce Raman intensities correctly. The introduction of the diffuse functions ([Formula: see text]) on hydrogen atoms is not necessary for gas phase calculations. This work serves as a benchmark for further research on the interaction of ionizing radiation with DNA molecules by means of ab initio calculations and Raman spectroscopy. Moreover, this work provides a set of new scaling factors for Raman spectra calculation in the framework of the DFT/B3LYP method.
NASA Astrophysics Data System (ADS)
Kobus, J.; Moncrieff, D.; Wilson, S.
2004-02-01
In a previous paper, we have made a comparison of the accuracy with which the electric dipole polarizability α_zz and hyperpolarizability β_zzz can be calculated by using either the finite basis set approach (the algebraic approximation) or the finite difference method in calculations for the ground states of the H2, LiH, BH and FH molecules, at their respective experimental equilibrium geometries, within the Hartree-Fock model. A re-examination of the hyperpolarizability of the BH molecule shows it to be very sensitive both to the choice of grid employed in the finite difference Hartree-Fock calculation and to the construction of the basis set used in the matrix Hartree-Fock study. A new comparison of finite difference and finite basis set hyperpolarizabilities for the BH molecule is made, together with new calculations for the LiH and FH ground states.
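The finite-field quantities in question follow from derivatives of the energy with respect to the field, α_zz = -d²E/dF² and β_zzz = -d³E/dF³. A small sketch (toy model energy with assumed values α = 10, β = 40, in arbitrary units; not the molecular calculations of the paper) shows the central-difference stencils recovering them:

```python
# Finite-field estimates of alpha_zz and beta_zzz from energies evaluated
# at a few field strengths, using a toy cubic model energy.
alpha_true, beta_true = 10.0, 40.0

def energy(F):
    # E(F) = E0 - (1/2) alpha F^2 - (1/6) beta F^3
    return 1.0 - 0.5 * alpha_true * F**2 - beta_true * F**3 / 6.0

h = 1e-3  # field step; too small a step amplifies round-off in practice
# alpha = -d2E/dF2 via the central 3-point stencil
alpha = -(energy(h) - 2 * energy(0.0) + energy(-h)) / h**2
# beta = -d3E/dF3 via the central 5-point stencil
beta = -(energy(2*h) - 2*energy(h) + 2*energy(-h) - energy(-2*h)) / (2 * h**3)
print(round(alpha, 3), round(beta, 3))
```

The sensitivity the abstract reports for BH is the real-world analogue of the step-size/grid trade-off flagged in the comment: the higher the derivative, the more cancellation the stencil suffers.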
NASA Astrophysics Data System (ADS)
Dalmasse, Kevin; Nychka, Douglas; Gibson, Sarah; Flyer, Natasha; Fan, Yuhong
2016-07-01
The Coronal Multichannel Polarimeter (CoMP) routinely performs coronal polarimetric measurements using the Fe XIII 10747 Å and 10798 Å lines, which are sensitive to the coronal magnetic field. However, inverting such polarimetric measurements into magnetic field data is a difficult task because the corona is optically thin at these wavelengths and the observed signal is therefore the integrated emission of all the plasma along the line of sight. To overcome this difficulty, we take a new approach that combines a parameterized 3D magnetic field model with forward modeling of the polarization signal. For that purpose, we develop a new, fast and efficient, optimization method for model-data fitting: the Radial-basis-functions Optimization Approximation Method (ROAM). Model-data fitting is achieved by optimizing a user-specified log-likelihood function that quantifies the differences between the observed polarization signal and its synthetic/predicted analogue. Speed and efficiency are obtained by combining sparse evaluation of the magnetic model with radial-basis-function (RBF) decomposition of the log-likelihood function. The RBF decomposition provides an analytical expression for the log-likelihood function that is used to inexpensively estimate the set of parameter values optimizing it. We test and validate ROAM on a synthetic test bed of a coronal magnetic flux rope and show that it performs well with a significantly sparse sample of the parameter space. We conclude that our optimization method is well-suited for fast and efficient model-data fitting and can be exploited for converting coronal polarimetric measurements, such as the ones provided by CoMP, into coronal magnetic field data.
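The surrogate idea at the heart of ROAM can be sketched in one dimension (a toy misfit function and made-up parameter range, not the coronal model): sample the expensive objective sparsely, interpolate the samples with Gaussian RBFs to get an analytic surrogate, and minimize the cheap surrogate instead:

```python
import numpy as np

def objective(p):
    # Stand-in for an expensive model-data misfit (e.g. a negative
    # log-likelihood); illustrative only.
    return (p - 0.7) ** 2 + 0.05 * np.cos(8 * p)

# Sparse, expensive evaluations over the parameter range.
samples = np.linspace(0.0, 2.0, 9)
values = objective(samples)

# Gaussian RBF interpolant through the samples gives an analytic,
# cheap-to-evaluate surrogate of the misfit surface.
eps = 3.0  # shape parameter; trades smoothness against conditioning
K = np.exp(-(eps * (samples[:, None] - samples[None, :])) ** 2)
coeffs = np.linalg.solve(K, values)

def surrogate(p):
    p = np.asarray(p)
    return np.exp(-(eps * (p[:, None] - samples[None, :])) ** 2) @ coeffs

# Minimizing the surrogate on a fine grid approximates the optimum with
# no further expensive objective evaluations.
fine = np.linspace(0.0, 2.0, 2001)
p_opt = fine[np.argmin(surrogate(fine))]
print(f"estimated optimum near p = {p_opt:.2f}")
```

In ROAM the same construction runs over a multi-dimensional parameter space of the magnetic model, which is where the savings over dense sampling become decisive.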
NASA Astrophysics Data System (ADS)
Suleimanov, Yu V.; Tscherbul, Timur V.
2016-10-01
We explore the combined effect of the uncertainties due to the interaction potential and basis set convergence on low-temperature collisional properties of spin-polarized NH molecules in a magnetic field. We show that quantum scattering calculations with different rotational basis sets and λ-scaled interaction potentials produce qualitatively different ratios of elastic to inelastic cross sections for collision energies above 10⁻³ cm⁻¹, leading to favorable (unfavorable) prospects for sympathetic cooling of NH molecules depending on the basis set cutoff parameter N_max. The physical reason behind this effect is that the resonance widths, which determine the maximum variation of the scattering cross sections with λ, tend to depend strongly on N_max. At ultralow collision energies, all basis sets produce highly uncertain (and statistically indistinguishable) elastic and inelastic cross sections; however, their ratio γ is much less sensitive to small variations of the interaction potential. Our results highlight the importance of basis set convergence in quantum scattering calculations and establish the existence of parameter regimes where unconverged calculations can still be used to make qualitatively accurate predictions of scattering observables.
Evaluation of European air quality modelled by CAMx including the volatility basis set scheme
NASA Astrophysics Data System (ADS)
Ciarelli, Giancarlo; Aksoyoglu, Sebnem; Crippa, Monica; Jimenez, Jose-Luis; Nemitz, Eriko; Sellegri, Karine; Äijälä, Mikko; Carbone, Samara; Mohr, Claudia; O'Dowd, Colin; Poulain, Laurent; Baltensperger, Urs; Prévôt, André S. H.
2016-08-01
Four periods of EMEP (European Monitoring and Evaluation Programme) intensive measurement campaigns (June 2006, January 2007, September-October 2008 and February-March 2009) were modelled using the regional air quality model CAMx with the VBS (volatility basis set) approach for the first time in Europe within the framework of the EURODELTA-III model intercomparison exercise. More detailed analysis and sensitivity tests were performed for the periods of February-March 2009 and June 2006 to investigate the uncertainties in emissions as well as to improve the modelling of organic aerosol (OA). Model performance for selected gas phase species and PM2.5 was evaluated using the European air quality database AirBase. Sulfur dioxide (SO2) and ozone (O3) were found to be overestimated for all four periods, with O3 having the largest mean bias during the June 2006 and January-February 2007 periods (8.9 ppb and 12.3 ppb mean biases, respectively). In contrast, nitrogen dioxide (NO2) and carbon monoxide (CO) were found to be underestimated for all four periods. CAMx reproduced both total concentrations and monthly variations of PM2.5 for all four periods with average biases ranging from -2.1 to 1.0 µg m⁻³. Comparisons with AMS (aerosol mass spectrometer) measurements at different sites in Europe during February-March 2009 showed that in general the model overpredicts the inorganic aerosol fraction and underpredicts the organic one, such that the good agreement for PM2.5 is partly due to compensation of errors. The effect of the choice of VBS scheme on OA was investigated as well. Two sensitivity tests with volatility distributions based on previous chamber and ambient measurement data were performed. For February-March 2009 the chamber case reduced the total OA concentrations by about 42 % on average. In contrast, a test based on ambient measurement data increased OA concentrations by about 42 % for the same period, bringing model and observations into better agreement.
NASA Astrophysics Data System (ADS)
Turovtsev, V. V.; Orlov, Yu. D.; Tsirulev, A. N.
2015-08-01
The advantages of the orthonormal basis set of 2π-periodic Mathieu functions compared to the trigonometric basis set in calculations of torsional states of molecules are substantiated. Explicit expressions are derived for calculating the Hamiltonian matrix elements of a one-dimensional torsional Schrödinger equation with a periodic potential of the general form in the basis set of Mathieu functions. It is shown that variation of a parameter of Mathieu functions allows the rotation potential and the structural function to be approximated with a good accuracy by a small number of series terms. The conditions for the best choice of this parameter are specified, and approximations are obtained for torsional potentials of n-butane upon rotation about the central C-C bond and of its univalent radical n-butyl C2H5C·H2 upon rotation of the C·H2 group. All algorithms are implemented in the Maple package.
Reuter, Matthew G; Harrison, Robert J
2013-09-21
We revisit the derivation of electron transport theories with a focus on the projection operators chosen to partition the system. The prevailing choice of assigning each computational basis function to a region causes two problems. First, this choice generally results in oblique projection operators, which are non-Hermitian and violate implicit assumptions in the derivation. Second, these operators are defined with the physically insignificant basis set and, as such, preclude a well-defined basis set limit. We thus advocate for the selection of physically motivated, orthogonal projection operators (which are Hermitian) and present an operator-based derivation of electron transport theories. Unlike the conventional, matrix-based approaches, this derivation requires no knowledge of the computational basis set. In this process, we also find that common transport formalisms for nonorthogonal basis sets improperly decouple the exterior regions, leading to a short circuit through the system. We finally discuss the implications of these results for first-principles calculations of electron transport.
Brorsen, Kurt R.; Sirjoosingh, Andrew; Pak, Michael V.; Hammes-Schiffer, Sharon
2015-06-07
The nuclear electronic orbital (NEO) reduced explicitly correlated Hartree-Fock (RXCHF) approach couples select electronic orbitals to the nuclear orbital via Gaussian-type geminal functions. This approach is extended to enable the use of a restricted basis set for the explicitly correlated electronic orbitals and an open-shell treatment for the other electronic orbitals. The working equations are derived and the implementation is discussed for both extensions. The RXCHF method with a restricted basis set is applied to HCN and FHF⁻ and is shown to agree quantitatively with results from RXCHF calculations with a full basis set. The number of many-particle integrals that must be calculated for these two molecules is reduced by over an order of magnitude with essentially no loss in accuracy, and the reduction factor will increase substantially for larger systems. Typically, the computational cost of RXCHF calculations with restricted basis sets will scale in terms of the number of basis functions centered on the quantum nucleus and the covalently bonded neighbor(s). In addition, the RXCHF method with an odd number of electrons that are not explicitly correlated to the nuclear orbital is implemented using a restricted open-shell formalism for these electrons. This method is applied to HCN⁺, and the nuclear densities are in qualitative agreement with grid-based calculations. Future work will focus on the significance of nonadiabatic effects in molecular systems and the further enhancement of the NEO-RXCHF approach to accurately describe such effects.
Optimization of people evacuation plans on the basis of wireless sensor networks
NASA Astrophysics Data System (ADS)
Amirgaliyev, Yedilkhan; Yunussov, Rassul; Mamyrbayev, Orken
2016-01-01
This paper introduces an optimization process for saving people in critical situations by organizing their evacuation from enclosed areas using modern data acquisition approaches based on wireless sensor networks. The proposed technology makes it possible to gather information about people density in the surveyed area through wireless sensor networks consistently covering the enclosed territory, and to update the evacuation plan online on the basis of this density information. It is proposed to use common video surveillance cameras as sensors. The advantage of visual surveillance using cameras is that it requires no additional technological equipment in the area and, more importantly, imposes no rules or restrictions on the surveillance objects (people, in this case). The following tasks are to be solved: creation of a mathematical model of optimal enclosed-area surveillance by wireless sensors; database and data interrogation modelling of the wireless sensor network; creation of an algorithmic model for automated people counting using the video signal for the covered area; and creation of a dynamic people evacuation model on the basis of the maximum flow problem [1, 2].
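The maximum flow problem invoked at the end of the abstract can be illustrated with a short sketch. The Edmonds-Karp implementation below is generic and is not taken from the paper; the toy network (a source of people feeding two corridors toward an exit) is an invented example, since the abstract does not specify the graph construction.

```python
from collections import deque

def max_flow(capacity, source, sink):
    """Edmonds-Karp maximum flow. `capacity` is a dict of dicts: capacity[u][v] = c."""
    # Build a residual graph that also contains reverse edges with zero capacity.
    residual = {}
    for u in capacity:
        for v, c in capacity[u].items():
            residual.setdefault(u, {})[v] = residual.get(u, {}).get(v, 0) + c
            residual.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        # Breadth-first search for the shortest augmenting path.
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v, c in residual[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return flow
        # Collect the path, find its bottleneck capacity, and augment.
        path, v = [], sink
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[u][v] for u, v in path)
        for u, v in path:
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
        flow += bottleneck
```

In an evacuation model, edge capacities would encode corridor throughput and the computed flow bounds the evacuation rate.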
Optimal Experience among Campers in a Resident Camp Setting.
ERIC Educational Resources Information Center
Bialeschki, M. Deborah; Henderson, Karla A.
The purpose of this study was to assess optimal experience, also known as "flow" and "quality of experience" in a private, coeducational resident camp program for children. Flow refers to those times in work and leisure when people report feelings of enjoyment, concentration, and deep involvement. Flow theory predicts that an experience will be…
NASA Astrophysics Data System (ADS)
Petković, Dalibor; Gocic, Milan; Shamshirband, Shahaboddin; Qasem, Sultan Noman; Trajkovic, Slavisa
2016-08-01
Accurate estimation of the reference evapotranspiration (ET0) is important for water resource planning and the scheduling of irrigation systems. For this purpose, the radial basis function network with particle swarm optimization (RBFN-PSO) and the radial basis function network with back propagation (RBFN-BP) were used in this investigation. The FAO-56 Penman-Monteith equation was used as the reference equation to estimate ET0 for Serbia during the period 1980-2010. The obtained simulation results confirmed the proposed models and were analyzed using the root-mean-square error (RMSE), the mean absolute error (MAE), and the coefficient of determination (R²). The analysis showed that RBFN-PSO had better statistical characteristics than RBFN-BP and can be helpful for ET0 estimation.
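The model family described can be sketched in a few lines. This is not the authors' implementation: the fixed Gaussian centers, swarm parameters, and toy data below are assumptions made purely to show how an RBF network's output weights can be fitted by particle swarm optimization.

```python
import numpy as np

def rbf_design(X, centers, gamma):
    # Gaussian RBF activations: phi[i, j] = exp(-gamma * ||x_i - c_j||^2)
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * d2)

def pso_fit_rbf(X, y, n_centers=5, gamma=1.0, n_particles=20, iters=200, seed=0):
    """Fit RBF output weights with a bare-bones particle swarm (illustrative only)."""
    rng = np.random.default_rng(seed)
    # Fix centers on a random subset of the data; PSO searches the weight space.
    centers = X[rng.choice(len(X), n_centers, replace=False)]
    Phi = rbf_design(X, centers, gamma)
    pos = rng.normal(size=(n_particles, n_centers))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_err = np.mean((Phi @ pos.T - y[:, None]) ** 2, axis=0)
    gbest = pbest[pbest_err.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, n_centers))
        # Standard velocity update: inertia + cognitive + social terms.
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = pos + vel
        err = np.mean((Phi @ pos.T - y[:, None]) ** 2, axis=0)
        better = err < pbest_err
        pbest[better] = pos[better]
        pbest_err[better] = err[better]
        gbest = pbest[pbest_err.argmin()].copy()
    return gbest, pbest_err.min()
```

A back-propagation variant (RBFN-BP) would instead follow the gradient of the same mean-squared error.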
Optimal continuous variable quantum teleportation protocol for realistic settings
NASA Astrophysics Data System (ADS)
Luiz, F. S.; Rigolin, Gustavo
2015-03-01
We show the optimal setup that allows Alice to teleport coherent states |α⟩ to Bob with the greatest fidelity (efficiency) when one takes into account two realistic assumptions. The first is the fact that in any actual implementation of the continuous variable teleportation protocol (CVTP) Alice and Bob necessarily share non-maximally entangled states (two-mode finitely squeezed states). The second assumes that Alice's pool of possible coherent states to be teleported to Bob does not cover the whole complex plane (|α| < ∞). The optimal strategy is achieved by tuning three parameters in the original CVTP, namely, Alice's beam splitter transmittance and Bob's displacements in position and momentum implemented on the teleported state. These slight changes in the protocol are currently easy to implement and, as we show, give a considerable gain in performance for a variety of possible pools of input states with Alice.
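The benchmark the paper improves on can be quantified with the textbook average fidelity of the unmodified Braunstein-Kimble protocol at unit gain for coherent-state input; this is the standard baseline formula, not the optimized protocol of the paper.

```python
import math

def cvtp_fidelity(r):
    """Average fidelity for teleporting coherent states with the standard
    unit-gain CV protocol using a two-mode squeezed vacuum of squeezing r:
    F = 1 / (1 + exp(-2 r)).  r = 0 (no entanglement) gives the classical
    boundary F = 1/2; F -> 1 only in the unattainable limit r -> infinity."""
    return 1.0 / (1.0 + math.exp(-2.0 * r))
```

Finite squeezing therefore always leaves room for the parameter tuning the abstract describes.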
Celeste, Ricardo; Maringolo, Milena P; Comar, Moacyr; Viana, Rommel B; Guimarães, Amanda R; Haiduke, Roberto L A; da Silva, Albérico B F
2015-10-01
Accurate Gaussian basis sets for atoms from H to Ba were obtained by means of the generator coordinate Hartree-Fock (GCHF) method based on a polynomial expansion to discretize the Griffin-Wheeler-Hartree-Fock equations (GWHF). The discretization of the GWHF equations in this procedure is based on a mesh of points not equally distributed, in contrast with the original GCHF method. The results of atomic Hartree-Fock energies demonstrate the capability of these polynomial expansions in designing compact and accurate basis sets to be used in molecular calculations; the maximum error found when compared to numerical values is only 0.788 mHartree, for indium. Some test calculations with the B3LYP exchange-correlation functional for N2, F2, CO, NO, HF, and HCN show that total energies within 1.0 to 2.4 mHartree compared to the cc-pV5Z basis sets are attained with our contracted bases with a much smaller number of polarization functions (2p1d and 2d1f for hydrogen and heavier atoms, respectively). Other molecular calculations performed here are also in very good accordance with experimental and cc-pV5Z results. The most important point to be mentioned here is that our generator coordinate basis sets required only a tiny fraction of the computational time when compared to B3LYP/cc-pV5Z calculations.
Basis set dependence of ab initio SCF elastic, Born, electron scattering cross sections for C2H4
NASA Astrophysics Data System (ADS)
Xie, Shang-de; Fink, M.; Kohl, D. A.
1984-08-01
The results of ab initio Hartree-Fock calculations of the orientationally averaged, elastic electron scattering cross section of C2H4 with six different basis sets are reported. The averaging and Fourier transform were calculated by the approach of Kohl, Pulay, and Fink. The six basis sets, ranging from 6-31G to 6-311G4*, were employed in the calculations. The improvement in the calculated Born cross section paralleled the lowering of the energy as the basis was varied. For C2H4, a calculation at the 6-311G** level provides a good description of the cross section at a modest expenditure of computational time.
Optimal allocation of resources in a biomarker setting.
Rosner, Bernard; Hendrickson, Sara; Willett, Walter
2015-01-30
Nutrient intake is often measured with substantial error both in commonly used surrogate instruments such as a food frequency questionnaire (FFQ) and in gold standard-type instruments such as a diet record (DR). If there is a correlated error between the FFQ and DR, then standard measurement error correction methods based on regression calibration can produce biased estimates of the regression coefficient (λ) of true intake on surrogate intake. However, if a biomarker exists and the error in the biomarker is independent of the error in the FFQ and DR, then the method of triads can be used to obtain unbiased estimates of λ, provided that there are replicate biomarker data on at least a subsample of validation study subjects. Because biomarker measurements are expensive, for a fixed budget, one can use either a design where a large number of subjects have one biomarker measure and only a small subsample is replicated, or a design that has a smaller number of subjects with most or all subjects validated. The purpose of this paper is to optimize the proportion of subjects with replicated biomarker measures, where optimization is with respect to minimizing the variance of ln(λ̂). The methodology is illustrated using vitamin C intake data from the European Prospective Investigation into Cancer and Nutrition study where plasma vitamin C is the biomarker. In this example, the optimal validation study design is to have 21% of subjects with replicated biomarker measures.
Holocene sea level variations on the basis of integration of independent data sets
Sahagian, D.; Berkman, P. (Dept. of Geological Sciences and Byrd Polar Research Center)
1992-01-01
Variations in sea level through earth history have occurred at a wide variety of time scales. Sea level researchers have attacked the problem of measuring these sea level changes through a variety of approaches, each relevant only to the time scale in question, and usually only relevant to the specific locality from which a specific type of data are derived. There is a plethora of different data types that can and have been used (locally) for the measurement of Holocene sea level variations. The problem of merging different data sets for the purpose of constructing a global eustatic sea level curve for the Holocene has not previously been adequately addressed. The authors direct their efforts to that end. Numerous studies have been published regarding Holocene sea level changes. These have involved exposed fossil reef elevations, elevation of tidal deltas, elevation of depth of intertidal peat deposits, caves, tree rings, ice cores, moraines, eolian dune ridges, marine-cut terrace elevations, marine carbonate species, tide gauges, and lake level variations. Each of these data sets is based on a particular set of assumptions, and is valid for a specific set of environments. In order to obtain the most accurate possible sea level curve for the Holocene, these data sets must be merged so that local and other influences can be filtered out of each data set. Since each data set involves very different measurements, each is scaled in order to define the sensitivity of the proxy measurement parameter to sea level, including error bounds. This effectively determines the temporal and spatial resolution of each data set. The level of independence of data sets is also quantified, in order to rule out the possibility of a common non-eustatic factor affecting more than one variety of data. The Holocene sea level curve is considered to be independent of other factors affecting the proxy data, and is taken to represent the relation between global ocean water and basin volumes.
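One elementary way to merge independent proxy estimates that carry stated error bounds is an inverse-variance weighted average; this is a generic statistical sketch for illustration, not the paper's actual scaling and filtering procedure, which is more involved.

```python
import numpy as np

def merge_proxy_records(estimates, sigmas):
    """Inverse-variance weighted merge of independent proxy estimates of the
    same sea-level value.  Returns the combined estimate and its 1-sigma error;
    tighter records (smaller sigma) dominate the result."""
    w = 1.0 / np.asarray(sigmas, dtype=float) ** 2
    combined = float(np.sum(w * np.asarray(estimates, dtype=float)) / np.sum(w))
    sigma = float(np.sqrt(1.0 / np.sum(w)))
    return combined, sigma
```

The quantified independence of the data sets mentioned in the abstract is exactly what such a weighting scheme presupposes.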
NASA Astrophysics Data System (ADS)
Witte, Jonathon; Neaton, Jeffrey B.; Head-Gordon, Martin
2016-05-01
With the aim of systematically characterizing the convergence of common families of basis sets such that general recommendations for basis sets can be made, we have tested a wide variety of basis sets against complete-basis binding energies across the S22 set of intermolecular interactions—noncovalent interactions of small and medium-sized molecules consisting of first- and second-row atoms—with three distinct density functional approximations: SPW92, a form of local-density approximation; B3LYP, a global hybrid generalized gradient approximation; and B97M-V, a meta-generalized gradient approximation with nonlocal correlation. We have found that it is remarkably difficult to reach the basis set limit; for the methods and systems examined, the most complete basis is Jensen's pc-4. The Dunning correlation-consistent sequence of basis sets converges slowly relative to the Jensen sequence. The Karlsruhe basis sets are quite cost effective, particularly when a correction for basis set superposition error is applied: counterpoise-corrected def2-SVPD binding energies are better than corresponding energies computed in comparably sized Dunning and Jensen bases, and on par with uncorrected results in basis sets 3-4 times larger. These trends are exhibited regardless of the level of density functional approximation employed. A sense of the magnitude of the intrinsic incompleteness error of each basis set not only provides a foundation for guiding basis set choice in future studies but also facilitates quantitative comparison of existing studies on similar types of systems.
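The counterpoise correction for basis set superposition error referenced above is simple arithmetic on five energies; the sketch below states the Boys-Bernardi bookkeeping, and the energy values in the test are invented for illustration.

```python
def counterpoise_binding(e_dimer_ab, e_mono1_ab, e_mono2_ab, e_mono1_a, e_mono2_b):
    """Boys-Bernardi counterpoise correction.
    e_dimer_ab : dimer energy in the full dimer basis (AB)
    e_mono1_ab : monomer 1 in the dimer basis (ghost functions on fragment 2)
    e_mono2_ab : monomer 2 in the dimer basis (ghost functions on fragment 1)
    e_mono1_a, e_mono2_b : each monomer in its own basis.
    Returns (uncorrected, counterpoise-corrected, BSSE) binding energies."""
    e_bind = e_dimer_ab - e_mono1_a - e_mono2_b
    e_bind_cp = e_dimer_ab - e_mono1_ab - e_mono2_ab
    # BSSE >= 0 because ghost functions can only lower a monomer's energy.
    bsse = e_bind_cp - e_bind
    return e_bind, e_bind_cp, bsse
```

The counterpoise-corrected def2-SVPD results discussed in the abstract are produced by exactly this kind of correction.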
Höfener, Sebastian; Bischoff, Florian A; Glöss, Andreas; Klopper, Wim
2008-06-21
In recent years, Slater-type geminals (STGs) have been used with great success to expand the first-order wave function in an explicitly-correlated perturbation theory. The present work reports on this theory's implementation in the framework of the Turbomole suite of programs. A formalism is presented for evaluating all of the necessary molecular two-electron integrals by means of the Obara-Saika recurrence relations, which can be applied when the STG is expressed as a linear combination of a small number (n) of Gaussians (STG-nG geminal basis). In the Turbomole implementation of the theory, density fitting is employed and a complementary auxiliary basis set (CABS) is used for the resolution-of-the-identity (RI) approximation of explicitly-correlated theory. By virtue of this RI approximation, the calculation of molecular three- and four-electron integrals is avoided. An approximation is invoked to avoid the two-electron integrals over the commutator between the operators of kinetic energy and the STG. This approximation consists of computing commutators between matrices in place of operators. Integrals over commutators between operators would have occurred if the theory had been formulated and implemented as proposed originally. The new implementation in Turbomole was tested by performing a series of calculations on rotational conformers of the alkanols n-propanol through n-pentanol. Basis-set requirements concerning the orbital basis, the auxiliary basis set for density fitting and the CABS were investigated. Furthermore, various (constrained) optimizations of the amplitudes of the explicitly-correlated double excitations were studied. These amplitudes can be optimized in orbital-variant and orbital-invariant manners, or they can be kept fixed at the values governed by the rational generator approach, that is, by the electron cusp conditions. Electron-correlation effects beyond the level of second-order perturbation theory were accounted for by conventional
Optimal tooth numbers for compact standard spur gear sets
NASA Technical Reports Server (NTRS)
Savage, M.; Coy, J. J.; Townsend, D. P.
1981-01-01
The design of a standard gear mesh is treated with the objective of minimizing the gear size for a given ratio, pinion torque, and allowable tooth strength. Scoring, pitting fatigue, bending fatigue, and the kinematic limits of contact ratio and interference are considered. A design space is defined in terms of the number of teeth on the pinion and the diametral pitch. This space is then combined with the objective function of minimum center distance to obtain an optimal design region. This region defines the number of pinion teeth for the most compact design. The number is a function of the gear ratio only. A design example illustrating this procedure is also given.
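The exhaustive search over the design space of pinion tooth number and diametral pitch can be sketched directly. The feasibility callback below stands in for the scoring, pitting, bending, and interference constraints developed in the paper, and the stress proxy used in the test is hypothetical.

```python
def most_compact_gearset(ratio, pitches, n1_min=18, n1_max=60, feasible=None):
    """Exhaustive search for the smallest center distance
    C = (N1 + N2) / (2 * Pd), with N2 = ratio * N1.
    `feasible(n1, n2, pd)` encodes the strength/kinematic constraints;
    n1_min defaults to a typical interference limit for standard teeth."""
    best = None
    for pd in pitches:
        for n1 in range(n1_min, n1_max + 1):
            n2 = round(ratio * n1)
            if feasible is not None and not feasible(n1, n2, pd):
                continue
            c = (n1 + n2) / (2.0 * pd)
            if best is None or c < best[0]:
                best = (c, n1, n2, pd)
    return best  # (center distance, N1, N2, Pd)
```

As in the abstract, the winning pinion tooth count depends on the gear ratio and the constraint boundary, not on the pitch alone.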
An optimal period for setting sustained variability levels.
Stokes, P D; Balsam, P
2001-03-01
In two experiments, we investigated how explicit reinforcement of highly variable behavior at different points in training affected performance after the requirement was eliminated. Two versions of a computer game, differing in the number of possible solution paths, were used. In each, an optimal period of training for producing sustained high variability was found. Exposure to a high lag requirement shortly after acquisition sustained variability. Rewarding variability at other times did not have a sustained effect. The implications for learning and problem solving are discussed. PMID:11340864
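The reinforcement contingency described ("a high lag requirement") can be stated in a few lines; this is a generic lag-n variability criterion, not the authors' apparatus or game.

```python
def lag_n_reinforced(response, history, lag):
    """Lag-n variability schedule: reinforce the current response sequence only
    if it differs from each of the previous `lag` sequences emitted."""
    return all(response != past for past in history[-lag:])
```

Raising `lag` forces more varied behavior; the experiments asked when in training such a requirement should be imposed for the variability to persist after it is removed.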
A set of fortran subroutines for optimizing radiotherapy plans.
Redpath, A T; Vickery, B L; Wright, D H
1975-12-01
Quadratic Programming techniques have been applied to the optimization of radiation field weighting in Radiotherapy planning. Wedge selection has also been included by means of an exhaustive search. The radiation dose at any point in the patient may be constrained to be less than a stated percentage of the tumour dose. The routines have been successfully interfaced into a small computer interactive planning system, but they could represent an even more powerful tool in batch and time sharing systems. Minimum operator intervention is required in their use.
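The same idea, nonnegative field weights fitted to a prescribed tumour dose with upper-bound constraints at selected points, can be illustrated with a modern projected-gradient sketch using a quadratic penalty. This is a loose illustration, not the 1975 Fortran routines; the matrices and limits in the test are invented.

```python
import numpy as np

def optimize_weights(A, target, B=None, limits=None, penalty=100.0, steps=2000):
    """Minimise ||A w - target||^2 + penalty * ||max(B w - limits, 0)||^2
    subject to w >= 0, by projected gradient descent.
    A[i, j] : dose at tumour point i per unit weight of field j.
    B, limits : optional rows encoding 'dose at point k must stay below limits[k]'."""
    n = A.shape[1]
    # Crude feasible starting guess scaled to the mean prescribed dose.
    w = np.full(n, target.mean() / max(A.sum(axis=1).mean(), 1e-12))
    # Conservative step size from the curvature of objective + penalty.
    lr = 0.5 / (np.linalg.norm(A, 2) ** 2
                + (penalty * np.linalg.norm(B, 2) ** 2 if B is not None else 0.0))
    for _ in range(steps):
        grad = 2.0 * A.T @ (A @ w - target)
        if B is not None:
            viol = np.maximum(B @ w - limits, 0.0)
            grad += 2.0 * penalty * B.T @ viol
        w = np.maximum(w - lr * grad, 0.0)  # project onto the nonnegative orthant
    return w
```

A true QP solver, as used in the paper, would handle the dose constraints exactly rather than by penalty.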
Brandenburg, Jan Gerit; Alessio, Maristella; Civalleri, Bartolomeo; Peintinger, Michael F; Bredow, Thomas; Grimme, Stefan
2013-09-26
We extend the previously developed geometrical correction for the inter- and intramolecular basis set superposition error (gCP) to periodic density functional theory (DFT) calculations. We report gCP results compared to those from the standard Boys-Bernardi counterpoise correction scheme and large basis set calculations. The applicability of the method to molecular crystals as the main target is tested for the benchmark set X23. It consists of 23 noncovalently bound crystals as introduced by Johnson et al. (J. Chem. Phys. 2012, 137, 054103) and refined by Tkatchenko et al. (J. Chem. Phys. 2013, 139, 024705). In order to accurately describe long-range electron correlation effects, we use the standard atom-pairwise dispersion correction scheme DFT-D3. We show that a combination of DFT energies with small atom-centered basis sets, the D3 dispersion correction, and the gCP correction can accurately describe van der Waals and hydrogen-bonded crystals. Mean absolute deviations of the X23 sublimation energies can be reduced by more than 70% and 80% for the standard functionals PBE and B3LYP, respectively, to small residual mean absolute deviations of about 2 kcal/mol (corresponding to 13% of the average sublimation energy). As a further test, we compute the interlayer interaction of graphite for varying distances and obtain a good equilibrium distance and interaction energy of 6.75 Å and -43.0 meV/atom at the PBE-D3-gCP/SVP level. We fit the gCP scheme for a recently developed pob-TZVP solid-state basis set and obtain reasonable results for the X23 benchmark set and the potential energy curve for water adsorption on a nickel (110) surface.
Optimizing distance-based methods for large data sets
NASA Astrophysics Data System (ADS)
Scholl, Tobias; Brenner, Thomas
2015-10-01
Distance-based methods for measuring spatial concentration of industries have gained increasing popularity in the spatial econometrics community. However, a limiting factor for using these methods is their computational complexity, since both their memory requirements and running times are in O(n²). In this paper, we present an algorithm with constant memory requirements and shorter running time, enabling distance-based methods to deal with large data sets. We discuss three recent distance-based methods in spatial econometrics: the D&O-Index by Duranton and Overman (Rev Econ Stud 72(4):1077-1106, 2005), the M-function by Marcon and Puech (J Econ Geogr 10(5):745-762, 2010) and the Cluster-Index by Scholl and Brenner (Reg Stud (ahead-of-print):1-15, 2014). Finally, we present an alternative calculation for the latter index that allows the use of data sets with millions of firms.
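The memory bottleneck can be illustrated with a chunked pairwise-distance histogram that never materialises the full n x n distance matrix; this is a generic sketch of the idea, not the authors' algorithm, whose kernel weighting differs.

```python
import numpy as np

def pair_distance_histogram(points, bin_edges, chunk=512):
    """Histogram of all pairwise distances with memory O(chunk * n) instead of
    O(n^2): process one block of points at a time against the remainder."""
    points = np.asarray(points, dtype=float)
    hist = np.zeros(len(bin_edges) - 1, dtype=np.int64)
    n = len(points)
    for start in range(0, n, chunk):
        block = points[start:start + chunk]
        # Pairs within the block: upper triangle only, so each pair counts once.
        d_in = np.linalg.norm(block[:, None, :] - block[None, :, :], axis=-1)
        iu = np.triu_indices(len(block), k=1)
        hist += np.histogram(d_in[iu], bins=bin_edges)[0]
        # Pairs between this block and every later point.
        rest = points[start + len(block):]
        if len(rest):
            d_out = np.linalg.norm(block[:, None, :] - rest[None, :, :], axis=-1)
            hist += np.histogram(d_out.ravel(), bins=bin_edges)[0]
    return hist
```

With a fixed set of bins, the running histogram is the only state kept between blocks, which is what makes million-firm data sets tractable.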
NASA Astrophysics Data System (ADS)
Ma, Zhonghua; Zhang, Yanli; Tuckerman, Mark E.
2012-07-01
It is generally believed that studies of liquid water using the generalized gradient approximation to density functional theory require dispersion corrections in order to obtain reasonably accurate structural and dynamical properties. Here, we report on an ab initio molecular dynamics study of water in the isothermal-isobaric ensemble using a converged discrete variable representation basis set and an empirical dispersion correction due to Grimme [J. Comput. Chem. 27, 1787 (2006), DOI: 10.1002/jcc.20495]. At 300 K and an applied pressure of 1 bar, the density obtained without dispersion corrections is approximately 0.92 g/cm3 while that obtained with dispersion corrections is 1.07 g/cm3, indicating that the empirical dispersion correction overestimates the density by almost as much as it is underestimated without the correction for this converged basis. Radial distribution functions exhibit a loss of structure in the second solvation shell. Comparison of our results with other studies using the same empirical correction suggests the cause of the discrepancy: the Grimme dispersion correction is parameterized for use with a particular basis set; this parameterization is sensitive to this choice and, therefore, is not transferable to other basis sets.
Perturbing engine performance measurements to determine optimal engine control settings
Jiang, Li; Lee, Donghoon; Yilmaz, Hakan; Stefanopoulou, Anna
2014-12-30
Methods and systems for optimizing a performance of a vehicle engine are provided. The method includes determining an initial value for a first engine control parameter based on one or more detected operating conditions of the vehicle engine, determining a value of an engine performance variable, and artificially perturbing the determined value of the engine performance variable. The initial value for the first engine control parameter is then adjusted based on the perturbed engine performance variable causing the engine performance variable to approach a target engine performance variable. Operation of the vehicle engine is controlled based on the adjusted initial value for the first engine control parameter. These acts are repeated until the engine performance variable approaches the target engine performance variable.
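The perturb-and-adjust loop in the claims can be caricatured in a few lines. The quadratic performance function and gains below are assumptions, and this generic sketch seeks a local optimum by slope estimation rather than tracking a target value as the patent describes.

```python
def tune_parameter(measure, x0, delta=0.05, gain=0.5, iters=100):
    """Perturbation-based tuning sketch: estimate the local slope of a measured
    performance variable by symmetric perturbation of the control parameter,
    then step the parameter in the direction that improves performance."""
    x = x0
    for _ in range(iters):
        slope = (measure(x + delta) - measure(x - delta)) / (2.0 * delta)
        x += gain * slope
    return x
```

In an engine controller, `measure` would be a (noisy) performance reading at the perturbed control setting, and the loop would run continuously rather than for a fixed iteration count.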
Accelerating wavefunction in density-functional-theory embedding by truncating the active basis set.
Bennie, Simon J; Stella, Martina; Miller, Thomas F; Manby, Frederick R
2015-07-14
Methods where an accurate wavefunction is embedded in a density-functional description of the surrounding environment have recently been simplified through the use of a projection operator to ensure orthogonality of orbital subspaces. Projector embedding already offers significant performance gains over conventional post-Hartree-Fock methods by reducing the number of correlated occupied orbitals. However, in our first applications of the method, we used the atomic-orbital basis for the full system, even for the correlated wavefunction calculation in a small, active subsystem. Here, we further develop our method for truncating the atomic-orbital basis to include only functions within or close to the active subsystem. The number of atomic orbitals in a calculation on a fixed active subsystem becomes asymptotically independent of the size of the environment, producing the required O(N⁰) scaling of the cost of the calculation in the active subsystem, and accuracy is controlled by a single parameter. The applicability of this approach is demonstrated for the embedded many-body expansion of binding energies of water hexamers and calculation of reaction barriers of SN2 substitution of fluorine by chlorine in α-fluoroalkanes.
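The geometric part of such a truncation can be sketched directly. The cutoff criterion below (keep a basis shell if its center lies within a fixed radius of any active-subsystem atom) is a plausible reading of a distance-based selection, not the paper's exact single-parameter rule.

```python
import numpy as np

def truncated_basis_indices(shell_centers, active_coords, cutoff):
    """Indices of basis shells centred within `cutoff` of any active atom.
    As the environment grows at fixed active subsystem, the retained count
    saturates, giving the O(N^0) scaling discussed in the abstract."""
    shell_centers = np.asarray(shell_centers, dtype=float)
    active_coords = np.asarray(active_coords, dtype=float)
    d = np.linalg.norm(shell_centers[:, None, :] - active_coords[None, :, :], axis=-1)
    return np.where(d.min(axis=1) <= cutoff)[0]
```

The correlated calculation would then be assembled only from the retained shells, with the single cutoff controlling accuracy.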
NASA Astrophysics Data System (ADS)
Heaps, Charles W.; Mazziotti, David A.
2016-08-01
Quantum molecular dynamics requires an accurate representation of the molecular potential energy surface from a minimal number of electronic structure calculations, particularly for nonadiabatic dynamics where excited states are required. In this paper, we employ pseudospectral sampling of time-dependent Gaussian basis functions for the simulation of non-adiabatic dynamics. Unlike other methods, the pseudospectral Gaussian molecular dynamics tests the Schrödinger equation with N Dirac delta functions located at the centers of the Gaussian functions, reducing the scaling of potential energy evaluations from O(N²) to O(N). By projecting the Gaussian basis onto discrete points in space, the method is capable of efficiently and quantitatively describing the nonadiabatic population transfer and intra-surface quantum coherence. We investigate three model systems: the photodissociation of three coupled Morse oscillators, the bound state dynamics of two coupled Morse oscillators, and a two-dimensional model for collinear triatomic vibrational dynamics. In all cases, the pseudospectral Gaussian method is in quantitative agreement with numerically exact calculations. The results are promising for nonadiabatic molecular dynamics in molecular systems where strongly correlated ground or excited states require expensive electronic structure calculations.
Zhang, Jun; Dolg, Michael
2014-01-28
The third-order incremental dual-basis set zero-buffer approach was combined with CCSD(T)-F12x (x = a, b) theory to develop a new approach, i.e., the inc3-db-B0-CCSD(T)-F12 method, which can be applied as a black-box procedure to efficiently obtain the near-complete basis set (CBS) limit of the CCSD(T) energies also for large systems. We tested this method on several cases of different chemical nature: four complexes taken from the standard benchmark sets S66 and X40, the energy difference between isomers of the water hexamer, and the rotation barrier of biphenyl. The results show that our method has an error relative to the best estimate of the CBS energy of only 0.2 kcal/mol or less. By parallelization, our method can accomplish CCSD(T)-F12 calculations of about 60 correlated electrons and 800 basis functions in only several days, which would be impossible on ordinary hardware with a standard implementation. We conclude that the inc3-db-B0-CCSD(T)-F12a/AVTZ method, which is of CCSD(T)/AV5Z quality, is close to the limit of accuracy that one can currently achieve for large systems.
On the Kohn-Sham density response in a localized basis set
NASA Astrophysics Data System (ADS)
Foerster, Dietrich; Koval, Peter
2009-07-01
We construct the Kohn-Sham density response function χ0 in a previously described basis of the space of orbital products. The calculational complexity of our construction is O(N2Nω) for a molecule of N atoms and in a spectroscopic window of Nω frequency points. As a first application, we use χ0 to calculate the molecular spectra from the Petersilka-Gossmann-Gross equation. With χ0 as input, we obtain the correct spectra with an extra computational effort that grows also as O(N2Nω) and, therefore, less steeply in N than the O(N3) complexity of solving Casida's equations. Our construction should be useful for the study of excitons in molecular physics and in related areas where χ0 is a crucial ingredient.
NASA Astrophysics Data System (ADS)
Purwanto, Wirawan; Krakauer, Henry; Zhang, Shiwei; Virgus, Yudistira
2011-03-01
Weak H2 physisorption energies present a significant challenge to first-principles theoretical modeling and prediction of materials for H storage. There has been controversy regarding the accuracy of DFT on systems involving Ca cations. We use the auxiliary-field quantum Monte Carlo (AFQMC) method to accurately predict the binding energy of the Ca⁺-4H₂ complex. AFQMC scales as Nbasis^3 and has demonstrated accuracy similar to or better than the gold-standard coupled cluster CCSD(T) method. We apply a modified Cholesky decomposition to achieve an efficient Hubbard-Stratonovich transformation in AFQMC at large basis sizes. We employ the largest correlation-consistent basis sets available, up to Ca/cc-pCV5Z, to extrapolate to the complete basis set limit. The calculated potential energy curve exhibits binding with a double-well structure. Supported by DOE and NSF. Calculations were performed at OLCF Jaguar and CPD.
Lipka, E; Amidon, G L
1999-11-01
The recently proposed Biopharmaceutics Classification System can be used to classify drugs and set standards for scale-up and post-approval changes as well as standards for in vitro/in vivo correlation for immediate and controlled release products. This classification scheme is based on determining the underlying process that controls the drug absorption rate and extent, namely, drug solubility and intestinal membrane permeability. Theoretical analysis and experimental results suggest that a permeability/solubility classification scheme can be used to set more rational drug standards. In particular, high solubility/high permeability, rapidly dissolving drugs may be regulated on the basis of a single-point rapid dissolution test, while low solubility, dissolution rate limited drugs can be regulated based on an in vitro dissolution test that reflects the in vivo dissolution process. This dissolution test may include multiple time points, media changes, and surfactants in order to reflect the in vivo dissolution process, and would be used by the manufacturer for requesting a waiver from a bioequivalence (BE) trial. For controlled release products, the regulation of bioequivalence standards is more complex due to the potential differences in position-dependent permeability/solubility and metabolism of drugs along the gastrointestinal tract. These differences may result in drug absorption rates that are highly transit time dependent. This paper presents the current status of the biopharmaceutic drug classification scheme, the underlying database developed, and its application to optimizing IR and CR products.
Liquid Water through Density-Functional Molecular Dynamics: Plane-Wave vs Atomic-Orbital Basis Sets.
Miceli, Giacomo; Hutter, Jürg; Pasquarello, Alfredo
2016-08-01
We determine and compare structural, dynamical, and electronic properties of liquid water at near ambient conditions through density-functional molecular dynamics simulations, when using either plane-wave or atomic-orbital basis sets. In both frameworks, the electronic structure and the atomic forces are self-consistently determined within the same theoretical scheme based on a nonlocal density functional accounting for van der Waals interactions. The overall properties of liquid water achieved within the two frameworks are in excellent agreement with each other. Thus, our study supports that implementations with plane-wave or atomic-orbital basis sets yield equivalent results and can be used indiscriminately in study of liquid water or aqueous solutions.
NASA Technical Reports Server (NTRS)
Almloef, Jan; Taylor, Peter R.
1989-01-01
A recently proposed scheme for using natural orbitals from atomic configuration interaction (CI) wave functions as a basis set for linear combination of atomic orbitals (LCAO) calculations is extended for the calculation of molecular properties. For one-electron properties like multipole moments, which are determined largely by the outermost regions of the molecular wave function, it is necessary to increase the flexibility of the basis in these regions. This is most easily done by uncontracting the outermost Gaussian primitives and/or by adding diffuse primitives. A similar approach can be employed for the calculation of polarizabilities. Properties which are not dominated by the long-range part of the wave function, such as spectroscopic constants or electric field gradients at the nucleus, can generally be treated satisfactorily with the original atomic natural orbital (ANO) sets.
NASA Astrophysics Data System (ADS)
Jorge, F. E.; Martins, L. S. C.; Franco, M. L.
2016-01-01
Segmented all-electron basis sets of valence double zeta quality plus polarization functions (DZP) for the elements from Ce to Lu are generated to be used with the non-relativistic and Douglas-Kroll-Hess (DKH) Hamiltonians. At the B3LYP level, the DZP-DKH atomic ionization energies and the equilibrium bond lengths and atomization energies of the lanthanide trifluorides are evaluated and compared with benchmark theoretical and experimental data reported in the literature. In general, this compact set shows regular, efficient, and reliable performance. It can be particularly useful in molecular property calculations that require explicit treatment of the core electrons.
Xu, Xuefei; Truhlar, Donald G
2011-09-13
For molecules containing the fourth-period element arsenic, we test (i, ii) the accuracy of all-electron (AE) basis sets from the def2-xZVP and ma-xZVP series (where xZ is S, TZ, or QZ), (iii) the accuracy of the 6-311G series of AE basis sets with additional polarization and diffuse functions, and (iv) the performance of effective core potentials (ECPs). The first set of tests involves basis-set convergence studies with eleven density functionals for five cases: equilibrium dissociation energy (De) of As2, vertical ionization potential (VIP) of As2, IP of As, acid dissociation of H3AsO4, and De of FeAs. A second set of tests involves the same kinds of basis-set convergence studies for the VIP and De values of As3 and As4 clusters. Both relativistic and nonrelativistic calculations are considered, including in each case both AE calculations and calculations with ECPs. Convergence and accuracy are assessed by comparing to relativistic AE calculations with the cc-pV5Z-DK or ma-cc-pV5Z-DK basis and to nonrelativistic AE calculations with the cc-pV5Z or ma-cc-pV5Z basis. The primary objective of this study is to evaluate the abilities of ECPs with both their recommended basis sets and other basis sets to reproduce the results of all-electron relativistic calculations. The performance of the def2 and ma series basis sets is consistent with their sizes, and quadruple-ζ basis sets are the best. The def2-TZVP basis set performs better than most of the 6-311G series basis sets, which are the most commonly used basis sets in the previous studies of arsenic compounds. However, relativistic def2-TZVP calculations are not recommended. The large-core ECPs, which are the only available ECPs for arsenic in the popular Gaussian program, have average errors of 9-12 kcal/mol for the arsenic systems studied; therefore, these ECPs are not recommended. The triple-ζ small-core relativistic ECP (RECP) basis set cc-pVTZ-PP is found to have performance better than that of the def2-TZVP
Řezáč, Jan; de la Lande, Aurélien
2015-02-10
Separation of the energetic contribution of charge transfer to interaction energy in noncovalent complexes would provide important insight into the mechanisms of the interaction. However, the calculation of charge-transfer energy is not an easy task. It is not a physically well-defined term, and the results might depend on how it is described in practice. Commonly, the charge transfer is defined in terms of molecular orbitals; in this framework, however, the charge transfer vanishes as the basis set size increases toward the complete basis set limit. This can be avoided by defining the charge transfer in terms of the spatial extent of the electron densities of the interacting molecules, but the schemes used so far do not reflect the actual electronic structure of each particular system and thus are not reliable. We propose a spatial partitioning of the system, which is based on a charge transfer-free reference state, namely superimposition of electron densities of the noninteracting fragments. We show that this method, employing constrained DFT for the calculation of the charge-transfer energy, yields reliable results and is robust with respect to the strength of the charge transfer, the basis set size, and the DFT functional used. Because it is based on DFT, the method is applicable to rather large systems.
Molecule intrinsic minimal basis sets. II. Bonding analyses for Si4H6 and Si2 to Si10.
Lu, W C; Wang, C Z; Schmidt, M W; Bytautas, L; Ho, K M; Ruedenberg, K
2004-02-01
The method, introduced in the preceding paper, for recasting molecular self-consistent field (SCF) or density functional theory (DFT) orbitals in terms of intrinsic minimal bases of quasiatomic orbitals, which differ only little from the optimal free-atom minimal-basis orbitals, is used to elucidate the bonding in several silicon clusters. The applications show that the quasiatomic orbitals deviate from the minimal-basis SCF orbitals of the free atoms by only very small deformations and that the latter arise mainly from bonded neighbor atoms. The Mulliken population analysis in terms of the quasiatomic minimal-basis orbitals leads to a quantum mechanical interpretation of small-ring strain in terms of antibonding encroachments of localized molecular orbitals and identifies the origin of the bond-stretch isomerization in Si4H6. In the virtual SCF/DFT orbital space, the method places the qualitative notion of virtual valence orbitals on a firm basis and provides an unambiguous ab initio identification of the frontier orbitals.
NASA Astrophysics Data System (ADS)
Pavese, Christian; Tibaldi, Carlo; Larsen, Torben J.; Kim, Taeseong; Thomsen, Kenneth
2016-09-01
The aim is to provide a fast and reliable approach to estimating ultimate blade loads for a multidisciplinary design optimization (MDO) framework. For blade design purposes, the standards require a large number of computationally expensive simulations, which cannot be efficiently run at each cost function evaluation of an MDO process. This work describes a method that allows the calculation of blade load envelopes to be integrated inside an MDO loop. Ultimate blade load envelopes are calculated for a baseline design and for a design obtained after an iteration of an MDO. These envelopes are computed for a full standard design load basis (DLB) and for a deterministic reduced DLB. Ultimate loads extracted from the two DLBs, for each of the two blade designs, are compared and analyzed. Although the reduced DLB supplies ultimate loads of different magnitude, the shapes of the estimated envelopes are similar to those computed using the full DLB. This observation is used to propose a scheme that is computationally cheap and can be integrated inside an MDO framework, providing a sufficiently reliable estimation of the blade's ultimate loading. The latter aspect is of key importance when design variables implementing passive control methodologies are included in the formulation of the optimization problem. An MDO of a 10 MW wind turbine blade is presented as an applied case study to show the efficacy of the reduced DLB concept.
Ermler, Walter V.; Tilson, Jeffrey L.
2012-12-15
A procedure for structuring generally contracted valence-core/valence basis sets of Gaussian-type functions for use with relativistic effective core potentials (gcv-c/v-RECP basis sets) is presented. Large valence basis sets are enhanced using a compact basis set derived for outer-core electrons in the presence of small-core RECPs. When core electrons are represented by relativistic effective core potentials (RECPs) at appropriate levels of theory, these basis sets are shown to provide accurate representations of atomic and molecular valence and outer-core electrons. Core/valence polarization and correlation effects can be calculated using these basis sets through standard methods for treating electron correlation. Calculations of energies and spectra for Ru, Os, Ir, In, and Cs are reported. Spectroscopic constants for RuO2+, OsO2+, Cs2, and InH are calculated and compared with experiment.
NASA Astrophysics Data System (ADS)
Peterson, Kirk A.; Figgen, Detlev; Dolg, Michael; Stoll, Hermann
2007-03-01
Scalar-relativistic pseudopotentials and corresponding spin-orbit potentials of the energy-consistent variety have been adjusted for the simulation of the [Ar]3d10 cores of the 4d transition metal elements Y-Pd. These potentials have been determined in a one-step procedure using numerical two-component calculations so as to reproduce atomic valence spectra from four-component all-electron calculations. The latter have been performed at the multi-configuration Dirac-Hartree-Fock level, using the Dirac-Coulomb Hamiltonian and perturbatively including the Breit interaction. The derived pseudopotentials reproduce the all-electron reference data with an average accuracy of 0.03 eV for configurational averages over nonrelativistic orbital configurations and 0.1 eV for individual relativistic states. Basis sets following a correlation consistent prescription have also been developed to accompany the new pseudopotentials. These range in size from cc-pVDZ-PP to cc-pV5Z-PP and also include sets for 4s4p correlation (cc-pwCVDZ-PP through cc-pwCV5Z-PP), as well as those with extra diffuse functions (aug-cc-pVDZ-PP, etc.). In order to accurately assess the impact of the pseudopotential approximation, all-electron basis sets of triple-zeta quality have also been developed using the Douglas-Kroll-Hess Hamiltonian (cc-pVTZ-DK, cc-pwCVTZ-DK, and aug-cc-pVTZ-DK). Benchmark calculations of atomic ionization potentials and 4d(m-2)5s(2) → 4d(m-1)5s(1) electronic excitation energies are reported at the coupled cluster level of theory with extrapolations to the complete basis set limit.
The study of the optimal parameter settings in a hospital supply chain system in Taiwan.
Liao, Hung-Chang; Chen, Meng-Hao; Wang, Ya-huei
2014-01-01
This study proposed the optimal parameter settings for the hospital supply chain system (HSCS) when either the total system cost (TSC) or patient safety level (PSL) (or both simultaneously) was considered as the measure of the HSCS's performance. Four parameters were considered in the HSCS: safety stock, maximum inventory level, transportation capacity, and the reliability of the HSCS. A full-factor experimental design was used to simulate an HSCS for the purpose of collecting data. The response surface method (RSM) was used to construct the regression model, and a genetic algorithm (GA) was applied to obtain the optimal parameter settings for the HSCS. The results show that the best method of obtaining the optimal parameter settings for the HSCS is the simultaneous consideration of both the TSC and the PSL to measure performance. Also, the results of sensitivity analysis based on the optimal parameter settings were used to derive adjustable strategies for the decision-makers. PMID:25250397
Moore, B.A.
2008-07-01
The U.S. Department of Energy (DOE), Office of Environmental Management (EM) conducted its first Remediation Process Optimization (RPO) review in 2004, following the methodology and guidance issued by the Interstate Technology and Regulatory Council (ITRC). The intent of this paper is to briefly summarize the following: (1) the overall benefits of the review process toward improving remedial effectiveness and efficiency at DOE, (2) the types and objectives of completed reviews, and (3) how RPO facilitates technology transfer and is complementary to performance-based environmental management (PBEM). Contract reform began in 1993 at the U.S. Department of Energy (DOE) as a result of the Government Performance and Results Act, followed by recommendations from oversight groups. The precedent of using management and operating (M and O) contracts at DOE facilities, exempt from open competition by definition, shifted to a performance basis. Since 1994, DOE has competed over 70% of its then-existing M and O contracts; most now contain specific performance objectives, measures, and targets that focus on results in mission-critical areas. In 2001, DOE's Office of Environmental Management (EM) integrated performance-based contracting as a core business process. EM resolved to manage cleanup as a project and to encourage innovative contracting strategies, as well as incentives to accelerate risk reduction and cleanup. DOE's efforts to implement performance-based environmental management (PBEM) were further realized by establishing a centralized Office of Acquisitions and the Consolidated Business Center, which support all EM procurements. In 2004, EM began conducting Remediation Process Optimization (RPO) reviews at select field sites, following the methodology published in the 2004 ITRC guidance. To date, EM has completed nine RPO reviews, including: pump and treat system optimization, monitoring program optimization, in situ barrier performance improvement, remedial design review
Kruse, Holger; Grimme, Stefan
2012-04-21
chemistry yields MAD=0.68 kcal/mol, which represents a huge improvement over plain B3LYP/6-31G* (MAD=2.3 kcal/mol). Application of gCP-corrected B97-D3 and HF-D3 to a set of large protein-ligand complexes proves the robustness of the method. Analytical gCP gradients make optimizations of large systems feasible with small basis sets, as demonstrated for the inter-ring distances of 9-helicene and most of the complexes in Hobza's S22 test set. The method is implemented in a freely available FORTRAN program obtainable from the author's website. PMID:22519309
Analysis of the optimal laminated target made up of discrete set of materials
NASA Technical Reports Server (NTRS)
Aptukov, Valery N.; Belousov, Valentin L.
1991-01-01
A new class of problems was analyzed to estimate an optimal structure of laminated targets fabricated from the specified set of homogeneous materials. An approximate description of the perforation process is based on the model of radial hole extension. The problem is solved by using the needle-type variation technique. The desired optimization conditions and quantitative/qualitative estimations of optimal targets were obtained and are discussed using specific examples.
Witte, Jonathon; Neaton, Jeffrey B; Head-Gordon, Martin
2016-05-21
With the aim of systematically characterizing the convergence of common families of basis sets such that general recommendations for basis sets can be made, we have tested a wide variety of basis sets against complete-basis binding energies across the S22 set of intermolecular interactions-noncovalent interactions of small and medium-sized molecules consisting of first- and second-row atoms-with three distinct density functional approximations: SPW92, a form of local-density approximation; B3LYP, a global hybrid generalized gradient approximation; and B97M-V, a meta-generalized gradient approximation with nonlocal correlation. We have found that it is remarkably difficult to reach the basis set limit; for the methods and systems examined, the most complete basis is Jensen's pc-4. The Dunning correlation-consistent sequence of basis sets converges slowly relative to the Jensen sequence. The Karlsruhe basis sets are quite cost effective, particularly when a correction for basis set superposition error is applied: counterpoise-corrected def2-SVPD binding energies are better than corresponding energies computed in comparably sized Dunning and Jensen bases, and on par with uncorrected results in basis sets 3-4 times larger. These trends are exhibited regardless of the level of density functional approximation employed. A sense of the magnitude of the intrinsic incompleteness error of each basis set not only provides a foundation for guiding basis set choice in future studies but also facilitates quantitative comparison of existing studies on similar types of systems. PMID:27208948
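For readers unfamiliar with the counterpoise correction invoked above: in the Boys-Bernardi scheme, each monomer is evaluated in the full dimer basis (the partner's basis functions are kept as ghost atoms), which removes most of the basis set superposition error. A minimal sketch of the bookkeeping, with a hypothetical function name and illustrative energy values that are not from the study:

```python
def counterpoise_interaction(e_dimer, e_mono_a, e_mono_b):
    """Boys-Bernardi counterpoise-corrected interaction energy.

    All three energies must be computed in the *same* dimer basis:
    the two monomer calculations retain the partner's basis functions
    as ghost atoms (basis functions without nuclei or electrons).
    """
    return e_dimer - (e_mono_a + e_mono_b)

# Illustrative numbers in hartree (not from the paper):
e_int = counterpoise_interaction(-100.5, -50.1, -50.2)
```

The same arithmetic applies at any level of theory; only the three single-point energies change.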
Structural basis for inhibition of the histone chaperone activity of SET/TAF-Iβ by cytochrome c.
González-Arzola, Katiuska; Díaz-Moreno, Irene; Cano-González, Ana; Díaz-Quintana, Antonio; Velázquez-Campoy, Adrián; Moreno-Beltrán, Blas; López-Rivas, Abelardo; De la Rosa, Miguel A
2015-08-11
Chromatin is pivotal for regulation of the DNA damage process insofar as it influences access to DNA and serves as a DNA repair docking site. Recent works identify histone chaperones as key regulators of damaged chromatin's transcriptional activity. However, understanding how chaperones are modulated during DNA damage response is still challenging. This study reveals that the histone chaperone SET/TAF-Iβ interacts with cytochrome c following DNA damage. Specifically, cytochrome c is shown to be translocated into cell nuclei upon induction of DNA damage, but not upon stimulation of the death receptor or stress-induced pathways. Cytochrome c was found to competitively hinder binding of SET/TAF-Iβ to core histones, thereby locking its histone-binding domains and inhibiting its nucleosome assembly activity. In addition, we have used NMR spectroscopy, calorimetry, mutagenesis, and molecular docking to provide an insight into the structural features of the formation of the complex between cytochrome c and SET/TAF-Iβ. Overall, these findings establish a framework for understanding the molecular basis of cytochrome c-mediated blocking of SET/TAF-Iβ, which subsequently may facilitate the development of new drugs to silence the oncogenic effect of SET/TAF-Iβ's histone chaperone activity.
NASA Astrophysics Data System (ADS)
Simon, Sílvia; Duran, Miquel
1997-08-01
Quantum molecular similarity (QMS) techniques are used to assess the response of the electron density of various small molecules to the application of a static, uniform electric field. Likewise, QMS is used to analyze the changes in electron density generated by the process of floating a basis set. The results obtained show an interrelation between the floating process, the optimum geometry, and the presence of an external field. Cases involving the Le Chatelier principle are discussed, and the changes in bond critical point properties, self-similarity values, and density differences are analyzed.
Zhao, Bin; Wang, Shuxiao; Donahue, Neil M; Chuang, Wayne; Hildebrandt Ruiz, Lea; Ng, Nga L; Wang, Yangjun; Hao, Jiming
2015-02-17
We evaluate the one-dimensional volatility basis set (1D-VBS) and two-dimensional volatility basis set (2D-VBS) in simulating the aging of SOA derived from toluene and α-pinene against smog-chamber experiments. If we simulate the first-generation products with empirical chamber fits and the subsequent aging chemistry with a 1D-VBS or a 2D-VBS, the models mostly overestimate the SOA concentrations in the toluene oxidation experiments. This is because the empirical chamber fits include both first-generation oxidation and aging; simulating aging in addition to this results in double counting of the initial aging effects. If the first-generation oxidation is treated explicitly, the base-case 2D-VBS underestimates the SOA concentrations and O:C increase of the toluene oxidation experiments; it generally underestimates the SOA concentrations and overestimates the O:C increase of the α-pinene experiments. With the first-generation oxidation treated explicitly, we could modify the 2D-VBS configuration individually for toluene and α-pinene to achieve good model-measurement agreement. However, we are unable to simulate the oxidation of both toluene and α-pinene with the same 2D-VBS configuration. We suggest that future models should implement parallel layers for anthropogenic (aromatic) and biogenic precursors, and that more modeling studies and laboratory research be done to optimize the "best-guess" parameters for each layer.
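As background to the volatility basis set framework evaluated above: a 1D-VBS lumps organic material into bins of fixed saturation concentration C* and partitions each bin between gas and particle phases by absorptive equilibrium, iterating the partitioning coefficients to self-consistency with the total organic aerosol mass. A minimal sketch of that standard partitioning calculation (our own illustration, not the authors' model code):

```python
def vbs_partition(c_star, c_total, tol=1e-10, max_iter=500):
    """Gas-particle partitioning for a 1D volatility basis set.

    c_star  -- saturation concentrations C* of each bin (ug/m^3)
    c_total -- total (gas + particle) mass in each bin (ug/m^3)
    Returns the particle-phase mass in each bin, found by iterating
    xi_i = 1 / (1 + C*_i / C_OA) to self-consistency with the
    organic aerosol mass C_OA = sum_i xi_i * c_total_i.
    """
    c_oa = max(sum(c_total) / 2.0, 1e-12)  # initial guess
    for _ in range(max_iter):
        xi = [1.0 / (1.0 + cs / c_oa) for cs in c_star]
        c_oa_new = sum(x * ct for x, ct in zip(xi, c_total))
        if abs(c_oa_new - c_oa) < tol:
            break
        c_oa = c_oa_new
    return [x * ct for x, ct in zip(xi, c_total)]
```

A 2D-VBS extends the same bookkeeping with a second axis for the oxygen-to-carbon (O:C) ratio, so that aging moves mass both down in volatility and up in O:C.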
An Optimal Set of Flesh Points on Tongue and Lips for Speech-Movement Classification
ERIC Educational Resources Information Center
Wang, Jun; Samal, Ashok; Rong, Panying; Green, Jordan R.
2016-01-01
Purpose: The authors sought to determine an optimal set of flesh points on the tongue and lips for classifying speech movements. Method: The authors used electromagnetic articulographs (Carstens AG500 and NDI Wave) to record tongue and lip movements from 13 healthy talkers who articulated 8 vowels, 11 consonants, a phonetically balanced set of…
Basis-set limit binding energies of Be{sub n} and Mg{sub n} (n=2,3,4) clusters
Lee, Jae Shin
2003-10-01
The general applicability of the basis set and correlation-dependent extrapolation method [J. Chem. Phys. 118, 3035 (2003)], which fits two successive correlation energies with correlation-consistent cc-pVXZ and cc-pV(X+1)Z basis sets [X=D(2),T(3),Q(4)] by (X+k){sup -3} with varying k according to basis-set quality and correlation level, was explored by examining the basis-set limit binding energies of the metallic clusters Be{sub n} and Mg{sub n} (n=2,3,4) at the MP2 (second-order Moller-Plesset perturbation theory) and CCSD(T) (single and double coupled cluster method with perturbative triple correction) levels. The comparison of the extrapolated basis-set limit estimates with highly accurate reference basis-set limits suggests that the extrapolation of correlation contributions to binding energies with only cc-pVDZ and cc-pVTZ basis sets already yields estimates within 1 m-hartree of the reference complete basis-set limits in most cases, signifying the utility of this extrapolation method in the study of larger clusters. The natural extension of this extrapolation method to all-electron correlated computations including core electrons with core-valence cc-pCVXZ basis sets is also made, which shows that the extrapolated estimates for all-electron correlated results are of similar accuracy to those for valence-electron-only correlated results. The comparison of the MP2 and CCSD(T) basis-set limit binding energies with DFT (density-functional theory) results demonstrates the capabilities and limitations of current DFT methods in studying the binding of such clusters.
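The two-point extrapolation described above can be sketched directly: fitting two successive correlation energies to E(X) = E_CBS + A·(X+k)^(-3) gives a closed-form estimate for the complete-basis-set limit. A minimal sketch (function name and the k = 0 default are illustrative, not from the paper):

```python
def extrapolate_cbs(e_x, e_x1, x, k=0.0):
    """Two-point complete-basis-set (CBS) extrapolation.

    Fits successive correlation energies e_x (cardinal number x) and
    e_x1 (cardinal number x+1) to E(X) = E_CBS + A*(X+k)**(-3) and
    returns the estimated CBS limit E_CBS.
    """
    a = (x + k) ** 3       # weight for the smaller basis
    b = (x + 1 + k) ** 3   # weight for the larger basis
    return (e_x1 * b - e_x * a) / (b - a)

# Synthetic check: energies generated exactly from the model form
# E(X) = -1.0 + 0.5*X**(-3) should be extrapolated back to -1.0.
e_dz = -1.0 + 0.5 / 2 ** 3   # X = 2 (cc-pVDZ)
e_tz = -1.0 + 0.5 / 3 ** 3   # X = 3 (cc-pVTZ)
cbs = extrapolate_cbs(e_dz, e_tz, 2)
```

In the paper k varies with basis-set quality and correlation level; passing a nonzero k reproduces that flexibility.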
Aerostructural Level Set Topology Optimization for a Common Research Model Wing
NASA Technical Reports Server (NTRS)
Dunning, Peter D.; Stanford, Bret K.; Kim, H. Alicia
2014-01-01
The purpose of this work is to use level set topology optimization to improve the design of a representative wing box structure for the NASA common research model. The objective is to minimize the total compliance of the structure under aerodynamic and body force loading, where the aerodynamic loading is coupled to the structural deformation. A taxi bump case was also considered, where only body force loads were applied. The trim condition that aerodynamic lift must balance the total weight of the aircraft is enforced by allowing the root angle of attack to change. The level set optimization method is implemented on an unstructured three-dimensional grid, so that the method can optimize a wing box with arbitrary geometry. Fast matching and upwind schemes are developed for an unstructured grid, which make the level set method robust and efficient. The adjoint method is used to obtain the coupled shape sensitivities required to perform aerostructural optimization of the wing box structure.
Generation of pareto optimal ensembles of calibrated parameter sets for climate models.
Dalbey, Keith R.; Levy, Michael Nathan
2010-12-01
Climate models have a large number of inputs and outputs. In addition, diverse parameter sets can match observations similarly well. These factors make calibrating the models difficult. But as the Earth enters a new climate regime, parameter sets may cease to match observations. History matching is necessary but not sufficient for good predictions. We seek a 'Pareto optimal' ensemble of calibrated parameter sets for the CCSM climate model, in which no individual criterion can be improved without worsening another. One Multi Objective Genetic Algorithm (MOGA) optimization typically requires thousands of simulations but produces an ensemble of Pareto optimal solutions. Our simulation budget of 500-1000 runs allows us to perform the MOGA optimization once, but with far fewer evaluations than normal. We devised an analytic test problem to aid in the selection of MOGA settings. The test problem's Pareto set is the surface of a 6-dimensional hypersphere with radius 1 centered at the origin, or rather the portion of it in the [0,1] octant. We also explore starting MOGA from a space-filling Latin Hypercube sample design, specifically Binning Optimal Symmetric Latin Hypercube Sampling (BOSLHS), instead of Monte Carlo (MC). We compare the Pareto sets based on: their number of points, N, larger is better; their RMS distance, d, to the ensemble's center, 0.5553 is optimal; their average radius, {mu}(r), 1 is optimal; their radius standard deviation, {sigma}(r), 0 is optimal. The estimated distributions for these metrics when starting from MC and BOSLHS are shown in Figs. 1 and 2.
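The four ensemble metrics quoted above (N; RMS distance d to the ensemble's own center; mean and standard deviation of point radii from the origin) are simple to compute. A pure-Python sketch, with the function name chosen here for illustration:

```python
import math

def pareto_metrics(points):
    """Metrics for a Pareto ensemble: size N, RMS distance to the
    ensemble's center, and mean/std of point radii from the origin."""
    n = len(points)
    dim = len(points[0])
    center = [sum(p[i] for p in points) / n for i in range(dim)]
    d_rms = math.sqrt(
        sum(sum((p[i] - center[i]) ** 2 for i in range(dim)) for p in points) / n
    )
    radii = [math.sqrt(sum(x * x for x in p)) for p in points]
    mu = sum(radii) / n
    sigma = math.sqrt(sum((r - mu) ** 2 for r in radii) / n)
    return n, d_rms, mu, sigma

# For points lying exactly on the unit sphere, mu(r) = 1 and sigma(r) = 0,
# matching the optimal values quoted for the hypersphere test problem.
pts = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
n, d, mu, sigma = pareto_metrics(pts)
```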
Yao, Y. X.; Wang, C. Z.; Ho, K. M.
2010-06-16
A chemical bonding scheme is presented for the analysis of solid-state systems. The scheme is based on the intrinsic oriented quasiatomic minimal-basis-set orbitals (IO-QUAMBOs) previously developed by Ivanic and Ruedenberg for molecular systems. In the solid-state scheme, IO-QUAMBOs are generated by a unitary transformation of the quasiatomic orbitals located at each site of the system with the criteria of maximizing the sum of the fourth power of interatomic orbital bond order. Possible bonding and antibonding characters are indicated by the single particle matrix elements, and can be further examined by the projected density of states. We demonstrate the method by applications to graphene and (6,0) zigzag carbon nanotube. The oriented-orbital scheme automatically describes the system in terms of sp{sup 2} hybridization. The effect of curvature on the electronic structure of the zigzag carbon nanotube is also manifested in the deformation of the intrinsic oriented orbitals as well as a breaking of symmetry leading to nonzero single particle density matrix elements. In an additional study, the analysis is performed on the Al{sub 3}V compound. The main covalent bonding characters are identified in a straightforward way without resorting to the symmetry analysis. Our method provides a general way for chemical bonding analysis of ab initio electronic structure calculations with any type of basis sets.
Cost optimization for series-parallel execution of a collection of intersecting operation sets
NASA Astrophysics Data System (ADS)
Dolgui, Alexandre; Levin, Genrikh; Rozin, Boris; Kasabutski, Igor
2016-05-01
A collection of intersecting sets of operations is considered. These sets of operations are performed successively. The operations of each set are activated simultaneously. Operation durations can be modified. The cost of each operation decreases with the increase in operation duration. In contrast, the additional expenses for each set of operations are proportional to its time. The problem of selecting the durations of all operations that minimize the total cost under a constraint on the completion time for the whole collection of operation sets is studied. The mathematical model and a method to solve this problem are presented. The proposed method is based on a combination of Lagrangian relaxation and dynamic programming. The results of numerical experiments that illustrate the performance of the proposed method are presented. This approach was used for the optimization of multi-spindle machines and machining lines, but the problem is common in engineering optimization and thus the techniques developed could be useful for other applications.
Yoo, Sung-Hoon; Oh, Sung-Kwun; Pedrycz, Witold
2015-09-01
In this study, we propose a hybrid method of face recognition by using face region information extracted from the detected face region. In the preprocessing part, we develop a hybrid approach based on the Active Shape Model (ASM) and the Principal Component Analysis (PCA) algorithm. At this step, we use a CCD (Charge Coupled Device) camera to acquire a facial image by using AdaBoost and then Histogram Equalization (HE) is employed to improve the quality of the image. ASM extracts the face contour and image shape to produce a personal profile. Then we use a PCA method to reduce dimensionality of face images. In the recognition part, we consider the improved Radial Basis Function Neural Networks (RBF NNs) to identify a unique pattern associated with each person. The proposed RBF NN architecture consists of three functional modules realizing the condition phase, the conclusion phase, and the inference phase completed with the help of fuzzy rules coming in the standard 'if-then' format. In the formation of the condition part of the fuzzy rules, the input space is partitioned with the use of Fuzzy C-Means (FCM) clustering. In the conclusion part of the fuzzy rules, the connections (weights) of the RBF NNs are represented by four kinds of polynomials such as constant, linear, quadratic, and reduced quadratic. The values of the coefficients are determined by running a gradient descent method. The output of the RBF NNs model is obtained by running a fuzzy inference method. The essential design parameters of the network (including learning rate, momentum coefficient and fuzzification coefficient used by the FCM) are optimized by means of Differential Evolution (DE). The proposed P-RBF NNs (Polynomial based RBF NNs) are applied to facial recognition and its performance is quantified from the viewpoint of the output performance and recognition rate. PMID:26163042
NASA Astrophysics Data System (ADS)
Thomas, Patrick Ryan
Large simulation cell sizes, relativistic effects, and the need to correctly model excited state properties are major impediments to the accurate prediction of the optical properties of candidate materials for solid-state laser crystal and luminescent applications. To overcome these challenges, new methods must be created to improve the electron orbital wavefunction and interactions. In this work, a method has been developed to create new analytical four-component, fully-relativistic and single-component scalar relativistic descriptions of the atomic orbital wave functions from Grasp2K numerically represented atomic orbitals. In addition, adapted theory for the calculation of the relativistic kinetic energy contribution to the Hamiltonian, which bypasses directly solving the Dirac equation, has been explicated. The orbital description improvements are tested against YAG, YBCO, SnO2 and BiF3. The improvements to the basis set yield gains in both computational speed and accuracy.
NASA Astrophysics Data System (ADS)
Guan, Qingze; Blume, Doerte
2016-05-01
The explicitly correlated Gaussian (ECG) basis set expansion approach is a variational approach that has been used in various areas, including molecular, nuclear, atomic, and chemical physics. In the field of cold atoms, for example, the ECG approach has been used to calculate the eigenenergies and eigenstates of few-body systems governed by Efimov physics. Since the first experimental realization of synthetic gauge fields, few-body systems with spin-orbit coupling have attracted a great deal of attention. Here, the ECG approach is customized to few-body systems with both short-range interactions and spin-orbit coupling. Benchmark tests and a performance analysis will be presented. Support by the NSF is gratefully acknowledged.
Many-body basis-set reduction applied to the two-dimensional t-Jz model
NASA Astrophysics Data System (ADS)
Riera, J.; Dagotto, E.
1993-06-01
A simple variation of the Lanczos method is discussed. The technique is based on a systematic reduction of the size of the Hilbert space of the model under consideration, and it has many similarities with the basis-set-reduction approach recently introduced by Wenzel and Wilson in the context of quantum chemistry. As an example, the two-dimensional t-Jz model of strongly correlated electrons is studied. Accurate results for the ground-state energy can be obtained on clusters of up to 50 sites, which are unreachable by conventional Lanczos approaches. In particular, the energy of one and two holes is analyzed as a function of Jz/t. In the bulk limit, the numerical results suggest that a finite coupling (Jz/t)c ~ 0.18 is necessary to induce "binding" of holes in the model.
Andrade, Xavier; Aspuru-Guzik, Alán
2013-10-01
We discuss the application of graphical processing units (GPUs) to accelerate real-space density functional theory (DFT) calculations. To make our implementation efficient, we have developed a scheme to expose the data parallelism available in the DFT approach; this is applied to the different procedures required for a real-space DFT calculation. We present results for current-generation GPUs from AMD and Nvidia, which show that our scheme, implemented in the free code Octopus, can reach a sustained performance of up to 90 GFlops for a single GPU, representing a significant speed-up when compared to the CPU version of the code. Moreover, for some systems, our implementation can outperform a GPU Gaussian basis set code, showing that the real-space approach is a competitive alternative for DFT simulations on GPUs. PMID:26589153
NASA Technical Reports Server (NTRS)
Decker, Arthur J.
2001-01-01
Artificial neural networks have been used for a number of years to process holography-generated characteristic patterns of vibrating structures. This technology depends critically on the selection and the conditioning of the training sets. A scaling operation called folding is discussed for conditioning training sets optimally for training feed-forward neural networks to process characteristic fringe patterns. Folding allows feed-forward nets to be trained easily to detect damage-induced vibration-displacement-distribution changes as small as 10 nm. A specific application to aerospace of neural-net processing of characteristic patterns is presented to motivate the conditioning and optimization effort.
NASA Astrophysics Data System (ADS)
Oberhofer, Harald; Blumberger, Jochen
2010-12-01
We present a plane wave basis set implementation for the calculation of electronic coupling matrix elements of electron transfer reactions within the framework of constrained density functional theory (CDFT). Following the work of Wu and Van Voorhis [J. Chem. Phys. 125, 164105 (2006)], the diabatic wavefunctions are approximated by the Kohn-Sham determinants obtained from CDFT calculations, and the coupling matrix element is calculated by an efficient integration scheme. Our results for intermolecular electron transfer in small systems agree very well with high-level ab initio calculations based on generalized Mulliken-Hush theory, and with previous local basis set CDFT calculations. The effect of thermal fluctuations on the coupling matrix element is demonstrated for intramolecular electron transfer in the tetrathiafulvalene-diquinone (Q-TTF-Q-) anion. Sampling the electronic coupling along density functional based molecular dynamics trajectories, we find that thermal fluctuations, in particular the slow bending motion of the molecule, can lead to changes in the instantaneous electron transfer rate by more than an order of magnitude. The thermal average, <|H_ab|^2>^(1/2) = 6.7 mH, is significantly higher than the value obtained for the minimum energy structure, |H_ab| = 3.8 mH. While CDFT in combination with generalized gradient approximation (GGA) functionals describes the intermolecular electron transfer in the studied systems well, exact exchange is required for Q-TTF-Q- in order to obtain coupling matrix elements in agreement with experiment (3.9 mH). The implementation presented opens up the possibility to compute electronic coupling matrix elements for extended systems where donor, acceptor, and the environment are treated at the quantum mechanical (QM) level.
Cignetti, Fabien; Salvia, Emilie; Anton, Jean-Luc; Grosbras, Marie-Hélène; Assaiante, Christine
2016-01-01
Conventional analysis of functional magnetic resonance imaging (fMRI) data using the general linear model (GLM) employs a neural model convolved with a canonical hemodynamic response function (HRF) peaking 5 s after stimulation. Incorporation of a further basis function, namely the canonical HRF temporal derivative, accounts for delays in the hemodynamic response to neural activity. A population that may benefit from this flexible approach is children whose hemodynamic response is not yet mature. Here, we examined the effects of using the set based on the canonical HRF plus its temporal derivative on both first- and second-level GLM analyses, through simulations and using developmental data (an fMRI dataset on proprioceptive mapping in children and adults). Simulations of delayed fMRI first-level data emphasized the benefit of carrying forward to the second-level a derivative boost that combines derivative and nonderivative beta estimates. In the experimental data, second-level analysis using a paired t-test showed increased mean amplitude estimate (i.e., increased group contrast mean) in several brain regions related to proprioceptive processing when using the derivative boost compared to using only the nonderivative term. This was true especially in children. However, carrying forward to the second-level the individual derivative boosts had adverse consequences on random-effects analysis that implemented one-sample t-test, yielding increased between-subject variance, thus affecting group-level statistic. Boosted data also presented a lower level of smoothness that had implication for the detection of group average activation. Imposing soft constraints on the derivative boost by limiting the time-to-peak range of the modeled response within a specified range (i.e., 4–6 s) mitigated these issues. These findings support the notion that there are pros and cons to using the informed basis set with developmental data. PMID:27471441
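The "derivative boost" combining the canonical-HRF and temporal-derivative beta estimates is commonly computed as a signed root-sum-of-squares that keeps the sign of the HRF beta; the abstract does not spell out its exact formula, so the sketch below assumes this standard formulation from the fMRI literature:

```python
import math

def derivative_boost(beta_hrf, beta_deriv):
    """Combined amplitude estimate from canonical-HRF and temporal-
    derivative betas: signed root-sum-of-squares, sign of the HRF beta.
    (Assumed standard formulation; not verbatim from the paper.)"""
    return math.copysign(math.sqrt(beta_hrf ** 2 + beta_deriv ** 2), beta_hrf)

# A response delayed relative to the canonical HRF loads partly on the
# derivative regressor; the boost recovers amplitude lost from beta_hrf.
boosted = derivative_boost(3.0, -4.0)   # magnitude 5.0, sign of beta_hrf
```

The soft constraint discussed in the abstract would correspond to applying this boost only when the implied time-to-peak stays within the accepted range (e.g., 4-6 s).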
Fletcher, Jordan M; Boyle, Aimee L; Bruning, Marc; Bartlett, Gail J; Vincent, Thomas L; Zaccai, Nathan R; Armstrong, Craig T; Bromley, Elizabeth H C; Booth, Paula J; Brady, R Leo; Thomson, Andrew R; Woolfson, Derek N
2012-06-15
Protein engineering, chemical biology, and synthetic biology would benefit from toolkits of peptide and protein components that could be exchanged reliably between systems while maintaining their structural and functional integrity. Ideally, such components should be highly defined and predictable in all respects of sequence, structure, stability, interactions, and function. To establish one such toolkit, here we present a basis set of de novo designed α-helical coiled-coil peptides that adopt defined and well-characterized parallel dimeric, trimeric, and tetrameric states. The designs are based on sequence-to-structure relationships both from the literature and analysis of a database of known coiled-coil X-ray crystal structures. These give foreground sequences to specify the targeted oligomer state. A key feature of the design process is that sequence positions outside of these sites are considered non-essential for structural specificity; as such, they are referred to as the background, are kept non-descript, and are available for mutation as required later. Synthetic peptides were characterized in solution by circular-dichroism spectroscopy and analytical ultracentrifugation, and their structures were determined by X-ray crystallography. Intriguingly, a hitherto widely used empirical rule-of-thumb for coiled-coil dimer specification does not hold in the designed system. However, the desired oligomeric state is achieved by database-informed redesign of that particular foreground and confirmed experimentally. We envisage that the basis set will be of use in directing and controlling protein assembly, with potential applications in chemical and synthetic biology. To help with such endeavors, we introduce Pcomp, an on-line registry of peptide components for protein-design and synthetic-biology applications. PMID:23651206
Fujii, Garuda; Ueta, Tsuyoshi; Mizuno, Mamoru; Nakamura, Masayuki
2015-05-01
Topology-optimized designs of multiple-disk resonators are presented using level-set expression that incorporates surface effects. Effects from total internal reflection at the surfaces of the dielectric disks are precisely simulated by modeling clearly defined dielectric boundaries during topology optimization. The electric field intensity in optimal resonators increases to more than four and a half times the initial intensity in a resonant state, whereas in some cases the Q factor increases by three and a half times that for the initial state. Wavelength-scale link structures between neighboring disks improve the performance of the multiple-disk resonators. PMID:25969226
NASA Technical Reports Server (NTRS)
Mackenzie, Anne I.; Baginski, Michael E.; Rao, Sadasiva M.
2008-01-01
In this work, we present an alternate set of basis functions, each defined over a pair of planar triangular patches, for the method of moments solution of electromagnetic scattering and radiation problems associated with arbitrarily-shaped, closed, conducting surfaces. The present basis functions are point-wise orthogonal to the pulse basis functions previously defined. The prime motivation to develop the present set of basis functions is to utilize them for the electromagnetic solution of dielectric bodies using a surface integral equation formulation which involves both electric and magnetic currents. However, in the present work, only the conducting body solution is presented and compared with other data.
A variational level set method for the topology optimization of steady-state Navier-Stokes flow
NASA Astrophysics Data System (ADS)
Zhou, Shiwei; Li, Qing
2008-12-01
The smoothness of topological interfaces often largely affects the fluid optimization and sometimes makes the density-based approaches, though well established in structural designs, inadequate. This paper presents a level-set method for topology optimization of steady-state Navier-Stokes flow subject to a specific fluid volume constraint. The solid-fluid interface is implicitly characterized by a zero-level contour of a higher-order scalar level set function and can be naturally transformed to other configurations as its host moves. A variational form of the cost function is constructed based upon the adjoint variable and Lagrangian multiplier techniques. To satisfy the volume constraint effectively, the Lagrangian multiplier derived from the first-order approximation of the cost function is amended by the bisection algorithm. The procedure allows evolving initial design to an optimal shape and/or topology by solving the Hamilton-Jacobi equation. Two classes of benchmarking examples are presented in this paper: (1) periodic microstructural material design for the maximum permeability; and (2) topology optimization of flow channels for minimizing energy dissipation. A number of 2D and 3D examples well demonstrated the feasibility and advantage of the level-set method in solving fluid-solid shape and topology optimization problems.
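Level-set evolution by the Hamilton-Jacobi equation, as used above, reduces in its simplest form to an explicit upwind update of phi_t + F|grad phi| = 0. A minimal 1D sketch with the first-order Godunov scheme (grid, speed, and boundary handling are illustrative, not the paper's setup):

```python
import math

def levelset_step(phi, speed, dx, dt):
    """One explicit step of phi_t + F*|grad phi| = 0 in 1D using the
    first-order Godunov upwind scheme (interior points only)."""
    new = phi[:]
    for i in range(1, len(phi) - 1):
        dm = (phi[i] - phi[i - 1]) / dx      # backward difference
        dp = (phi[i + 1] - phi[i]) / dx      # forward difference
        if speed > 0:
            grad = math.sqrt(max(dm, 0.0) ** 2 + min(dp, 0.0) ** 2)
        else:
            grad = math.sqrt(min(dm, 0.0) ** 2 + max(dp, 0.0) ** 2)
        new[i] = phi[i] - dt * speed * grad
    return new

# A signed-distance-like interface |x| - 0.5 expanding at unit speed:
# after t = 0.2 the zero level set sits near |x| = 0.7.
dx, dt = 0.1, 0.01            # dt*F/dx = 0.1 satisfies the CFL condition
xs = [dx * i - 2.0 for i in range(41)]
phi = [abs(x) - 0.5 for x in xs]
for _ in range(20):
    phi = levelset_step(phi, 1.0, dx, dt)
```

In the paper the normal speed F comes from the shape sensitivity (adjoint and Lagrangian multiplier terms) rather than a constant; the update structure is the same.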
Keller, J.; Blarigan, P. Van
1998-08-01
In this manuscript the authors report on two projects, the goal of each of which is to produce cost-effective hydrogen utilization technologies. These projects are: (1) the development of an electrical generation system using a conventional four-stroke spark-ignited internal combustion engine generator combination (SI-GenSet) optimized for maximum efficiency and minimum emissions, and (2) the development of a novel internal combustion engine concept. The SI-GenSet will be optimized to run on either hydrogen or hydrogen-blends. The novel concept seeks to develop an engine that optimizes the Otto cycle in a free piston configuration while minimizing all emissions. To this end the authors are developing a rapid combustion homogeneous charge compression ignition (HCCI) engine using a linear alternator for both power take-off and engine control. Targeted applications include stationary electrical power generation, stationary shaft power generation, hybrid vehicles, and nearly any other application now being accomplished with internal combustion engines.
Zhang, Peng; Liu, Keping; Zhao, Bo; Li, Yuanchun
2015-01-01
Optimal guidance is essential for the soft landing task. However, due to its high computational complexities, it is hardly applied to the autonomous guidance. In this paper, a computationally inexpensive optimal guidance algorithm based on the radial basis function neural network (RBFNN) is proposed. The optimization problem of the trajectory for soft landing on asteroids is formulated and transformed into a two-point boundary value problem (TPBVP). Combining the database of initial states with the relative initial co-states, an RBFNN is trained offline. The optimal trajectory of the soft landing is determined rapidly by applying the trained network in the online guidance. The Monte Carlo simulations of soft landing on the Eros433 are performed to demonstrate the effectiveness of the proposed guidance algorithm. PMID:26367382
NASA Astrophysics Data System (ADS)
Edmondson, Richard; Rodgers, Michael
2008-04-01
Using matched filters to find targets in cluttered images is an old idea. Human operators can interactively find threshold values to be applied to the correlation surface that will do a good job of binarizing it into signal/non-signal pixel regions. Automating the thresholding process with nine measured image statistics is the goal of this paper. The nine values are the mean, maximum, and standard deviation of three images: the input image presumed to have some signal, an NxN matched filter kernel in the shape of the signal, and the correlation surface generated by convolving the input image with the matched filter kernel. Several thousand input images with known target locations and reference images were run through a correlator with kernels that resembled the targets. The nine numbers referred to above were calculated in addition to a threshold found with a time-consuming brute-force algorithm. Multidimensional radial basis functions were associated with each nine-number set. The bump height corresponded to the threshold value. The bump location was within a nine-dimensional hypercube corresponding to the nine numbers scaled so that all the data fell within the interval 0 to 1 on each axis. The sigma (sharpness of the radial basis function) was calculated as a fraction of the squared distance to the closest neighboring bump. A new threshold is calculated as a weighted sum of all the Gaussian bumps in the vicinity of the input 9D vector. The paper will conclude with a table of results using this method compared to other methods.
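The threshold prediction step described above amounts to a normalized weighted sum of Gaussian bumps evaluated at the scaled 9D statistics vector. A sketch under stated assumptions: the normalization and the per-bump widths are illustrative choices, since the abstract does not fix them exactly.

```python
import math

def predict_threshold(x, centers, heights, sigmas):
    """Threshold as a normalized weighted sum of Gaussian bumps placed
    at stored 9D statistic vectors (all coordinates scaled to [0, 1]).
    Normalization and widths are assumptions for this sketch."""
    num = den = 0.0
    for c, h, s in zip(centers, heights, sigmas):
        d2 = sum((xi - ci) ** 2 for xi, ci in zip(x, c))
        w = math.exp(-d2 / (2.0 * s * s))
        num += w * h
        den += w
    return num / den

# Querying at a stored center with well-separated bumps returns
# (essentially) that bump's threshold value.
centers = [[0.1] * 9, [0.9] * 9]
heights = [0.25, 0.75]
sigmas = [0.05, 0.05]
t = predict_threshold([0.1] * 9, centers, heights, sigmas)
```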
Topology-optimized carpet cloaks based on a level-set boundary expression
NASA Astrophysics Data System (ADS)
Fujii, Garuda; Ueta, Tsuyoshi
2016-10-01
The concept of topology-optimized carpet cloaks is presented using level-set boundary expressions. Specifically, these carpet cloaks are designed with the idea of minimizing the value of an objective functional, which is here defined as the integrated intensity of the difference between the electric field reflected by a flat plane and that controlled by the carpet cloak. Made of dielectric material, our cloaks are designed to imitate reflections from a flat plane, and with some cloaks, the value of the objective functional falls below 0.12 % of that for a bare bump on a flat plane. These optimal carpet cloaks spontaneously satisfy the time-reversal symmetry of the scattered field during the optimization process. The profiles associated with optimal configurations are controlled by adjusting a regularization parameter. With this approach, a variety of configurations with different structural characteristics can be obtained.
Optimal Sensor Selection for Classifying a Set of Ginsengs Using Metal-Oxide Sensors.
Miao, Jiacheng; Zhang, Tinglin; Wang, You; Li, Guang
2015-07-03
The sensor selection problem was investigated for the classification of a set of ginsengs using a homemade metal-oxide-sensor-based electronic nose with linear discriminant analysis. A total of 315 samples of nine kinds of ginseng were measured using 12 sensors. We investigated the classification performance of combinations of the 12 sensors for the overall discrimination of combinations of the nine ginsengs. The minimum number of sensors needed to obtain optimal classification performance was determined for each sample set. The relation between the minimum number of sensors and the number of samples in the sample set was revealed. The results showed that as the number of samples increased, the average minimum number of sensors increased, while the increment decreased gradually, and the average optimal classification rate decreased gradually. Moreover, a new approach to sensor selection was proposed to estimate and compare the effective information capacity of each sensor.
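The exhaustive subset search described above can be sketched as follows; a nearest-centroid classifier stands in for LDA, and the data are synthetic, not the ginseng measurements:

```python
import itertools
import numpy as np

def accuracy(X, y, subset):
    # Nearest-centroid training-set accuracy on the chosen sensor subset
    Xs = X[:, subset]
    cents = np.array([Xs[y == c].mean(axis=0) for c in np.unique(y)])
    pred = np.argmin(((Xs[:, None] - cents[None]) ** 2).sum(-1), axis=1)
    return (pred == y).mean()

rng = np.random.default_rng(2)
n_sensors, n_classes = 5, 3
y = np.repeat(np.arange(n_classes), 20)
X = rng.normal(size=(60, n_sensors))
X[:, 0] += y * 3.0                    # only sensor 0 is informative here

# Prefer higher accuracy, then fewer sensors (the "minimum number" idea)
best = max(
    (s for k in range(1, n_sensors + 1)
     for s in itertools.combinations(range(n_sensors), k)),
    key=lambda s: (accuracy(X, y, list(s)), -len(s)),
)
```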
NASA Astrophysics Data System (ADS)
Maroulis, George
1998-06-01
A large (18s 13p 8d 5f / 12s 7p 3d 2f) basis set consisting of 256 uncontracted Gaussian-type functions is expected to yield values near the Hartree-Fock limit for the static hyperpolarizability of H2O: β_zxx = -9.40, β_zyy = -1.35, β_zzz = -7.71 and β̄ = -11.07 for β_αβγ (in e^3 a_0^3 E_h^-2), and γ_xxxx = 569, γ_yyyy = 1422, γ_zzzz = 907, γ_xxyy = 338, γ_yyzz = 389, γ_zzxx = 287 and γ̄ = 985 for γ_αβγδ (in e^4 a_0^4 E_h^-3) at the experimental equilibrium geometry (with z as the C2 axis and the molecule in the xz plane). The respective electron correlation corrections, obtained with the coupled-cluster method including single, double and perturbatively linked triple excitations and a [9s 6p 6d 3f / 6s 4p 2d 1f] basis set, are β_zxx = -0.45, β_zyy = -4.19, β_zzz = -6.09, β̄ = -6.44 and γ_xxxx = 267, γ_yyyy = 1228, γ_zzzz = 574, γ_xxyy = 295, γ_yyzz = 322, γ_zzxx = 152, γ̄ = 721. For the static limit we propose β̄ = -17.5 ± 0.3 e^3 a_0^3 E_h^-2 and γ̄ = (171 ± 6) × 10^1 e^4 a_0^4 E_h^-3, in near agreement with the experimental findings of β̄ = -19.2 ± 0.9 e^3 a_0^3 E_h^-2 and γ̄ = 1800 ± 150 e^4 a_0^4 E_h^-3 deduced from EFISH measurements at 1064 nm by Kaatz et al. [P. Kaatz, E.A. Donley, D.P. Shelton, J. Chem. Phys. 108 (1998) 849].
Code of Federal Regulations, 2010 CFR
2010-01-01
... REGULATIONS PROCEDURES FOR REIMBURSEMENT OF GENERAL AVIATION OPERATORS AND SERVICE PROVIDERS IN THE WASHINGTON, DC AREA Set-Aside for Operators or Providers at Certain Airports § 331.35 What is the basis...
NASA Astrophysics Data System (ADS)
Fillenwarth, Brian Albert
As large countries such as China begin to industrialize and concerns about global warming continue to grow, there is an increasing need for more environmentally friendly building materials. One promising material known as a geopolymer can be used as a portland cement replacement and in this capacity emits around 67% less carbon dioxide. In addition to potentially reducing carbon emissions, geopolymers can be synthesized with many industrial waste products such as fly ash. Although the benefits of geopolymers are substantial, there are a few difficulties with designing geopolymer mixes which have hindered widespread commercialization of the material. One such difficulty is the high variability of the materials used for their synthesis. In addition to this, interrelationships between mix design variables and how these interrelationships impact the set behavior and compressive strength are not well understood. A third complicating factor with designing geopolymer mixes is that the role of calcium in these systems is not well understood. In order to overcome these barriers, this study developed predictive optimization models through the use of genetic programming with experimentally collected set times and compressive strengths of several geopolymer paste mixes. The developed set behavior models were shown to predict the correct set behavior from the mix design over 85% of the time. The strength optimization model was shown to be capable of predicting compressive strengths of geopolymer pastes from their mix design to within about 1 ksi of their actual strength. In addition to this the optimization models give valuable insight into the key factors influencing strength development as well as the key factors responsible for flash set and long set behaviors in geopolymer pastes. A method for designing geopolymer paste mixes was developed from the generated optimization models. This design method provides an invaluable tool for use in future geopolymer research as well as
Buczek, Aneta; Kupka, Teobald; Broda, Małgorzata A; Żyła, Adriana
2016-01-01
In this work, regular convergence patterns of the structural, harmonic, and VPT2-calculated anharmonic vibrational parameters of ethylene towards the Kohn-Sham complete basis set (KS CBS) limit are demonstrated for the first time. The performance of the VPT2 scheme implemented using density functional theory (DFT-BLYP and DFT-B3LYP) in combination with two Pople basis sets (6-311++G** and 6-311++G(3df,2pd)), the polarization-consistent basis sets pc-n, aug-pc-n, and pcseg-n (n = 0, 1, 2, 3, 4), and the correlation-consistent basis sets cc-pVXZ and aug-cc-pVXZ (X = D, T, Q, 5, 6) was tested. The BLYP-calculated harmonic frequencies were found to be markedly closer than the B3LYP-calculated harmonic frequencies to the experimentally derived values, while the calculated anharmonic frequencies consistently underestimated the observed wavenumbers. The different basis set families gave very similar estimated values for the CBS parameters. The anharmonic frequencies calculated with B3LYP/aug-pc-3 were consistently significantly higher than those obtained with the pc-3 basis set; applying the aug-pcseg-n basis set family alleviated this problem. Utilization of the computationally less expensive B3LYP/aug-pcseg-n basis sets instead of B3LYP/aug-cc-pVXZ is suggested for medium-sized molecules. Harmonic BLYP/pc-2 calculations produced fairly accurate ethylene frequencies.
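A common two-point CBS extrapolation illustrates how a CBS limit is estimated from finite-basis results; the X^-3 form below is one standard choice and not necessarily the fitting form used by the authors:

```python
def cbs_two_point(e_x, x, e_y, y):
    # Assume E(X) = E_CBS + A * X**-3 and solve for E_CBS from two points
    # with cardinal numbers x < y (e.g. X = 3 and 4).
    a = (e_x - e_y) / (x ** -3 - y ** -3)
    return e_x - a * x ** -3

e_cbs_true, a_true = -76.35, 0.5           # made-up model parameters
e_t = e_cbs_true + a_true * 3 ** -3        # X = 3 ("triple-zeta") value
e_q = e_cbs_true + a_true * 4 ** -3        # X = 4 ("quadruple-zeta") value
e_est = cbs_two_point(e_t, 3, e_q, 4)
```

On data generated exactly from the assumed model, the two-point formula recovers the CBS value to machine precision; on real calculated properties, the residual X-dependence sets the extrapolation error.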
Urquhart, B.; Sengupta, M.; Keller, J.
2012-09-01
A multi-objective optimization was performed to allocate 2 MW of PV among four candidate sites on the island of Lanai such that energy was maximized and variability in the form of ramp rates was minimized. This resulted in an optimal solution set that provides a range of geographic allotment alternatives for the fixed PV capacity. Within the optimal set, a tradeoff between energy produced and variability experienced was found, whereby a decrease in variability always necessitates a simultaneous decrease in energy. A design point within the optimal set was selected for study which decreased extreme ramp rates by over 50% while decreasing annual energy generation by only 3% relative to the maximum-generation allocation. To quantify the selected allotment mix, a metric was developed, called the ramp ratio, which compares the ramping magnitude when all capacity is allotted to a single location to the aggregate ramping magnitude in a distributed scenario. The ramp ratio quantifies simultaneously how much smoothing a distributed scenario would experience over single-site allotment and how much a single site is being under-utilized in its ability to reduce aggregate variability. This paper creates a framework for use by cities and municipal utilities to reduce variability impacts while planning for high penetration of PV on the distribution grid.
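One plausible reading of the ramp-ratio metric (an assumed form; the paper's exact definition may differ) compares the summed ramp magnitudes of a single-site allocation with those of the distributed aggregate:

```python
import numpy as np

def ramp_ratio(single_site_power, distributed_powers):
    # Ratio of total single-site ramping to total aggregate ramping;
    # values > 1 indicate geographic smoothing.
    single_ramps = np.abs(np.diff(single_site_power)).sum()
    agg_ramps = np.abs(np.diff(distributed_powers.sum(axis=0))).sum()
    return single_ramps / agg_ramps

t = np.linspace(0, 10, 200)
p1 = 1.0 + 0.5 * np.sin(3 * t)                 # site 1 output (toy)
p2 = 1.0 + 0.5 * np.sin(3 * t + np.pi / 2)     # site 2, partially decorrelated
single = 2.0 * p1                              # all capacity at site 1
r = ramp_ratio(single, np.stack([p1, p2]))
```

With the quarter-period phase shift above, the aggregate fluctuation amplitude shrinks by a factor of roughly sqrt(2), so the ratio comes out near 1.4.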
Density and level set-XFEM schemes for topology optimization of 3-D structures
NASA Astrophysics Data System (ADS)
Villanueva, Carlos H.; Maute, Kurt
2014-07-01
As the capabilities of additive manufacturing techniques increase, topology optimization provides a promising approach to design geometrically sophisticated structures. Traditional topology optimization methods aim at finding conceptual designs, but they often do not resolve sufficiently the geometry and the structural response such that the optimized designs can be directly used for manufacturing. To overcome these limitations, this paper studies the viability of the extended finite element method (XFEM) in combination with the level-set method (LSM) for topology optimization of three dimensional structures. The LSM describes the geometry by defining the nodal level set values via explicit functions of the optimization variables. The structural response is predicted by a generalized version of the XFEM. The LSM-XFEM approach is compared against results from a traditional Solid Isotropic Material with Penalization method for two-phase "solid-void" and "solid-solid" problems. The numerical results demonstrate that the LSM-XFEM approach describes crisply the geometry and predicts the structural response with acceptable accuracy even on coarse meshes.
A new complete basis set model (CBS-QB3) study on the possible intermediates in chemiluminescence
NASA Astrophysics Data System (ADS)
Zhang, Yong; Zeng, Xi-Rui; You, Xiao-Zeng
2000-11-01
The new highly accurate complete basis set model, CBS-QB3, was employed here to elucidate a long-debated problem in a general class of chemiluminescent reactions involving peroxyoxalate systems. Both the stability comparison and the vibrational spectra indicate that the intermediate is best identified as the cyclic singlet 1,2-dioxetanedione with C2v symmetry, which verifies the experimental suggestion and provides additional characterizing information. Two other kinds of minima on its potential energy surface (PES) correspond to two sets of products: (1) two carbon dioxide molecules and (2) two carbon monoxide molecules and one oxygen molecule, and the thermodynamic parameters correctly identify their relative yields in experiment: the former is much more abundant than the latter. In a complete search for minima on the PES, triplet C2v and D2h states were found, which are energetically unfavorable compared with the singlet C2v state. Their vibrational data also support the experimental conclusion ruling out a radical intermediate. In contrast, the singlet D2h state was found to be a transition state connecting the "up" and "down" singlet C2v states. Complete active space self-consistent-field calculations with the second-order Møller-Plesset correlation energy correction also support that the most stable species is the singlet C2v state and that the singlet D2h state is energetically more favorable than its triplet counterpart.
Kitamura, Hikaru
2013-02-13
Photoabsorption cross-sections of simple metals are formulated through a solid-state band theory based on the orthogonalized-plane-wave (OPW) method in Slater's local-exchange approximation, where interband transitions of core and conduction electrons are evaluated up to the soft x-ray regime by using large basis sets. The photoabsorption cross-sections of a sodium crystal are computed for a wide photon energy range from 3 to 1800 eV. It is found that the numerical results reproduce the existing x-ray databases fairly well for energies above the L(2,3)-edge (31 eV), verifying a consistency between solid-state and atomic models for inner-shell photoabsorption; additional oscillatory structures in the present spectra manifest solid-state effects. Our computed results in the vacuum ultraviolet regime (6-30 eV) are also in better agreement with experimental data compared to earlier theories, although some discrepancies remain in the range of 20-30 eV. The influence of the core eigenvalues on the absorption spectra is examined. PMID:23334229
Valeev, Edward F; Janssen, Curtis L
2004-07-15
Ab initio electronic structure approaches in which electron correlation explicitly appears have been the subject of much recent interest. Because these methods accelerate the rate of convergence of the energy and properties with respect to the size of the one-particle basis set, they promise to make accuracies of better than 1 kcal/mol computationally feasible for larger chemical systems than can be treated at present with such accuracy. The linear R12 methods of Kutzelnigg and co-workers are currently the most practical means to include explicit electron correlation. However, the application of such methods to systems of chemical interest faces severe challenges, most importantly, the still steep computational cost of such methods. Here we describe an implementation of the second-order Møller-Plesset method with terms linear in the interelectronic distances (MP2-R12) which has a reduced computational cost due to the use of two basis sets. The use of two basis sets in MP2-R12 theory was first investigated recently by Klopper and Samson and is known as the auxiliary basis set (ABS) approach. One of the basis sets is used to describe the orbitals and another, the auxiliary basis set, is used for approximating matrix elements occurring in the exact MP2-R12 theory. We further extend the applicability of the approach by parallelizing all steps of the integral-direct MP2-R12 energy algorithm. We discuss several variants of the MP2-R12 method in the context of parallel execution and demonstrate that our implementation runs efficiently on a variety of distributed memory machines. Results of preliminary applications indicate that the two-basis (ABS) MP2-R12 approach cannot be used safely when small basis sets (such as augmented double- and triple-zeta correlation consistent basis sets) are utilized in the orbital expansion. Our results suggest that basis set reoptimization or further modifications of the explicitly correlated ansatz and/or standard approximations for
Lisp on a reduced-instruction-set processor: Characterization and optimization
Steenkiste, P.; Hennessy, J.
1988-07-01
When designing a machine for a high-level language (HLL), the authors must decide how to divide the required functionality between hardware and software. The two current approaches use HLL-specific machines and reduced-instruction-set computing (RISC). In determining which approach to use to implement Lisp, the authors faced several questions. To answer these questions, their strategy was to first collect data on what Lisp operations are time-critical using a set of 10 programs. The authors then used the same set of programs to evaluate and compare software optimizations and hardware support for these important operations. They used MIPS-X as a typical example of a RISC processor, but their approach and most of their results are applicable to other architectures and languages. At the end of the article the authors compare the performance of MIPS-X for Lisp with other systems using the Gabriel benchmark set.
NASA Astrophysics Data System (ADS)
Brandbyge, Mads
2014-05-01
In a recent paper, Reuter and Harrison [J. Chem. Phys. 139, 114104 (2013)] question the widely used mean-field electron transport theories that employ nonorthogonal localized basis sets. They claim these can violate an "implicit decoupling assumption," leading to wrong results for the current, different from what would be obtained by using an orthogonal basis and dividing surfaces defined in real space. We argue that this assumption need not be fulfilled to obtain exact results. We show how the current/transmission calculated by the standard Green's function method is independent of whether or not the chosen basis set is nonorthogonal, and that the current for a given basis set is consistent with divisions in real space. The ambiguity known from charge population analysis for nonorthogonal bases does not carry over to calculations of charge flux.
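The basis-independence argument can be checked numerically on a toy model: transforming the Hamiltonian, overlap, and broadening matrices to an arbitrary nonorthogonal basis leaves T(E) = Tr[Γ_L G Γ_R G†] unchanged. The matrices below are random test data, not a physical device:

```python
import numpy as np

rng = np.random.default_rng(6)
n, E = 3, 0.4
H = rng.normal(size=(n, n))
H = (H + H.T) / 2                                    # Hermitian Hamiltonian

def psd():
    # Positive-semidefinite stand-in for a lead broadening matrix
    M = rng.normal(size=(n, n))
    return M @ M.T

GL, GR = psd(), psd()

def transmission(E, H, S, GL, GR):
    # Retarded Green's function with overlap S and level broadening
    G = np.linalg.inv(E * S - H + 0.5j * (GL + GR))
    return np.trace(GL @ G @ GR @ G.conj().T).real

T_orth = transmission(E, H, np.eye(n), GL, GR)

# Change to a nonorthogonal basis: matrices transform as A^T X A,
# and the overlap becomes S = A^T A.
A = rng.normal(size=(n, n)) + np.eye(n)
T_nonorth = transmission(E, A.T @ H @ A, A.T @ A, A.T @ GL @ A, A.T @ GR @ A)
```

The invariance follows because G transforms as A⁻¹ G A⁻†, so the transformation matrices cancel inside the trace.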
Paschoal, Diego; Marcial, Bruna L; Lopes, Juliana Fedoce; De Almeida, Wagner B; Dos Santos, Hélio F
2012-11-01
In this article, we conducted an extensive ab initio study on the importance of the level of theory and the basis set for theoretical predictions of the structure and reactivity of cisplatin [cis-diamminedichloroplatinum(II) (cDDP)]. Initially, the role of the basis set for the Pt atom was assessed using 24 different basis sets, including three all-electron basis sets (ABS). In addition, a modified all-electron double zeta polarized basis set (mDZP) was proposed by adding a set of diffuse d functions onto the existing DZP basis set. The energy barrier and the rate constant for the first chloride/water ligand exchange process, namely, the aquation reaction, were taken as benchmarks for which reliable experimental data are available. At the B3LYP/mDZP/6-31+G(d) level (the first basis set is for Pt and the last set is for all of the light atoms), the energy barrier was 22.8 kcal mol(-1), which is in agreement with the average experimental value, 22.9 ± 0.4 kcal mol(-1). For the other accessible ABS (DZP and ADZP), the corresponding values were 15.4 and 24.5 kcal mol(-1), respectively. The ADZP and mDZP results are notably similar, highlighting the importance of diffuse d functions for the prediction of the kinetic properties of cDDP. In this article, we also analyze the ligand basis set and the level of theory effects by considering 36 basis sets at distinct levels of theory, namely, Hartree-Fock, MP2, and several DFT functionals. From a survey of the data, we recommend the mPW1PW91/mDZP/6-31+G(d) or B3PW91/mDZP/6-31+G(d) levels to describe the structure and reactivity of cDDP and its small derivatives. By contrast, for large molecules containing a cisplatin motif (for example, the cDDP-DNA complex), the lower levels B3LYP/LANL2DZ/6-31+G(d) and B3LYP/SBKJC-VDZ/6-31+G(d) are suggested. At these levels of theory, the predicted energy barrier was 26.0 and 25.9 kcal mol(-1), respectively, which is only 13% higher than the experimental value.
A Sequential Optimization Sampling Method for Metamodels with Radial Basis Functions
Pan, Guang; Ye, Pengcheng; Yang, Zhidong
2014-01-01
Metamodels have been widely used in engineering design to facilitate the analysis and optimization of complex systems that involve computationally expensive simulation programs. The accuracy of metamodels is strongly affected by the sampling method. In this paper, a new sequential optimization sampling method is proposed. Based on the new sampling method, metamodels can be constructed repeatedly through the addition of sampling points, namely, extremum points of the metamodel and minimum points of a density function. Increasingly accurate metamodels can then be constructed by repeating this procedure. The validity and effectiveness of the proposed sampling method are examined by studying typical numerical examples. PMID:25133206
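A stripped-down sketch of sequential infill sampling for an RBF metamodel; the density-function criterion is omitted for brevity, and a one-line quadratic stands in for the expensive simulation:

```python
import numpy as np

def fit_rbf(X, y, sigma=0.3):
    # Gaussian RBF interpolation with a small ridge for numerical stability
    Phi = np.exp(-((X[:, None] - X[None, :]) ** 2) / (2 * sigma ** 2))
    w = np.linalg.solve(Phi + 1e-8 * np.eye(len(X)), y)
    return lambda q: np.exp(-((q[:, None] - X[None, :]) ** 2)
                            / (2 * sigma ** 2)) @ w

def f(x):
    return (x - 0.3) ** 2                  # "expensive simulation" stand-in

X = np.array([0.0, 0.5, 1.0])              # initial samples
grid = np.linspace(0, 1, 101)
for _ in range(5):                         # sequential infill iterations
    model = fit_rbf(X, f(X))
    X = np.append(X, grid[np.argmin(model(grid))])   # add metamodel minimum

best = X[np.argmin(f(X))]
```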
Generated spiral bevel gears: Optimal machine-tool settings and tooth contact analysis
NASA Technical Reports Server (NTRS)
Litvin, F. L.; Tsung, W. J.; Coy, J. J.; Heine, C.
1985-01-01
Geometry and kinematic errors were studied for Gleason generated spiral bevel gears. A new method was devised for choosing optimal machine settings. These settings provide zero kinematic errors and an improved bearing contact. The kinematic errors are a major source of noise and vibration in spiral bevel gears. The improved bearing contact gives improved conditions for lubrication. A computer program for tooth contact analysis was developed, and thereby the new generation process was confirmed. The new process is governed by the requirement that during the generation process there is directional constancy of the common normal of the contacting surfaces for generator and generated surfaces of pinion and gear.
A candidate-set-free algorithm for generating D-optimal split-plot designs
Jones, Bradley; Goos, Peter
2007-01-01
We introduce a new method for generating optimal split-plot designs. These designs are optimal in the sense that they are efficient for estimating the fixed effects of the statistical model that is appropriate given the split-plot design structure. One advantage of the method is that it does not require the prior specification of a candidate set. This makes the production of split-plot designs computationally feasible in situations where the candidate set is too large to be tractable. The method allows for flexible choice of the sample size and supports inclusion of both continuous and categorical factors. The model can be any linear regression model and may include arbitrary polynomial terms in the continuous factors and interaction terms of any order. We demonstrate the usefulness of this flexibility with a 100-run polypropylene experiment involving 11 factors where we found a design that is substantially more efficient than designs that are produced by using other approaches. PMID:21197132
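A toy coordinate-exchange search conveys the flavor of algorithmic D-optimal design construction without a candidate set; the split-plot error structure is omitted, and only a main-effects two-level design is optimized:

```python
import itertools
import numpy as np

def d_crit(D):
    # D-optimality criterion: det(X'X) for intercept + main effects
    X = np.hstack([np.ones((len(D), 1)), D])
    return np.linalg.det(X.T @ X)

rng = np.random.default_rng(3)
n_runs, n_factors = 8, 3
D = rng.choice([-1.0, 1.0], size=(n_runs, n_factors))
d0 = d_crit(D)

improved = True
while improved:                            # coordinate-exchange passes
    improved = False
    base = d_crit(D)
    for i, j in itertools.product(range(n_runs), range(n_factors)):
        D[i, j] *= -1.0                    # try flipping one level
        if d_crit(D) > base + 1e-9:
            base = d_crit(D)
            improved = True
        else:
            D[i, j] *= -1.0                # revert
```

For 8 runs and 3 two-level factors, det(X'X) is bounded above by 8^4 = 4096, attained by an orthogonal design; the exchange loop monotonically improves the criterion toward that bound.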
A jazz-based approach for optimal setting of pressure reducing valves in water distribution networks
NASA Astrophysics Data System (ADS)
De Paola, Francesco; Galdiero, Enzo; Giugni, Maurizio
2016-05-01
This study presents a model for valve setting in water distribution networks (WDNs), with the aim of reducing the level of leakage. The approach is based on the harmony search (HS) optimization algorithm. The HS mimics a jazz improvisation process able to find the best solutions, in this case corresponding to valve settings in a WDN. The model also interfaces with the improved version of a popular hydraulic simulator, EPANET 2.0, to check the hydraulic constraints and to evaluate the performances of the solutions. Penalties are introduced in the objective function in case of violation of the hydraulic constraints. The model is applied to two case studies, and the obtained results in terms of pressure reductions are comparable with those of competitive metaheuristic algorithms (e.g. genetic algorithms). The results demonstrate the suitability of the HS algorithm for water network management and optimization.
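The HS loop described above (harmony memory, memory consideration, pitch adjustment, random improvisation) can be sketched with a toy leakage objective replacing the EPANET evaluation; the network response, bounds, and penalty weights are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)

def leakage_proxy(settings):
    # Stand-in for an EPANET run: leakage grows with excess pressure head,
    # and violating a 25 m minimum service pressure is penalized.
    pressures = 40.0 - settings
    penalty = 100.0 * np.sum(np.maximum(25.0 - pressures, 0.0) ** 2)
    return np.sum(pressures - 25.0) + penalty

n_valves, hm_size, hmcr, par = 3, 10, 0.9, 0.3
memory = rng.uniform(0, 15, size=(hm_size, n_valves))   # harmony memory
scores = np.array([leakage_proxy(h) for h in memory])
for _ in range(500):
    new = np.empty(n_valves)
    for j in range(n_valves):
        if rng.random() < hmcr:                  # memory consideration
            new[j] = memory[rng.integers(hm_size), j]
            if rng.random() < par:               # pitch adjustment
                new[j] = np.clip(new[j] + rng.uniform(-1, 1), 0, 15)
        else:                                    # random improvisation
            new[j] = rng.uniform(0, 15)
    s = leakage_proxy(new)
    worst = int(np.argmax(scores))
    if s < scores[worst]:                        # replace worst harmony
        memory[worst], scores[worst] = new, s

best = memory[int(np.argmin(scores))]
```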
NASA Astrophysics Data System (ADS)
Rastak, Narges; Riipinen, Ilona; Pandis, Spyros
2015-04-01
INTRODUCTION Organic aerosol particles often consist of thousands of compounds with different properties. One of these properties is solubility, which affects the hygroscopic growth and CCN activation of the organic particles. Here we investigate the CCN activation behavior of complex organic aerosols, accounting for the distribution of solubilities present in these mixtures. METHODS We considered a monodisperse population of spherical aerosol particles consisting of an internal mixture of organic compounds. When exposed to water vapor, these particles were assumed to grow, reaching a thermodynamic equilibrium between the water vapor and the particle phase. The composition of the organic and aqueous phases was determined on one hand by the equilibrium between the aqueous phase and water vapor, and on the other hand by the equilibrium of the aqueous phase with the insoluble organic phase. We modelled the mixtures with the help of a solubility basis set (SBS, analogous to the volatility basis set, VBS; Donahue et al. 2006, 2011, 2012), describing the mixture with n surrogate compounds of varying solubility. We varied the range and shape of the solubility distribution and the number of components n in the distribution. We also assumed two different kinds of interactions between the organic compounds in the insoluble phase: 1) an ideal mixture, where organics limit each other's dissolution; and 2) unity activity, where organics behave as pure compounds and do not influence each other's dissolution. Critical supersaturations and the dissolution behavior at the point of CCN activation were calculated using Köhler theory for all organic mixtures (denoted here as the "full model"). The full model predictions were compared with three simplified models: 1) assuming complete dissolution of all compounds; 2) treating the organic mixture solubility with the hygroscopicity parameter κ; and 3) assuming a fixed soluble fraction ɛ for each mixture. RESULTS AND CONCLUSIONS
Klammer, Martin; Dybowski, J Nikolaj; Hoffmann, Daniel; Schaab, Christoph
2015-01-01
Multivariate biomarkers that can predict the effectiveness of targeted therapy in individual patients are highly desired. Previous biomarker discovery studies have largely focused on the identification of single biomarker signatures, aimed at maximizing prediction accuracy. Here, we present a different approach that identifies multiple biomarkers by simultaneously optimizing their predictive power, number of features, and proximity to the drug target in a protein-protein interaction network. To this end, we incorporated NSGA-II, a fast and elitist multi-objective optimization algorithm that is based on the principle of Pareto optimality, into the biomarker discovery workflow. The method was applied to quantitative phosphoproteome data of 19 non-small cell lung cancer (NSCLC) cell lines from a previous biomarker study. The algorithm successfully identified a total of 77 candidate biomarker signatures predicting response to treatment with dasatinib. Through filtering and similarity clustering, this set was trimmed to four final biomarker signatures, which then were validated on an independent set of breast cancer cell lines. All four candidates reached the same good prediction accuracy (83%) as the originally published biomarker. Although the newly discovered signatures were diverse in their composition and in their size, the central protein of the originally published signature - integrin β4 (ITGB4) - was also present in all four Pareto signatures, confirming its pivotal role in predicting dasatinib response in NSCLC cell lines. In summary, the method presented here allows for a robust and simultaneous identification of multiple multivariate biomarkers that are optimized for prediction performance, size, and relevance.
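The Pareto-optimality principle underlying NSGA-II reduces to a dominance test over the objective vectors; a minimal sketch with made-up (error rate, signature size, network distance) triples, all to be minimized:

```python
def dominates(a, b):
    # a dominates b if it is no worse in every objective and strictly
    # better in at least one (all objectives minimized)
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    # Keep the points not dominated by any other point
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Hypothetical candidate signatures: (error rate, n_features, target distance)
cands = [(0.17, 5, 2), (0.17, 8, 1), (0.20, 5, 1), (0.25, 9, 3), (0.17, 5, 1)]
front = pareto_front(cands)
```

NSGA-II builds on this test with non-dominated sorting and crowding distance to evolve a whole front of trade-off solutions rather than a single optimum.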
The route to MBxNyCz molecular wheels: II. Results using accurate functionals and basis sets
NASA Astrophysics Data System (ADS)
Güthler, A.; Mukhopadhyay, S.; Pandey, R.; Boustani, I.
2014-04-01
Applying ab initio quantum chemical methods, molecular wheels composed of metal and light atoms were investigated. High-quality basis sets (6-31G*, TZVP, and cc-pVTZ) as well as exchange and non-local correlation functionals (B3LYP, BP86, and B3P86) were used. The ground-state energies and structures of cyclic planar and pyramidal clusters TiBn (for n = 3-10) were computed. In addition, the relative stabilities and electronic structures of the molecular wheels TiBxNyCz (for x, y, z = 0-10) and MBnC10-n (for n = 2-5 and M = Sc to Zn) were determined. This paper is a follow-up to the previous study of Boustani and Pandey [Solid State Sci. 14 (2012) 1591], in which the calculations were carried out at the HF-SCF/STO-3G/6-31G level of theory to determine initial stabilities and properties. The results show that there is a competition between the 2D planar and 3D pyramidal TiBn clusters (for n = 3-8). Different isomers of TiB10 clusters were also studied, and a structural transition from the 3D isomer to the 2D wheel is presented. Substituting boron in TiB10 with carbon and/or nitrogen atoms enhances the stability and leads toward the most stable wheel, TiB3C7. Furthermore, the computations show that Sc, Ti and V at the center of the molecular wheels are energetically favored over the other first-row transition metal atoms.
Berski, Slawomir; Latajka, Zdzislaw; Gordon, Agnieszka J
2010-11-15
The article focuses on the isomerization of nitrous acid, HONO, to hydrogen nitryl, HNO(2). Density functional (B3LYP) and MP2 methods, with a wide variety of basis sets, were used to investigate the mechanism of this reaction. The results clearly show that there are two possible paths: 1) uncatalyzed isomerization, trans-HONO --> HNO(2), involving a 1,2-hydrogen shift and characterized by a large energy barrier of 49.7-58.9 kcal/mol; 2) a catalyzed double hydrogen transfer process, trans-HONO + cis-HONO --> HNO(2) + cis-HONO, which displays a significantly lower energy barrier in the range of 11.6-18.9 kcal/mol. Topological analysis of the electron localization function (ELF) shows that the hydrogen transfer in both studied reactions takes place through the formation of a 'dressed' proton along the reaction path. Use of a wide variety of basis sets demonstrates a clear basis-set dependence of the ELF topology of HNO(2). Less saturated basis sets yield two lone-pair basins, V(1)(N) and V(2)(N), whereas more saturated ones (for example aug-cc-pVTZ and aug-cc-pVQZ) do not indicate a lone pair on the nitrogen atom. Topological analysis of the electron localizability indicator (ELI-D) at the CASSCF(12,10) level confirms these findings, showing the existence of the lone-pair basins but with populations that decrease as the basis set becomes more saturated (from 0.35e for the cc-pVDZ basis set to 0.06e for aug-cc-pVTZ). This confirms that the choice of basis set can not only influence the value of the electron population at a particular atom but can also lead to a different ELF topology.
Ant colony optimization analysis on overall stability of high arch dam basis of field monitoring.
Lin, Peng; Liu, Xiaoli; Chen, Hong-Xin; Kim, Jinxie
2014-01-01
A dam ant colony optimization (D-ACO) analysis of the overall stability of high arch dams on complicated foundations is presented in this paper. A modified ant colony optimization (ACO) model is proposed for obtaining dam concrete and rock mechanical parameters. A typical dam parameter feedback problem is posed for a nonlinear back-analysis numerical model based on field-monitored deformation and ACO. The basic principle of the proposed model is the establishment of an objective function for optimizing the real concrete and rock mechanical parameters. The feedback analysis is then implemented with a modified ant colony algorithm. The algorithm performance is satisfactory, and its accuracy is verified. The m groups of feedback parameters, used to run a nonlinear FEM code, and the resulting displacement and stress distributions are discussed. A feedback analysis of the deformation of the Lijiaxia arch dam, based on the modified ant colony optimization method, is also conducted. By considering the various material parameters obtained using different analysis methods, comparative analyses were conducted of dam displacements, stress distribution characteristics, and overall dam stability. The comparison results show that the proposed model can effectively solve the feedback of multiple parameters of dam concrete and rock material and essentially satisfies the assessment requirements of geotechnical structural engineering. PMID:25025089
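A toy version of the back-analysis loop sketched above; a one-line formula stands in for the nonlinear FEM solve, and the candidate moduli, evaporation rate, and ant count are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(5)
E_candidates = np.linspace(10.0, 40.0, 31)       # hypothetical moduli, GPa
E_true = 25.0

def fem_displacement(E):
    return 12.0 / E                              # stand-in for the FEM code

monitored_u = fem_displacement(E_true)           # "field monitoring" datum

tau = np.ones_like(E_candidates)                 # pheromone trails
n_ants = 10
for _ in range(100):
    p = 0.7 * tau / tau.sum() + 0.3 / len(tau)   # keep some exploration
    picks = rng.choice(len(E_candidates), size=n_ants, p=p)
    errs = np.abs(fem_displacement(E_candidates[picks]) - monitored_u)
    tau *= 0.9                                   # pheromone evaporation
    ibest = picks[np.argmin(errs)]
    tau[ibest] += 1.0 / (1e-6 + errs.min())      # reward the best ant

E_best = E_candidates[np.argmax(tau)]            # back-analyzed parameter
```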
PMID:25025089
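The back-analysis loop in the record above (search for material parameters so that a forward model reproduces the monitored deformation) can be sketched with a toy ant-colony search. Everything below (the closed-form forward model, the monitored displacement, and the candidate modulus grid) is a hypothetical stand-in for the paper's nonlinear FEM code and field data.

```python
import random

# Hypothetical forward model: displacement (mm) predicted from an elastic
# modulus E (GPa). In the paper this is a nonlinear FEM run; a toy closed
# form stands in here.
def predicted_displacement(E):
    return 120.0 / E

MONITORED = 6.0  # mm, invented stand-in for a field monitoring value

def aco_back_analysis(candidates, n_ants=20, n_iters=50, rho=0.1, seed=0):
    """Pheromone-guided search for the modulus minimizing the misfit."""
    rng = random.Random(seed)
    tau = {E: 1.0 for E in candidates}  # pheromone per candidate value
    best_E, best_err = None, float("inf")
    for _ in range(n_iters):
        for _ in range(n_ants):
            # Roulette-wheel selection proportional to pheromone.
            total = sum(tau.values())
            r, acc = rng.random() * total, 0.0
            for E in candidates:
                acc += tau[E]
                if acc >= r:
                    break
            err = abs(predicted_displacement(E) - MONITORED)
            if err < best_err:
                best_E, best_err = E, err
            tau[E] += 1.0 / (1.0 + err)  # deposit more on better fits
        for E in tau:  # evaporation
            tau[E] *= (1.0 - rho)
    return best_E, best_err

# Grid of candidate moduli from 10 to 40 GPa in 0.5 GPa steps.
E_star, err = aco_back_analysis([E / 2 for E in range(20, 81)])
```

The pheromone update concentrates sampling on candidates whose simulated displacement matches the monitored value, which is the essence of the feedback idea; a real implementation would search several parameters jointly.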
NASA Technical Reports Server (NTRS)
Clancy, Daniel J.; Oezguener, Uemit; Graham, Ronald E.
1994-01-01
The potential for excessive plume impingement loads on Space Station Freedom solar arrays, caused by jet firings from an approaching Space Shuttle, is addressed. An artificial neural network is designed to determine commanded solar array beta gimbal angle for minimum plume loads. The commanded angle would be determined dynamically. The network design proposed involves radial basis functions as activation functions. Design, development, and simulation of this network design are discussed.
Temporal Data Set Reduction Based on D-Optimality for Quantitative FLIM-FRET Imaging
Omer, Travis; Intes, Xavier; Hahn, Juergen
2015-01-01
Fluorescence lifetime imaging (FLIM), when paired with Förster resonance energy transfer (FRET) measurements (FLIM-FRET), enables the monitoring of nanoscale interactions in living biological samples. FLIM-FRET model-based estimation methods allow the quantitative retrieval of parameters such as the quenched (interacting) and unquenched (non-interacting) fractional populations of the donor fluorophore and/or the distance of the interactions. The quantitative accuracy of such model-based approaches depends on multiple factors, such as the signal-to-noise ratio and the number of temporal points acquired when sampling the fluorescence decays. For high-throughput or in vivo applications of FLIM-FRET, it is desirable to acquire a limited number of temporal points for fast acquisition times. Yet, it is critical to acquire temporal data sets with sufficient information content to allow accurate FLIM-FRET parameter estimation. Herein, an optimal experimental design approach based upon sensitivity analysis is presented in order to identify the time points that provide the best quantitative estimates of the parameters for a determined number of temporal sampling points. More specifically, the D-optimality criterion is employed to identify, within a sparse temporal data set, the set of time points leading to optimal estimation of the quenched fractional population of the donor fluorophore. Overall, a reduced set of 10 time points (compared to a typical complete set of 90 time points) was identified to have minimal impact on parameter estimation accuracy (≈5%), as validated by in silico and in vivo experiments. This reduction of the number of needed time points by almost an order of magnitude allows the use of FLIM-FRET for certain high-throughput applications that would be infeasible if the entire set of time sampling points were used. PMID:26658308
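The D-optimality idea in this record (pick the few time points whose parameter sensitivities maximize the determinant of the Fisher information matrix) can be sketched for a bi-exponential FLIM-FRET decay. The lifetimes, quenched fraction, and greedy selection below are illustrative assumptions, not the authors' actual procedure.

```python
import math

TAU_Q, TAU_U, A = 0.5, 2.5, 0.4  # assumed quenched/unquenched lifetimes (ns), fraction

def sensitivities(t):
    # Partial derivatives of I(t) = A*exp(-t/tq) + (1-A)*exp(-t/tu)
    # with respect to the quenched fraction A and the quenched lifetime tq.
    dA = math.exp(-t / TAU_Q) - math.exp(-t / TAU_U)
    dtq = A * t / TAU_Q**2 * math.exp(-t / TAU_Q)
    return (dA, dtq)

def det_fisher(points):
    # Determinant of the 2x2 Fisher information matrix sum_t s(t) s(t)^T.
    m11 = m12 = m22 = 0.0
    for t in points:
        s1, s2 = sensitivities(t)
        m11 += s1 * s1; m12 += s1 * s2; m22 += s2 * s2
    return m11 * m22 - m12 * m12

def d_optimal_subset(grid, k):
    """Greedily pick k time points maximizing the Fisher determinant."""
    chosen = []
    for _ in range(k):
        best = max((t for t in grid if t not in chosen),
                   key=lambda t: det_fisher(chosen + [t]))
        chosen.append(best)
    return sorted(chosen)

grid = [0.1 * i for i in range(1, 91)]  # 90 candidate points over 0.1-9.0 ns
subset = d_optimal_subset(grid, 10)
```

Greedy selection is a common heuristic for D-optimal design; exhaustive or exchange-based searches give the exact optimum at higher cost.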
Code of Federal Regulations, 2010 CFR
2010-10-01
... physician services in a teaching setting. 415.170 Section 415.170 Public Health CENTERS FOR MEDICARE... BY PHYSICIANS IN PROVIDERS, SUPERVISING PHYSICIANS IN TEACHING SETTINGS, AND RESIDENTS IN CERTAIN SETTINGS Physician Services in Teaching Settings § 415.170 Conditions for payment on a fee schedule...
Code of Federal Regulations, 2014 CFR
2014-10-01
... physician services in a teaching setting. 415.170 Section 415.170 Public Health CENTERS FOR MEDICARE...) SERVICES FURNISHED BY PHYSICIANS IN PROVIDERS, SUPERVISING PHYSICIANS IN TEACHING SETTINGS, AND RESIDENTS IN CERTAIN SETTINGS Physician Services in Teaching Settings § 415.170 Conditions for payment on a...
Code of Federal Regulations, 2013 CFR
2013-10-01
... physician services in a teaching setting. 415.170 Section 415.170 Public Health CENTERS FOR MEDICARE...) SERVICES FURNISHED BY PHYSICIANS IN PROVIDERS, SUPERVISING PHYSICIANS IN TEACHING SETTINGS, AND RESIDENTS IN CERTAIN SETTINGS Physician Services in Teaching Settings § 415.170 Conditions for payment on a...
Code of Federal Regulations, 2011 CFR
2011-10-01
... physician services in a teaching setting. 415.170 Section 415.170 Public Health CENTERS FOR MEDICARE... BY PHYSICIANS IN PROVIDERS, SUPERVISING PHYSICIANS IN TEACHING SETTINGS, AND RESIDENTS IN CERTAIN SETTINGS Physician Services in Teaching Settings § 415.170 Conditions for payment on a fee schedule...
Code of Federal Regulations, 2012 CFR
2012-10-01
... physician services in a teaching setting. 415.170 Section 415.170 Public Health CENTERS FOR MEDICARE...) SERVICES FURNISHED BY PHYSICIANS IN PROVIDERS, SUPERVISING PHYSICIANS IN TEACHING SETTINGS, AND RESIDENTS IN CERTAIN SETTINGS Physician Services in Teaching Settings § 415.170 Conditions for payment on a...
Optimal decision making on the basis of evidence represented in spike trains.
Zhang, Jiaxiang; Bogacz, Rafal
2010-05-01
Experimental data indicate that perceptual decision making involves integration of sensory evidence in certain cortical areas. Theoretical studies have proposed that the computation in neural decision circuits approximates statistically optimal decision procedures (e.g., the sequential probability ratio test) that maximize the reward rate in sequential choice tasks. However, these previous studies assumed that the sensory evidence was represented by continuous values from Gaussian distributions with the same variance across alternatives. In this article, we make a more realistic assumption that sensory evidence is represented in spike trains described by Poisson processes, which naturally satisfy the mean-variance relationship observed in sensory neurons. We show that for such a representation, neural circuits involving cortical integrators and the basal ganglia can approximate the optimal decision procedures for two- and multiple-alternative choice tasks.
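The optimal procedure referred to above, the sequential probability ratio test, has a simple closed form for Poisson evidence: after observing n spikes over time t, the log-likelihood ratio between rates lam1 and lam0 is n*log(lam1/lam0) - (lam1 - lam0)*t. A minimal sketch, with made-up rates and a Bernoulli approximation to the Poisson spike counts in small time bins:

```python
import math
import random

def sprt_poisson(rate_true, lam0=10.0, lam1=15.0, dt=0.01, thresh=2.0, seed=1):
    """Accumulate the log-likelihood ratio from spike counts until a bound is hit.

    Returns the chosen hypothesis and the decision time. Rates are spikes/s;
    thresh sets the (symmetric) evidence bounds and hence the error rate.
    """
    rng = random.Random(seed)
    llr, t = 0.0, 0.0
    log_ratio = math.log(lam1 / lam0)
    while abs(llr) < thresh:
        # One time bin: for small dt, a Poisson(rate*dt) count is approximated
        # by a Bernoulli spike with probability rate*dt.
        n = 1 if rng.random() < rate_true * dt else 0
        llr += n * log_ratio - (lam1 - lam0) * dt
        t += dt
    return ("H1" if llr >= thresh else "H0"), t
```

With thresh = 2.0 the Wald bounds put the error probability at roughly e**-2, so most trials end at the correct hypothesis; raising the threshold trades speed for accuracy, the same trade-off the decision circuits are proposed to implement.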
Echenique, Pablo; Alonso, José Luis
2008-07-15
We present an exhaustive study of more than 250 ab initio potential energy surfaces (PESs) of the model dipeptide HCO-L-Ala-NH(2). The model chemistries (MCs) investigated are constructed as homo- and heterolevels involving possibly different RHF and MP2 calculations for the geometry and the energy. The basis sets used belong to a sample of 39 representatives from Pople's split-valence families, ranging from the small 3-21G to the large 6-311++G(2df,2pd). The reference PES to which the rest are compared is the MP2/6-311++G(2df,2pd) homolevel, which, as far as we are aware, is the most accurate PES in the literature. All data sets have been analyzed according to a general framework, which can be extended to other complex problems and which captures the nearness concept in the space of MCs. The great number of MCs evaluated has allowed us to significantly explore this space and show that the correlation between accuracy and computational cost of the methods is imperfect, thus justifying a systematic search for the combination of features in an MC that is optimal for dealing with peptides. Regarding the particular MCs studied, the most important conclusion is that the potentially very cost-saving heterolevel approximation is a very efficient one for describing the whole PES of HCO-L-Ala-NH(2). Finally, we show that, although RHF may be used to calculate the geometry if an MP2 single-point energy calculation follows, pure RHF//RHF homolevels are not recommended for this problem.
NASA Astrophysics Data System (ADS)
Yang, Lili; Wang, Yiming; Dong, Qiaoxue
Quantification of tomato fruit set depends on the level of competition for assimilates in different environments, and this paper presents results on fruit yield and quality (fruit size) in response to the environment, mainly with respect to planting density and light. Experiments were carried out to find the relationship between tomato growth rules and plant densities, and the structural-functional model GREENLAB was developed to simulate them. The results show that increasing plant density results in an increment of biomass production per unit ground area but in a reduction of single-plant fresh weight. To find the rules of the organ sink-source relationship, environmental conditions were introduced into the model during calibration, checking their influence on Q/D over the plant growth period and on the fruit-set ratio. It is found that changing the Q/D ratio in certain critical periods can be used to optimize fruit set and yield of greenhouse tomato.
A PERFECT MATCH CONDITION FOR POINT-SET MATCHING PROBLEMS USING THE OPTIMAL MASS TRANSPORT APPROACH
CHEN, PENGWEN; LIN, CHING-LONG; CHERN, I-LIANG
2013-01-01
We study the performance of optimal mass transport-based methods applied to point-set matching problems. The present study, which is based on the L2 mass transport cost, states that perfect matches always occur when the product of the point-set cardinality and the norm of the curl of the non-rigid deformation field does not exceed some constant. This analytic result is justified by a numerical study of matching two sets of pulmonary vascular tree branch points whose displacement is caused by the lung volume changes in the same human subject. The nearly perfect match performance verifies the effectiveness of this mass transport-based approach. PMID:23687536
NASA Astrophysics Data System (ADS)
Lowe, D.; Topping, D. O.; Archer-Nicholls, S.; Darbyshire, E.; Morgan, W.; Liu, D.; Allan, J. D.; Coe, H.; McFiggans, G.
2015-12-01
The burning of forests in the Amazonia region is a globally significant source of carbonaceous aerosol, containing both absorbing and scattering components [1]. In addition, biomass burning aerosols (BBA) are also efficient cloud condensation nuclei (CCN), modifying cloud properties and influencing atmospheric circulation and precipitation tendencies [2]. The impacts of BBA are highly dependent on their size distribution and composition. A bottom-up emissions inventory, the Brazilian Biomass Burning Emissions Model (3BEM) [3], which utilises satellite products to generate daily fire emission maps, is used. Injection of flaming emissions within the atmospheric column is simulated using both a sub-grid plume-rise parameterisation [4] and simpler schemes, within the Weather Research and Forecasting Model with Chemistry (WRF-Chem, v3.4.1) [5]. Aerosol dynamics are simulated using the sectional MOSAIC scheme [6], incorporating a volatility basis set (VBS) treatment of organic aerosol [7]. For this work we have modified the 9-bin VBS to use the biomass-burning-specific scheme developed by May et al. [8]. The model has been run for September 2012 over South America (at a 25 km resolution). We will present model results evaluating the modelled aerosol vertical distribution, size distribution, and composition against measurements taken by the FAAM BAe-146 research aircraft during the SAMBBA campaign. The main focus will be on investigating the factors controlling the vertical gradient of the organic mass to black carbon ratio of the measured aerosol. This work is supported by the Natural Environment Research Council (NERC) as part of the SAMBBA project under grant NE/J010073/1. [1] D. G. Streets et al., 2004, J. Geophys. Res., 109, D24212. [2] M. O. Andreae et al., 2004, Science, 303, 1337-1342. [3] K. Longo et al., 2010, Atmos. Chem. Phys., 10, 5,785-5,795. [4] S. Freitas et al., 2007, Atmos. Chem. Phys., 7, 3,385-3,398. [5] S. Archer-Nicholls et al., 2015, Geosci. Model Dev., 8
Shrivastava, ManishKumar B.; Fast, Jerome D.; Easter, Richard C.; Gustafson, William I.; Zaveri, Rahul A.; Jimenez, Jose L.; Saide, Pablo; Hodzic, Alma
2011-07-13
The Weather Research and Forecasting model coupled with chemistry (WRF-Chem) is modified to include a volatility basis set (VBS) treatment of secondary organic aerosol formation. The VBS approach, coupled with SAPRC-99 gas-phase chemistry mechanism, is used to model gas-particle partitioning and multiple generations of gas-phase oxidation of organic vapors. In addition to the detailed 9-species VBS, a simplified mechanism using 2 volatility species (2-species VBS) is developed and tested for similarity to the 9-species VBS in terms of both mass and oxygen-to-carbon ratios of organic aerosols in the atmosphere. WRF-Chem results are evaluated against field measurements of organic aerosols collected during the MILAGRO 2006 campaign in the vicinity of Mexico City. The simplified 2-species mechanism reduces the computational cost by a factor of 2 as compared to 9-species VBS. Both ground site and aircraft measurements suggest that the 9-species and 2-species VBS predictions of total organic aerosol mass as well as individual organic aerosol components including primary, secondary, and biomass burning are comparable in magnitude. In addition, oxygen-to-carbon ratio predictions from both approaches agree within 25%, providing evidence that the 2-species VBS is well suited to represent the complex evolution of organic aerosols. Model sensitivity to amount of anthropogenic semi-volatile and intermediate volatility (S/IVOC) precursor emissions is also examined by doubling the default emissions. Both the emission cases significantly under-predict primary organic aerosols in the city center and along aircraft flight transects. Secondary organic aerosols are predicted reasonably well along flight tracks surrounding the city, but are consistently over-predicted downwind of the city. Also, oxygen-to-carbon ratio predictions are significantly improved compared to prior studies by adding 15% oxygen mass per generation of oxidation; however, all modeling cases still under
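The core of any volatility basis set treatment, whether 9-species or 2-species, is absorptive partitioning: the particle-phase fraction of volatility bin i is f_i = 1 / (1 + C*_i / C_OA), solved self-consistently for the total organic aerosol mass C_OA. A minimal 2-bin sketch; the saturation concentrations and loadings are invented for illustration:

```python
# Assumed 2-species VBS: saturation concentrations C* and total (gas + particle)
# loadings, both in ug/m3. These numbers are illustrative only.
CSTAR = [1.0, 100.0]
TOTAL = [3.0, 5.0]

def partition(cstar, total, tol=1e-9):
    """Fixed-point iteration for the organic aerosol mass C_OA, where each bin
    contributes total_i * f_i with f_i = 1 / (1 + C*_i / C_OA)."""
    coa = sum(total)  # start from everything condensed; iteration descends
    while True:
        new = sum(t / (1.0 + c / coa) for t, c in zip(total, cstar))
        if abs(new - coa) < tol:
            return new
        coa = new

coa = partition(CSTAR, TOTAL)
```

The low-volatility bin (small C*) ends up mostly in the particle phase and the high-volatility bin mostly in the gas phase, which is why collapsing many bins into two well-chosen volatility species can reproduce the total mass, as the record reports.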
Time-optimal path planning in dynamic flows using level set equations: theory and schemes
NASA Astrophysics Data System (ADS)
Lolla, Tapovan; Lermusiaux, Pierre F. J.; Ueckermann, Mattheus P.; Haley, Patrick J.
2014-10-01
We develop an accurate partial differential equation-based methodology that predicts the time-optimal paths of autonomous vehicles navigating in any continuous, strong, and dynamic ocean currents, obviating the need for heuristics. The goal is to predict a sequence of steering directions so that vehicles can best utilize or avoid currents to minimize their travel time. Inspired by the level set method, we derive and demonstrate that a modified level set equation governs the time-optimal path in any continuous flow. We show that our algorithm is computationally efficient and apply it to a number of experiments. First, we validate our approach through a simple benchmark application in a Rankine vortex flow for which an analytical solution is available. Next, we apply our methodology to more complex, simulated flow fields such as unsteady double-gyre flows driven by wind stress and flows behind a circular island. These examples show that time-optimal paths for multiple vehicles can be planned even in the presence of complex flows in domains with obstacles. Finally, we present and support through illustrations several remarks that describe specific features of our methodology.
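In one spatial dimension the level-set evolution described above reduces to its characteristic form dx/dt = F + u(x): the reachability front advances at the vehicle speed plus the local current. A toy sketch (the speeds and current are made up) showing how a favorable current extends the front, which is the mechanism the full 2-D/3-D level-set equation captures:

```python
def reachable_front(F, u, T, dt=1e-3):
    """Integrate dx/dt = F + u(x): the farthest point reachable by time T in 1-D.

    F is the vehicle's own speed, u(x) the current at position x. This is the
    1-D characteristic form of the time-optimal level-set evolution.
    """
    x, t = 0.0, 0.0
    while t < T:
        x += (F + u(x)) * dt
        t += dt
    return x

# A steady favorable current u = 0.5 doubles the reach of a vehicle with F = 0.5.
reach_with_current = reachable_front(0.5, lambda x: 0.5, T=1.0)
reach_still_water = reachable_front(0.5, lambda x: 0.0, T=1.0)
```

The paper's contribution is the multi-dimensional Eulerian version of this idea, where the front is tracked implicitly on a grid so that complex, unsteady flows and obstacles are handled without heuristics.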
NASA Astrophysics Data System (ADS)
Oby, Emily R.; Perel, Sagi; Sadtler, Patrick T.; Ruff, Douglas A.; Mischel, Jessica L.; Montez, David F.; Cohen, Marlene R.; Batista, Aaron P.; Chase, Steven M.
2016-06-01
Objective. A traditional goal of neural recording with extracellular electrodes is to isolate action potential waveforms of an individual neuron. Recently, in brain–computer interfaces (BCIs), it has been recognized that threshold crossing events of the voltage waveform also convey rich information. To date, the threshold for detecting threshold crossings has been selected to preserve single-neuron isolation. However, the optimal threshold for single-neuron identification is not necessarily the optimal threshold for information extraction. Here we introduce a procedure to determine the best threshold for extracting information from extracellular recordings. We apply this procedure in two distinct contexts: the encoding of kinematic parameters from neural activity in primary motor cortex (M1), and visual stimulus parameters from neural activity in primary visual cortex (V1). Approach. We record extracellularly from multi-electrode arrays implanted in M1 or V1 in monkeys. Then, we systematically sweep the voltage detection threshold and quantify the information conveyed by the corresponding threshold crossings. Main Results. The optimal threshold depends on the desired information. In M1, velocity is optimally encoded at higher thresholds than speed; in both cases the optimal thresholds are lower than are typically used in BCI applications. In V1, information about the orientation of a visual stimulus is optimally encoded at higher thresholds than is visual contrast. A conceptual model explains these results as a consequence of cortical topography. Significance. How neural signals are processed impacts the information that can be extracted from them. Both the type and quality of information contained in threshold crossings depend on the threshold setting. There is more information available in these signals than is typically extracted. Adjusting the detection threshold to the parameter of interest in a BCI context should improve our ability to decode motor intent
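The threshold sweep described above can be mimicked with a toy encoding model: two stimulus conditions produce spike amplitudes drawn from unit-variance Gaussians with different means (the means below are invented), a threshold crossing occurs when the amplitude exceeds the detection threshold, and the best threshold is the one maximizing the mutual information between condition and crossing.

```python
import math

def phi_tail(z):
    """P(Z > z) for a standard normal, via the complementary error function."""
    return 0.5 * math.erfc(z / math.sqrt(2))

def bernoulli_mi(p0, p1):
    """Mutual information (bits) between a fair binary condition and a
    crossing event with per-condition crossing probabilities p0, p1."""
    p = 0.5 * (p0 + p1)
    def h(q):
        return 0.0 if q in (0.0, 1.0) else -q * math.log2(q) - (1 - q) * math.log2(1 - q)
    return h(p) - 0.5 * h(p0) - 0.5 * h(p1)

# Assumed mean spike amplitudes (in units of noise SD) under the two stimuli.
MU0, MU1 = 3.0, 5.0

def best_threshold(grid):
    """Sweep the detection threshold and keep the most informative one."""
    return max(grid, key=lambda th: bernoulli_mi(phi_tail(th - MU0),
                                                 phi_tail(th - MU1)))

theta = best_threshold([0.1 * i for i in range(0, 80)])
```

In this symmetric toy case the optimum sits between the two amplitude means; the record's point is that the real optimum depends on which parameter is being decoded, so the threshold should be tuned per application rather than fixed for spike sorting.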
NASA Technical Reports Server (NTRS)
Seldner, K.
1977-01-01
An algorithm was developed to optimally control the traffic signals at each intersection using a discrete-time traffic model applicable to heavy or peak traffic. Off-line optimization procedures were applied to compute the cycle splits required to minimize the lengths of the vehicle queues and the delay at each intersection. The method was applied to an extensive traffic network in Toledo, Ohio. Results obtained with the derived optimal settings are compared with the control settings presently in use.
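The cycle-split optimization can be illustrated with a discrete-time store-and-forward queue model for a single two-phase intersection. The arrival rates, saturation flow, and cycle length below are invented, and lost time between phases is ignored; the real algorithm optimized a whole network.

```python
def total_delay(green_frac, arr_ns, arr_ew, sat=1.0, cycle=60, horizon=600):
    """Discrete-time queue model: vehicles arrive every step, and each approach
    discharges at the saturation rate only during its share of the cycle.
    Returns accumulated vehicle-steps of delay over the horizon."""
    q_ns = q_ew = delay = 0.0
    g = green_frac * cycle
    for t in range(horizon):
        q_ns += arr_ns
        q_ew += arr_ew
        if (t % cycle) < g:        # north-south green
            q_ns = max(0.0, q_ns - sat)
        else:                      # east-west green
            q_ew = max(0.0, q_ew - sat)
        delay += q_ns + q_ew
    return delay

# Sweep candidate splits; the heavier north-south demand should earn the
# larger green share, and splits below its demand ratio are unstable.
splits = [i / 20 for i in range(1, 20)]
best = min(splits, key=lambda s: total_delay(s, arr_ns=0.6, arr_ew=0.3))
```

Splits that starve either approach let its queue grow without bound, so the delay-minimizing split lands in the narrow stable band set by the two demand ratios, which is the behavior the off-line optimization exploits.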
Mazziotti, David A
2007-05-14
Two-electron reduced density matrices (2-RDMs) have recently been directly determined from the solution of the anti-Hermitian contracted Schrödinger equation (ACSE) to obtain 95%-100% of the ground-state correlation energy of atoms and molecules, which significantly improves upon the accuracy of the contracted Schrödinger equation (CSE) [D. A. Mazziotti, Phys. Rev. Lett. 97, 143002 (2006)]. Two subsets of the CSE, the ACSE and the contraction of the CSE onto the one-particle space, known as the 1,3-CSE, have two important properties: (i) dependence upon only the 3-RDM and (ii) inclusion of all second-order terms when the 3-RDM is reconstructed as only a first-order functional of the 2-RDM. The error in the 1,3-CSE has an important role as a stopping criterion in solving the ACSE for the 2-RDM. Using a computationally more efficient implementation of the ACSE, the author treats a variety of molecules, including H2O, NH3, HCN, and HO3-, in larger basis sets such as correlation-consistent polarized double- and triple-zeta. The ground-state energy of neon is also calculated in a polarized quadruple-zeta basis set with extrapolation to the complete basis-set limit, and the equilibrium bond length and harmonic frequency of N2 are computed with comparison to experimental values. The author observes that increasing the basis set enhances the ability of the ACSE to capture correlation effects in ground-state energies and properties. In the triple-zeta basis set, for example, the ACSE yields energies and properties that are closer in accuracy to coupled cluster with single, double, and triple excitations than to coupled cluster with single and double excitations. In all basis sets, the computed 2-RDMs very closely satisfy known N-representability conditions.
NASA Astrophysics Data System (ADS)
Ai, Yuewei; Shao, Xinyu; Jiang, Ping; Li, Peigen; Liu, Yang; Yue, Chen
2015-11-01
The welded joints of dissimilar materials have been widely used in automotive, ship and space industries. The joint quality is often evaluated by weld seam geometry, microstructures and mechanical properties. To obtain the desired weld seam geometry and improve the quality of welded joints, this paper proposes a process modeling and parameter optimization method to obtain the weld seam with minimum width and desired depth of penetration for laser butt welding of dissimilar materials. During the process, Taguchi experiments are conducted on the laser welding of the low carbon steel (Q235) and stainless steel (SUS301L-HT). The experimental results are used to develop the radial basis function neural network model, and the process parameters are optimized by genetic algorithm. The proposed method is validated by a confirmation experiment. Simultaneously, the microstructures and mechanical properties of the weld seam generated from optimal process parameters are further studied by optical microscopy and tensile strength test. Compared with the unoptimized weld seam, the welding defects are eliminated in the optimized weld seam and the mechanical properties are improved. The results show that the proposed method is effective and reliable for improving the quality of welded joints in practical production.
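The model-then-optimize pattern in this record can be sketched with a normalized Gaussian-RBF surrogate fitted to a handful of invented (laser power, welding speed) to seam-width samples, with a coarse grid search standing in for the genetic algorithm. None of the numbers below come from the paper.

```python
import math

# Hypothetical Taguchi-style samples: (power kW, speed m/min) -> seam width mm.
SAMPLES = [((2.0, 1.0), 1.8), ((2.0, 2.0), 1.2), ((3.0, 1.0), 2.4),
           ((3.0, 2.0), 1.6), ((2.5, 1.5), 1.7)]

def rbf_predict(x, width=0.8):
    """Normalized Gaussian-RBF surrogate (Nadaraya-Watson form), a simple
    stand-in for the paper's trained RBF neural network."""
    num = den = 0.0
    for (c, y) in SAMPLES:
        w = math.exp(-sum((a - b) ** 2 for a, b in zip(x, c)) / (2 * width ** 2))
        num += w * y
        den += w
    return num / den

# Coarse grid search stands in for the genetic algorithm: minimize the
# predicted seam width over the feasible (power, speed) box.
grid = [(2.0 + 0.1 * i, 1.0 + 0.1 * j) for i in range(11) for j in range(11)]
best = min(grid, key=rbf_predict)
```

A real RBF network solves for interpolation weights and a GA searches the continuous space, but the pipeline is the same: cheap surrogate predictions replace expensive welding experiments during optimization, with the optimum confirmed experimentally afterwards.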
Optimized tumor cryptic peptides: the basis for universal neo-antigen-like tumor vaccines.
Menez-Jamet, Jeanne; Gallou, Catherine; Rougeot, Aude; Kosmatopoulos, Kostas
2016-07-01
The very impressive clinical results recently obtained in cancer patients treated with immune response checkpoint inhibitors have boosted interest in immunotherapy as a therapeutic choice in cancer treatment. However, these inhibitors require a pre-existing tumor-specific immune response and the presence of tumor-infiltrating T cells to be efficient. This immune response can be triggered by cancer vaccines. One of the main issues in tumor vaccination is the choice of the right antigen to target. All vaccines tested to date targeted tumor associated antigens (TAA), which are self-antigens, and failed to show clinical efficacy because of immune self-tolerance to TAA. A new class of tumor antigens has recently been described, the neo-antigens, which are created by point mutations of tumor-expressed proteins and are recognized by the immune system as non-self. Neo-antigens exhibit two main properties: they are not involved in the immune self-tolerance process and they are immunogenic. However, the majority of neo-antigens are patient specific, and their use as cancer vaccines requires their prior identification in each patient individually, which can be done only in highly specialized research centers. It is therefore evident that neo-antigens cannot be used for patient vaccination worldwide. This raises the question of whether we can find neo-antigen-like vaccines that are not patient specific. In this review we show that optimized cryptic peptides from TAA are neo-antigen-like peptides. Optimized cryptic peptides are recognized by the immune system as non-self because they target self cryptic peptides that escape self-tolerance; in addition, they are strongly immunogenic because their sequence is modified in order to enhance their affinity for the HLA molecule. The first vaccine based on the optimized cryptic peptide approach, Vx-001, which targets the widely expressed tumor antigen telomerase reverse transcriptase (TERT), has completed a large phase I clinical
On the optimal identification of tag sets in time-constrained RFID configurations.
Vales-Alonso, Javier; Bueno-Delgado, María Victoria; Egea-López, Esteban; Alcaraz, Juan José; Pérez-Mañogil, Juan Manuel
2011-01-01
In Radio Frequency Identification facilities the identification delay of a set of tags is mainly caused by the random access nature of the reading protocol, yielding a random identification time of the set of tags. In this paper, the cumulative distribution function of the identification time is evaluated using a discrete time Markov chain for single-set time-constrained passive RFID systems, namely those ones where a single group of tags is assumed to be in the reading area and only for a bounded time (sojourn time) before leaving. In these scenarios some tags in a set may leave the reader coverage area unidentified. The probability of this event is obtained from the cumulative distribution function of the identification time as a function of the sojourn time. This result provides a suitable criterion to minimize the probability of losing tags. Besides, an identification strategy based on splitting the set of tags in smaller subsets is also considered. Results demonstrate that there are optimal splitting configurations that reduce the overall identification time while keeping the same probability of losing tags.
Teodoro, Tiago Quevedo; Haiduke, Roberto Luiz Andrade
2013-10-15
Accurate relativistic adapted Gaussian basis sets (RAGBSs) for atoms from 87Fr up to 118Uuo, without variational prolapse, were developed here with the use of a polynomial version of the Generator Coordinate Dirac-Fock method. Two finite nuclear models have been used, the Gaussian and uniform-sphere models. The largest RAGBS error, with respect to numerical Dirac-Fock results, is 15.4 millihartree for ununoctium with a basis-set size of 33s30p19d14f functions. PMID:23913741
Reuter, Matthew G; Harrison, Robert J
2014-05-01
The thesis of Brandbyge's comment [J. Chem. Phys. 140, 177103 (2014)] is that our operator decoupling condition is immaterial to transport theories, and it appeals to discussions of nonorthogonal basis sets in transport calculations in its arguments. We maintain that the operator condition is to be preferred over the usual matrix conditions and subsequently detail problems in the existing approaches. From this operator perspective, we conclude that nonorthogonal projectors cannot be used and that the projectors must be selected to satisfy the operator decoupling condition. Because these conclusions pertain to operators, the choice of basis set is not germane.
Klammer, Martin; Dybowski, J. Nikolaj; Hoffmann, Daniel; Schaab, Christoph
2015-01-01
Multivariate biomarkers that can predict the effectiveness of targeted therapy in individual patients are highly desired. Previous biomarker discovery studies have largely focused on the identification of single biomarker signatures, aimed at maximizing prediction accuracy. Here, we present a different approach that identifies multiple biomarkers by simultaneously optimizing their predictive power, number of features, and proximity to the drug target in a protein-protein interaction network. To this end, we incorporated NSGA-II, a fast and elitist multi-objective optimization algorithm that is based on the principle of Pareto optimality, into the biomarker discovery workflow. The method was applied to quantitative phosphoproteome data of 19 non-small cell lung cancer (NSCLC) cell lines from a previous biomarker study. The algorithm successfully identified a total of 77 candidate biomarker signatures predicting response to treatment with dasatinib. Through filtering and similarity clustering, this set was trimmed to four final biomarker signatures, which then were validated on an independent set of breast cancer cell lines. All four candidates reached the same good prediction accuracy (83%) as the originally published biomarker. Although the newly discovered signatures were diverse in their composition and in their size, the central protein of the originally published signature — integrin β4 (ITGB4) — was also present in all four Pareto signatures, confirming its pivotal role in predicting dasatinib response in NSCLC cell lines. In summary, the method presented here allows for a robust and simultaneous identification of multiple multivariate biomarkers that are optimized for prediction performance, size, and relevance. PMID:26083411
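The Pareto-optimality principle underlying NSGA-II can be sketched with a plain non-dominated filter over the three objectives named above (prediction error, signature size, network distance to the drug target); the candidate tuples below are made-up illustrations, not data from the study:

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and strictly better in
    at least one (all objectives are minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# toy signatures: (1 - accuracy, number of features, network distance to target)
candidates = [(0.17, 5, 2), (0.17, 8, 1), (0.20, 3, 3), (0.25, 2, 1), (0.30, 10, 4)]
front = pareto_front(candidates)
```

Only the last candidate is dominated (another signature beats it on every objective), so the front retains several diverse trade-offs rather than a single "best" signature.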
NASA Technical Reports Server (NTRS)
Decker, Arthur J. (Inventor)
2006-01-01
An artificial neural network is disclosed that processes holography-generated characteristic patterns of vibrating structures along with finite-element models. The present invention provides a folding operation for conditioning training sets to optimally train feed-forward neural networks to process characteristic fringe patterns. The folding operation increases the sensitivity of the feed-forward network for detecting changes in the characteristic pattern. The folding routine manipulates input pixels so that they are scaled according to their location in an intensity range rather than their position in the characteristic pattern.
Automatic classification of Polish fricatives on the basis of optimization of the parameter space
NASA Astrophysics Data System (ADS)
Domagala, Piotr; Richter, Lutoslawa
1994-08-01
The subject of this article is a study of the possibility of automatic classification and recognition of fricatives on the basis of a certain linear combination of values of an autocorrelation function. Automatic classification of fricative consonants constituted one phase of a study which involves the classification of all Polish phones for the purpose of enabling automatic speech recognition regardless of the phonetic context or the speaker. The study was conducted using phonetic material consisting of 166 nonsense syllables which included fricatives and had CVCV and VCCV structures. There were a total of about 20 contexts for each consonant. Separate classifications were performed for 5 female voices and 5 male voices and then both groups of voices were classified together. The three series of classifications had success rates of 60%, 69%, and 60% respectively. These results were about 10% better than the results obtained using classical discrimination analysis (CSS:Statistica 3.1 software). The use of cluster analysis and multidimensional scaling yielded information on the relative probabilities of the acoustic patterns of these phones in reference to perception tests.
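A sketch of the feature-extraction step, assuming the classification operates on a weighted sum of normalized autocorrelation values; the weights below are placeholders, since the article's actual linear combination is not reproduced here:

```python
import math

def autocorr(signal, max_lag):
    """Normalized autocorrelation r[k] of a discrete signal for lags k = 1..max_lag."""
    n = len(signal)
    mean = sum(signal) / n
    c0 = sum((x - mean) ** 2 for x in signal)
    return [sum((signal[t] - mean) * (signal[t + k] - mean)
                for t in range(n - k)) / c0
            for k in range(1, max_lag + 1)]

def feature(signal, weights):
    """Linear combination of autocorrelation values used as a classification feature."""
    r = autocorr(signal, len(weights))
    return sum(w * v for w, v in zip(weights, r))

# a periodic test signal: the autocorrelation peaks at the period (lag 8)
sig = [math.sin(2 * math.pi * t / 8) for t in range(64)]
r = autocorr(sig, 8)
```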
Topology optimized design of carpet cloaks based on a level set approach
NASA Astrophysics Data System (ADS)
Fujii, Garuda; Nakamura, Masayuki
2015-09-01
This paper presents topology optimized designs of carpet cloaks made of dielectrics modeled by a level set boundary expression. The objective functional, evaluating the performance of the carpet cloaks, is defined as the integrated intensity of the difference between electric field reflected by the flat plane and that controlled by a carpet cloak covering a bump. The dielectric structures of carpet cloak are designed to minimize the objective functional value and, in some cases, the value reach 0.34% of that when a bare bump exists. Dielectric structures of carpet cloaks are expressed by level set functions given on grid points. The function becomes positive in dielectrics, negative in air and zero on air-dielectric interfaces and express air-dielectric interfaces explicitly.
NASA Astrophysics Data System (ADS)
Wigdahl, J.; Agurto, C.; Murray, V.; Barriga, S.; Soliz, P.
2013-03-01
Diabetic retinopathy (DR) affects more than 4.4 million Americans age 40 and over. Automatic screening for DR has been shown to be an efficient and cost-effective way to lower the burden on the healthcare system, by triaging diabetic patients and ensuring timely care for those presenting with DR. Several supervised algorithms have been developed to detect pathologies related to DR, but little work has been done in determining the size of the training set that optimizes an algorithm's performance. In this paper we analyze the effect of the training sample size on the performance of a top-down DR screening algorithm for different types of statistical classifiers. Results are based on partial least squares (PLS), support vector machines (SVM), k-nearest neighbor (kNN), and Naïve Bayes classifiers. Our dataset consisted of digital retinal images collected from a total of 745 cases (595 controls, 150 with DR). We varied the number of normal controls in the training set, while keeping the number of DR samples constant, and repeated the procedure 10 times using randomized training sets to avoid bias. Results show increasing performance in terms of area under the ROC curve (AUC) as the number of DR subjects in the training set increased, with similar trends for each of the classifiers. Of these, PLS and kNN had the highest average AUC. Lower standard deviation and a flattening of the AUC curve give evidence that there is a limit to the learning ability of the classifiers and an optimal number of cases to train on.
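The experimental design (varying one class in the training set, 10 randomized training sets per size) can be mimicked on synthetic 1-D data; the nearest-centroid scorer and the Gaussian class distributions below are stand-ins, not the paper's screening algorithm:

```python
import random

def auc(pos_scores, neg_scores):
    """Area under the ROC curve via its rank-statistic (Mann-Whitney) form."""
    wins = sum((p > n) + 0.5 * (p == n) for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

def centroid_score(train_pos, train_neg, x):
    """Score is positive when x lies closer to the positive-class centroid."""
    cp = sum(train_pos) / len(train_pos)
    cn = sum(train_neg) / len(train_neg)
    return abs(x - cn) - abs(x - cp)

rng = random.Random(0)
cases = [rng.gauss(1.0, 1.0) for _ in range(150)]      # stand-in for DR images
controls = [rng.gauss(-1.0, 1.0) for _ in range(595)]  # stand-in for controls
mean_auc = {}
for n_controls in (50, 150, 300):   # vary controls, keep positive samples constant
    runs = []
    for _ in range(10):             # 10 randomized training sets, as in the paper
        tp = rng.sample(cases, 100)
        tn = rng.sample(controls, n_controls)
        runs.append(auc([centroid_score(tp, tn, x) for x in cases if x not in tp],
                        [centroid_score(tp, tn, x) for x in controls if x not in tn]))
    mean_auc[n_controls] = sum(runs) / len(runs)
```

Plotting `mean_auc` against training size is how one would look for the flattening of the AUC curve the authors describe.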
Cybulski, Hubert; Baranowska-Łączkowska, Angelika; Henriksen, Christian; Fernández, Berta
2014-11-01
By evaluating a representative set of CCSD(T) ground state interaction energies for van der Waals dimers formed by aromatic molecules and the argon atom, we test the performance of the polarized basis sets of Sadlej et al. (J. Comput. Chem. 2005, 26, 145; Collect. Czech. Chem. Commun. 1988, 53, 1995) and the augmented polarization-consistent bases of Jensen (J. Chem. Phys. 2002, 117, 9234) in providing accurate intermolecular potentials for the benzene-, naphthalene-, and anthracene-argon complexes. The basis sets are extended by addition of midbond functions. As reference we consider CCSD(T) results obtained with Dunning's bases. For the benzene complex a systematic basis set study resulted in the selection of the (Z)Pol-33211 and the aug-pc-1-33321 bases to obtain the intermolecular potential energy surface. The interaction energy values and the shape of the CCSD(T)/(Z)Pol-33211 calculated potential are very close to the best available CCSD(T)/aug-cc-pVTZ-33211 potential with the former basis set being considerably smaller. The corresponding differences for the CCSD(T)/aug-pc-1-33321 potential are larger. In the case of the naphthalene-argon complex, following a similar study, we selected the (Z)Pol-3322 and aug-pc-1-333221 bases. The potentials show four symmetric absolute minima with energies of -483.2 cm(-1) for the (Z)Pol-3322 and -486.7 cm(-1) for the aug-pc-1-333221 basis set. To further check the performance of the selected basis sets, we evaluate intermolecular bound states of the complexes. The differences between calculated vibrational levels using the CCSD(T)/(Z)Pol-33211 and CCSD(T)/aug-cc-pVTZ-33211 benzene-argon potentials are small and for the lowest energy levels do not exceed 0.70 cm(-1). Such differences are substantially larger for the CCSD(T)/aug-pc-1-33321 calculated potential. For naphthalene-argon, bound state calculations demonstrate that the (Z)Pol-3322 and aug-pc-1-333221 potentials are of similar quality. The results show that these
Optimal power settings of aluminum gallium arsenide lasers in caries inhibition — An in vitro study
Sharma, Sonali; Hegde, Mithra N; Sadananda, Vandana; Mathews, Blessen
2016-01-01
Context: Incipient carious lesions are characterized by subsurface dissolution due to the higher fluoride-ion content in the outer 50-100 microns of the tooth surface. Aims: To determine an optimal power setting of an 810 nm aluminum gallium arsenide laser for caries inhibition. Materials and Methods: Fifty-four caries-free extracted teeth were sectioned mesiodistally. The samples were divided into 18 groups, one for each power setting evaluated, with six samples per group. The laser used was an 810 nm aluminum gallium arsenide laser with power settings from 0.1 watts to 5 watts. A laser-fluorescence-based device was used to evaluate the effect of irradiation. Statistical Analysis Used: Paired "t" test, one-way analysis of variance (ANOVA), Tukey's post hoc test, and the Pearson's correlation test. Results: The paired t-test showed minimum divergence from the control at 3.5 watts. Tukey's post hoc test also showed statistically significant results for 3.5 watts. The Pearson's correlation test showed a negative correlation between the wattage and the irradiation effect. Conclusions: The power setting that gave statistically significant results was 3.5 watts. PMID:27099427
Wang, Wei; Slepčev, Dejan; Basu, Saurav; Ozolek, John A.
2012-01-01
Transportation-based metrics have long been applied to compare images, especially where one can interpret the pixel intensities (or derived quantities) as a distribution of 'mass' that can be transported without strict geometric constraints. Here we describe a new transportation-based framework for analyzing sets of images. More specifically, we describe a new transportation-related distance between pairs of images, which we denote as linear optimal transportation (LOT). The LOT can be used directly on pixel intensities, and is based on a linearized version of the Kantorovich-Wasserstein metric (an optimal transportation distance, as is the earth mover's distance). The new framework is especially well suited for efficiently computing all pairwise distances for a large database of images, and thus it can be used for pattern recognition in sets of images. In addition, the new LOT framework also allows for an isometric linear embedding, greatly facilitating the ability to visualize discriminant information in different classes of images. We demonstrate the application of the framework to several tasks such as discriminating nuclear chromatin patterns in cancer cells, decoding differences in facial expressions, galaxy morphologies, as well as subcellular protein distributions. PMID:23729991
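In one dimension the Kantorovich-Wasserstein (earth mover's) distance that LOT linearizes has a closed form: the L1 distance between the two cumulative distributions. A minimal sketch for same-grid histograms (not the authors' LOT embedding itself):

```python
def emd_1d(p, q):
    """Wasserstein-1 distance between two 1-D histograms on a common unit-spaced
    grid, computed as the L1 distance between their normalized CDFs."""
    sp, sq = float(sum(p)), float(sum(q))
    cdf_p = cdf_q = dist = 0.0
    for a, b in zip(p, q):
        cdf_p += a / sp
        cdf_q += b / sq
        dist += abs(cdf_p - cdf_q)
    return dist
```

Moving one unit of mass across two bins costs 2, matching the transport interpretation of the metric.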
Alam, T.M.
1998-09-01
The influence of changes in the contracted Gaussian basis set used for ab initio calculations of nuclear magnetic resonance (NMR) phosphorus chemical shift anisotropy (CSA) tensors was investigated. The isotropic chemical shift and chemical shift anisotropy were found to converge with increasing complexity of the basis set at the Hartree-Fock (HF) level. The addition of d polarization functions on the phosphorus nuclei was found to have a major impact on the calculated chemical shift, but the effect diminished with an increasing number of polarization functions. At least two d polarization functions are required for accurate calculations of the isotropic phosphorus chemical shift. The introduction of density functional theory (DFT) techniques, through the use of hybrid B3LYP methods, for the calculation of the phosphorus chemical shift tensor resulted in a poorer estimation of the NMR values, even though DFT techniques yield improved energy and force-constant calculations. The convergence of the NMR parameters with increasing basis set complexity was also observed for the DFT calculations, but produced results with consistently large deviations from experiment. The use of a HF 6-311++G(2d,2p) basis set represents a good compromise between accuracy of the simulation and the complexity of the calculation for future ab initio calculations of 31P NMR parameters in larger complexes.
Friese, Daniel H.; Törk, Lisa; Hättig, Christof
2014-11-21
We present scaling factors for vibrational frequencies calculated within the harmonic approximation and the correlated wave-function methods coupled cluster singles and doubles model (CC2) and Møller-Plesset perturbation theory (MP2), with and without spin-component scaling (SCS) or spin-opposite scaling (SOS). Frequency scaling factors and the remaining deviations from the reference data are evaluated for several non-augmented basis sets of the cc-pVXZ family of generally contracted correlation-consistent basis sets as well as for the segmented contracted TZVPP basis. We find that the SCS and SOS variants of CC2 and MP2 lead to slightly better accuracy for the scaled vibrational frequencies. The determined frequency scaling factors can also be used for vibrational frequencies calculated for excited states through response theory with CC2 and the algebraic diagrammatic construction through second order, as well as their spin-component-scaled variants.
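A frequency scaling factor of this kind is conventionally the least-squares value c minimizing the summed squared deviation between scaled calculated and reference frequencies; a minimal sketch (the frequency values below are made up, not the paper's reference data):

```python
def scaling_factor(calc, ref):
    """Least-squares scaling factor c minimizing sum_i (c*calc_i - ref_i)**2,
    which gives c = sum(calc_i * ref_i) / sum(calc_i**2)."""
    return sum(c * r for c, r in zip(calc, ref)) / sum(c * c for c in calc)

def rms_residual(calc, ref, c):
    """Root-mean-square deviation remaining after scaling."""
    n = len(calc)
    return (sum((c * x - r) ** 2 for x, r in zip(calc, ref)) / n) ** 0.5

harmonic = [3100.0, 1650.0, 1200.0]   # hypothetical calculated frequencies, cm^-1
observed = [2950.0, 1600.0, 1180.0]   # hypothetical reference frequencies, cm^-1
c = scaling_factor(harmonic, observed)
```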
Optimizing baryon acoustic oscillation surveys - II. Curvature, redshifts and external data sets
NASA Astrophysics Data System (ADS)
Parkinson, David; Kunz, Martin; Liddle, Andrew R.; Bassett, Bruce A.; Nichol, Robert C.; Vardanyan, Mihran
2010-02-01
We extend our study of the optimization of large baryon acoustic oscillation (BAO) surveys to return the best constraints on the dark energy, building on Paper I of this series by Parkinson et al. The survey galaxies are assumed to be pre-selected active, star-forming galaxies observed by their line emission with a constant number density across the redshift bin. Star-forming galaxies have a redshift desert in the region 1.6 < z < 2, and so this redshift range was excluded from the analysis. We use the Seo & Eisenstein fitting formula for the accuracies of the BAO measurements, using only the information for the oscillatory part of the power spectrum as distance and expansion rate rulers. We go beyond our earlier analysis by examining the effect of including curvature on the optimal survey configuration and updating the expected `prior' constraints from Planck and the Sloan Digital Sky Survey. We once again find that the optimal survey strategy involves minimizing the exposure time and maximizing the survey area (within the instrumental constraints), and that all time should be spent observing in the low-redshift range (z < 1.6) rather than beyond the redshift desert, z > 2. We find that, when assuming a flat universe, the optimal survey makes measurements in the redshift range 0.1 < z < 0.7, but that including curvature as a nuisance parameter requires us to push the maximum redshift to 1.35, to remove the degeneracy between curvature and evolving dark energy. The inclusion of expected other data sets (such as WiggleZ, the Baryon Oscillation Spectroscopic Survey and a stage III Type Ia supernova survey) removes the necessity of measurements below redshift 0.9, and pushes the maximum redshift up to 1.5. We discuss considerations in determining the best survey strategy in light of uncertainty in the true underlying cosmological model.
Two-stage fan. 4: Performance data for stator setting angle optimization
NASA Technical Reports Server (NTRS)
Burger, G. D.; Keenan, M. J.
1975-01-01
Stator setting angle optimization tests were conducted on a two-stage fan to improve efficiency at overspeed, stall margin at design speed, and both efficiency and stall margin at part speed. The fan has a design pressure ratio of 2.8, a flow rate of 184.2 lb/sec (83.55 kg/sec), and a 1st-stage rotor tip speed of 1450 ft/sec (441.96 m/sec). Performance was obtained at 70, 100, and 105 percent of design speed with different combinations of 1st-stage and 2nd-stage stator settings. One combination of settings, other than design, was common to all three speeds. At design speed, a 2.0 percentage point increase in stall margin was obtained at the expense of a 1.3 percentage point efficiency decrease. At 105 percent speed, efficiency was improved by 1.8 percentage points but stall margin decreased 4.7 percentage points. At 70 percent speed, no change in stall margin or operating line efficiency was obtained with stator resets, although considerable speed-flow regulation occurred.
Albrecht, S; Busch, J; Kloppenburg, M; Metze, F; Tavan, P
2000-12-01
By adding reverse connections from the output layer to the central layer, it is shown how a generalized radial basis functions (GRBF) network can self-organize to form a Bayesian classifier that is also capable of novelty detection. For this purpose, three stochastic sequential learning rules are introduced from biological considerations, which pertain to the centers, the shapes, and the widths of the receptive fields of the neurons and allow a joint optimization of all network parameters. The rules are shown to generate maximum-likelihood estimates of the class-conditional probability density functions of labeled data in terms of multivariate normal mixtures. Upon combination with a hierarchy of deterministic annealing procedures, which implement a multiple-scale approach, the learning process can avoid the convergence problems hampering conventional expectation-maximization algorithms. Using an example from the field of speech recognition, the stages of the learning process and the capabilities of the self-organizing GRBF classifier are illustrated.
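The flavor of the sequential center rule can be sketched with a soft-competitive update in one dimension, where each presented sample pulls every center in proportion to its normalized Gaussian activation; this is a simplified stand-in for the paper's three rules (widths and shapes are held fixed here):

```python
import math, random

def responsibilities(x, centers, width):
    """Normalized Gaussian activations: a soft assignment of x to the RBF centers."""
    a = [math.exp(-((x - c) ** 2) / (2.0 * width ** 2)) for c in centers]
    s = sum(a)
    return [v / s for v in a]

def train_centers(data, centers, width, lr=0.05, epochs=50, seed=0):
    """Stochastic sequential update: each presented sample moves every center
    toward itself in proportion to that center's responsibility."""
    rng = random.Random(seed)
    centers = list(centers)
    for _ in range(epochs):
        for x in rng.sample(data, len(data)):   # random presentation order
            r = responsibilities(x, centers, width)
            for k in range(len(centers)):
                centers[k] += lr * r[k] * (x - centers[k])
    return centers

# two well-separated 1-D clusters; the centers should migrate toward them
rng = random.Random(1)
data = ([rng.gauss(-2.0, 0.3) for _ in range(20)] +
        [rng.gauss(2.0, 0.3) for _ in range(20)])
centers = train_centers(data, [-0.5, 0.5], width=1.0)
```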
Synthetic enzyme mixtures for biomass deconstruction: production and optimization of a core set.
Banerjee, Goutami; Car, Suzana; Scott-Craig, John S; Borrusch, Melissa S; Aslam, Nighat; Walton, Jonathan D
2010-08-01
The high cost of enzymes is a major bottleneck preventing the development of an economically viable lignocellulosic ethanol industry. Commercial enzyme cocktails for the conversion of plant biomass to fermentable sugars are complex mixtures containing more than 80 proteins of suboptimal activities and relative proportions. As a step toward the development of a more efficient enzyme cocktail for biomass conversion, we have developed a platform, called GENPLAT, that uses robotic liquid handling and statistically valid experimental design to analyze synthetic enzyme mixtures. Commercial enzymes (Accellerase 1000 +/- Multifect Xylanase, and Spezyme CP +/- Novozyme 188) were used to test the system and serve as comparative benchmarks. Using ammonia-fiber expansion (AFEX) pretreated corn stover ground to 0.5 mm and a glucan loading of 0.2%, an enzyme loading of 15 mg protein/g glucan, and 48 h digestion at 50 degrees C, commercial enzymes released 53% and 41% of the available glucose and xylose, respectively. Mixtures of three, five, and six pure enzymes of Trichoderma species, expressed in Pichia pastoris, were systematically optimized. Statistical models were developed for the optimization of glucose alone, xylose alone, and the average of glucose + xylose for two digestion durations, 24 and 48 h. The resulting models were statistically significant (P < 0.0001) and indicated an optimum composition for glucose release (values for optimized xylose release are in parentheses) of 29% (5%) cellobiohydrolase 1, 5% (14%) cellobiohydrolase 2, 25% (25%) endo-beta1,4-glucanase 1, 14% (5%) beta-glucosidase, 22% (34%) endo-beta1,4-xylanase 3, and 5% (17%) beta-xylosidase in 48 h at a protein loading of 15 mg/g glucan. Comparison of two AFEX-treated corn stover preparations ground to different particle sizes indicated that particle size (100 vs. 500 microm) makes a large difference in total digestibility. The assay platform and the optimized "core" set together provide a starting
NASA Technical Reports Server (NTRS)
Briand, Lionel C.; Basili, Victor R.; Hetmanski, Christopher J.
1993-01-01
Applying equal testing and verification effort to all parts of a software system is not very efficient, especially when resources are limited and scheduling is tight. Therefore, one needs to be able to differentiate low/high fault frequency components so that testing/verification effort can be concentrated where needed. Such a strategy is expected to detect more faults and thus improve the resulting reliability of the overall system. This paper presents the Optimized Set Reduction approach for constructing such models, intended to fulfill specific software engineering needs. Our approach to classification is to measure the software system and build multivariate stochastic models for predicting high risk system components. We present experimental results obtained by classifying Ada components into two classes: is or is not likely to generate faults during system and acceptance test. Also, we evaluate the accuracy of the model and the insights it provides into the error making process.
NASA Technical Reports Server (NTRS)
Coppin, Pol R.; Bauer, Marvin E.
1992-01-01
Procedures that were developed to optimize the information content of multitemporal thematic mapper (TM) data sets for forest cover disturbance monitoring in Minnesota are described. TM imagery from three different years was calibrated to exoatmospheric reflectance. An atmospheric correction routine was applied combining two major components, atmospheric normalization over time and transformation to ground reflectance. Atmospheric conditions were modeled over time using regression functions derived from five ground features known to be unchanged over the time interval of interest and spanning the entire image reflectance range. The correlation between digital data and the forest cover was subsequently maximized and irrelevant information content was reduced by converting the band-specific reflectances into seven vegetation indices that were assumed to carry unique information. The application of two change detection algorithms to these seven indices ultimately resulted in 14 change features for each time interval of interest. Results show that the preprocessing sequence is vital to forest cover monitoring methodology.
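The index-conversion and change-detection steps can be sketched for one index; NDVI and simple image differencing are used here as generic examples, not necessarily among the seven indices or the two change detection algorithms the study employed:

```python
def ndvi(nir, red):
    """Normalized difference vegetation index from paired NIR/red reflectances."""
    return [(n - r) / (n + r) if (n + r) else 0.0 for n, r in zip(nir, red)]

def difference_feature(index_t1, index_t2):
    """Simple image-differencing change feature between two acquisition dates."""
    return [b - a for a, b in zip(index_t1, index_t2)]

# toy two-pixel scene: the second pixel loses vegetation between dates
t1 = ndvi(nir=[0.50, 0.60], red=[0.10, 0.08])
t2 = ndvi(nir=[0.50, 0.25], red=[0.10, 0.20])
change = difference_feature(t1, t2)
```

A strongly negative change value flags a pixel as a candidate forest-cover disturbance.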
Optimize Flue Gas Settings to Promote Microalgae Growth in Photobioreactors via Computer Simulations
He, Lian; Chen, Amelia B; Yu, Yi; Kucera, Leah; Tang, Yinjie
2013-01-01
Flue gas from power plants can promote algal cultivation and reduce greenhouse gas emissions(1). Microalgae not only capture solar energy more efficiently than plants(3), but also synthesize advanced biofuels(2-4). Generally, atmospheric CO2 is not a sufficient source for supporting maximal algal growth(5). On the other hand, the high concentrations of CO2 in industrial exhaust gases have adverse effects on algal physiology. Consequently, both cultivation conditions (such as nutrients and light) and the control of the flue gas flow into the photo-bioreactors are important to develop an efficient "flue gas to algae" system. Researchers have proposed different photobioreactor configurations(4,6) and cultivation strategies(7,8) with flue gas. Here, we present a protocol that demonstrates how to use models to predict the microalgal growth in response to flue gas settings. We perform both experimental illustration and model simulations to determine the favorable conditions for algal growth with flue gas. We develop a Monod-based model coupled with mass transfer and light intensity equations to simulate the microalgal growth in a homogenous photo-bioreactor. The model simulation compares algal growth and flue gas consumptions under different flue-gas settings. The model illustrates: 1) how algal growth is influenced by different volumetric mass transfer coefficients of CO2; 2) how we can find optimal CO2 concentration for algal growth via the dynamic optimization approach (DOA); 3) how we can design a rectangular on-off flue gas pulse to promote algal biomass growth and to reduce the usage of flue gas. On the experimental side, we present a protocol for growing Chlorella under the flue gas (generated by natural gas combustion). The experimental results qualitatively validate the model predictions that the high frequency flue gas pulses can significantly improve algal cultivation. PMID:24121788
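The Monod-with-mass-transfer model described can be sketched as a forward-Euler simulation; all parameter values, the yield constant, and the single-limiting-substrate form below are illustrative assumptions, not the protocol's fitted values:

```python
def simulate_growth(mu_max, Ks, kLa, c_star, hours, dt=0.01, x0=0.05, c0=0.0):
    """Forward-Euler integration of a toy Monod growth model coupled to
    gas-liquid CO2 transfer:
        dc/dt = kLa*(c_star - c) - Y*mu(c)*x,   dx/dt = mu(c)*x,
    with mu(c) = mu_max*c/(Ks + c) and an assumed consumption yield Y = 1."""
    Y = 1.0
    x, c = x0, c0
    for _ in range(int(hours / dt)):
        mu = mu_max * c / (Ks + c)
        c = max(c + (kLa * (c_star - c) - Y * mu * x) * dt, 0.0)
        x += mu * x * dt
    return x, c

# 24 h of growth with fast mass transfer: dissolved CO2 tracks its saturation value
biomass, co2 = simulate_growth(mu_max=0.1, Ks=0.2, kLa=5.0, c_star=1.0, hours=24.0)
```

Pulsed flue-gas feeding of the kind the protocol tests could be modeled by making `c_star` a square wave in time.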
NASA Astrophysics Data System (ADS)
Yao, Qiang; Takahashi, Keita; Fujii, Toshiaki
2013-03-01
In recent years, ray space (called light field in other literature) photography has gained great popularity in the areas of computer vision and image processing, and efficient acquisition of a ray space is of great significance in practical applications. In order to handle the huge-data problem in the acquisition process, in this paper we propose a method for compressively sampling and reconstructing a ray space. In our method, a weighting matrix that reflects the amplitude structure of non-zero coefficients in the 2D-DCT domain is designed and generated using statistics from an available data set. The weighting matrix is integrated into ℓ1-norm optimization to reconstruct the ray space, and we name this method statistically-weighted ℓ1-norm optimization. Experimental results show that the proposed method achieves better reconstruction at both low (0.1 of the original sampling rate) and high (0.5 of the original sampling rate) subsampling rates. In addition, the reconstruction time is reduced by 25% compared to plain ℓ1-norm optimization.
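Weighted ℓ1 reconstruction of this kind can be sketched with an iterative shrinkage-thresholding (ISTA) loop in which each coefficient gets its own threshold proportional to its weight; the tiny identity-matrix example is only a sanity check, not a ray-space problem:

```python
def soft(v, t):
    """Soft-thresholding, the proximal operator of t*|.|."""
    return max(v - t, 0.0) if v > 0 else min(v + t, 0.0)

def weighted_ista(y, A, w, lam=0.1, step=0.1, iters=500):
    """Minimize 0.5*||A x - y||^2 + lam * sum_j w[j]*|x[j]| by iterative
    shrinkage-thresholding with per-coefficient weights w: small weights let
    statistically likely coefficients survive, large weights suppress the rest."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        r = [sum(A[i][j] * x[j] for j in range(n)) - y[i] for i in range(m)]
        for j in range(n):
            g = sum(A[i][j] * r[i] for i in range(m))   # gradient of the data term
            x[j] = soft(x[j] - step * g, step * lam * w[j])
    return x

# sanity check on an identity system: the lightly weighted coefficient survives
x = weighted_ista(y=[1.0, 0.0], A=[[1.0, 0.0], [0.0, 1.0]], w=[1.0, 10.0])
```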
Southall, Stacey M; Wong, Poon-Sheng; Odho, Zain; Roe, S Mark; Wilson, Jon R
2009-01-30
The mixed-lineage leukemia protein MLL1 is a transcriptional regulator with an essential role in early development and hematopoiesis. The biological function of MLL1 is mediated by the histone H3K4 methyltransferase activity of the carboxyl-terminal SET domain. We have determined the crystal structure of the MLL1 SET domain in complex with cofactor product AdoHcy and a histone H3 peptide. This structure indicates that, in order to form a well-ordered active site, a highly variable but essential component of the SET domain must be repositioned. To test this idea, we compared the effect of the addition of MLL complex members on methyltransferase activity and show that both RbBP5 and Ash2L but not Wdr5 stimulate activity. Additionally, we have determined the effect of posttranslational modifications on histone H3 residues downstream and upstream from the target lysine and provide a structural explanation for why H3T3 phosphorylation and H3K9 acetylation regulate activity. PMID:19187761
Optimal allocation of the limited oral cholera vaccine supply between endemic and epidemic settings
Moore, Sean M.; Lessler, Justin
2015-01-01
The World Health Organization (WHO) recently established a global stockpile of oral cholera vaccine (OCV) to be preferentially used in epidemic response (reactive campaigns) with any vaccine remaining after 1 year allocated to endemic settings. Hence, the number of cholera cases or deaths prevented in an endemic setting represents the minimum utility of these doses, and the optimal risk-averse response to any reactive vaccination request (i.e. the minimax strategy) is one that allocates the remaining doses between the requested epidemic response and endemic use in order to ensure that at least this minimum utility is achieved. Using mathematical models, we find that the best minimax strategy is to allocate the majority of doses to reactive campaigns, unless the request came late in the targeted epidemic. As vaccine supplies dwindle, the case for reactive use of the remaining doses grows stronger. Our analysis provides a lower bound for the amount of OCV to keep in reserve when responding to any request. These results provide a strategic context for the fulfilment of requests to the stockpile, and define allocation strategies that minimize the number of OCV doses that are allocated to suboptimal situations. PMID:26423441
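The minimax allocation idea can be sketched as a grid search over the epidemic share that guarantees the worst-case total utility across uncertain epidemic scenarios; the saturating utility curves and the linear endemic benefit below are invented for illustration, not fitted to the paper's models:

```python
import math

def minimax_allocation(total_doses, epidemic_utilities, endemic_rate, steps=100):
    """Return the epidemic allocation a maximizing the worst-case utility
    min_s [ U_s(a) + endemic_rate * (total_doses - a) ] over scenarios U_s."""
    best_value, best_a = float("-inf"), 0.0
    for k in range(steps + 1):
        a = total_doses * k / steps
        value = (min(u(a) for u in epidemic_utilities)
                 + endemic_rate * (total_doses - a))
        if value > best_value:
            best_value, best_a = value, a
    return best_a

# two uncertain epidemic scenarios with diminishing returns; endemic use is linear
scenarios = [lambda a: 30.0 * (1.0 - math.exp(-a / 20.0)),
             lambda a: 50.0 * (1.0 - math.exp(-a / 40.0))]
a_star = minimax_allocation(100.0, scenarios, endemic_rate=0.2)
```

When the per-dose endemic benefit exceeds the steepest epidemic marginal return, the same search keeps everything in reserve, mirroring the paper's lower bound on doses to hold back.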
Nonlocal games and optimal steering at the boundary of the quantum set
NASA Astrophysics Data System (ADS)
Zhen, Yi-Zheng; Goh, Koon Tong; Zheng, Yu-Lin; Cao, Wen-Fei; Wu, Xingyao; Chen, Kai; Scarani, Valerio
2016-08-01
The boundary between classical and quantum correlations is well characterized by linear constraints called Bell inequalities. It is much harder to characterize the boundary of the quantum set itself in the space of no-signaling correlations. For the points on the quantum boundary that violate maximally some Bell inequalities, J. Oppenheim and S. Wehner [Science 330, 1072 (2010), 10.1126/science.1192065] pointed out a complex property: Alice's optimal measurements steer Bob's local state to the eigenstate of an effective operator corresponding to its maximal eigenvalue. This effective operator is the linear combination of Bob's local operators induced by the coefficients of the Bell inequality, and it can be interpreted as defining a fine-grained uncertainty relation. It is natural to ask whether the same property holds for other points on the quantum boundary, using the Bell expression that defines the tangent hyperplane at each point. We prove that this is indeed the case for a large set of points, including some that were believed to provide counterexamples. The price to pay is to acknowledge that the Oppenheim-Wehner criterion does not respect equivalence under the no-signaling constraint: for each point, one has to look for specific forms of writing the Bell expressions.
NASA Astrophysics Data System (ADS)
Mikhalev, A. S.; Rouban, A. I.
2016-04-01
Algorithms are constructed for the global minimization of non-differentiable functions over sets of mixed variables: continuous variables and discrete variables with unordered, non-numeric possible values. The optimization method is based on selective averaging of the search variables, on adaptive resizing of the admissible domain of trial movements, and on the use of relative values of the minimized functions. The presence of discrete variables leads to solving a sequence of global minimization problems over the continuous variables alone, in the presence of either (1) inequality constraints specific to each problem, or (2) inequality constraints common to all problems (i.e., when the constraint functions do not depend on the discrete variables). In the first case, the discrete variables with unordered non-numeric values give rise to a sequence of global minimization problems for multi-extremal functions over the continuous variables subject to their inequality constraints, and the best of the resulting optima is selected. In the second case, all the minimized functions are convolved at each sampling point into a single multi-extremal function, which is then minimized over the continuous variables.
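The selective-averaging idea can be sketched as follows (a simplified illustration with assumed weighting and shrink parameters, not the authors' exact algorithm): trial points are drawn in an admissible box, weighted by their relative function values so that low values dominate, and the box is recentred on the weighted average and contracted.

```python
import random

def selective_average_min(f, lo, hi, n_trials=200, shrink=0.9, iters=60, power=4, seed=1):
    """Global minimization by selective averaging (simplified sketch):
    weight trial points by relative function value, recentre the
    admissible box on their weighted average, and shrink the box."""
    rng = random.Random(seed)
    center = [(a + b) / 2 for a, b in zip(lo, hi)]
    half = [(b - a) / 2 for a, b in zip(lo, hi)]
    for _ in range(iters):
        pts = [[c + rng.uniform(-h, h) for c, h in zip(center, half)]
               for _ in range(n_trials)]
        vals = [f(p) for p in pts]
        fmin, fmax = min(vals), max(vals)
        if fmax == fmin:  # flat sample: nothing to average
            break
        # relative values in [0, 1]; raising to a power sharpens selectivity
        w = [((fmax - v) / (fmax - fmin)) ** power for v in vals]
        s = sum(w)
        center = [sum(wi * p[d] for wi, p in zip(w, pts)) / s
                  for d in range(len(center))]
        half = [h * shrink for h in half]
    return center, f(center)
```

Because only function values (not gradients) are used, the scheme applies to non-differentiable objectives; inequality constraints could be handled, as in the abstract, by restricting or penalizing trial points.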
LISP on a reduced-instruction-set processor: characterization and optimization
Steenkiste, P.A.
1987-01-01
As a result of advances in compiler technology, almost all programs are written in high-level languages, and the effectiveness of a computer architecture is determined by its suitability as a compiler target. This central role of compilers in the use of computers has led computer architects to study the implementation of high-level language programs. This thesis presents measurements for a set of Portable Standard LISP programs that were executed on a reduced-instruction-set processor (MIPS-X), examining what instructions LISP uses at the assembly level, and how much time is spent on the most-common primitive LISP operations. This information makes it possible to determine which operations are time-critical and to evaluate how well architectural features address these operations. Based on these data, three areas for optimization are proposed: the implementation of the tags used for run-time type checking, reducing the cost of procedure calls, and interprocedural register allocation. A number of methods to implement tags, both with and without hardware support, are presented, and the performance of the different implementation strategies is compared.
Sure, Rebecca; Brandenburg, Jan Gerit; Grimme, Stefan
2016-04-01
In quantum chemical computations the combination of Hartree-Fock or a density functional theory (DFT) approximation with relatively small atomic orbital basis sets of double-zeta quality is still widely used, for example, in the popular B3LYP/6-31G* approach. In this Review, we critically analyze the two main sources of error in such computations, that is, the basis set superposition error on the one hand and the missing London dispersion interactions on the other. We review various strategies to correct those errors and present exemplary calculations on mainly noncovalently bound systems of widely varying size. Energies and geometries of small dimers, large supramolecular complexes, and molecular crystals are covered. We conclude that it is not justified to rely on fortunate error compensation, as the main inconsistencies can be cured by modern correction schemes which clearly outperform the plain mean-field methods. PMID:27308221
Kobus, J.; Moncrieff, D.; Wilson, S.
2000-12-01
A comparison is made of the accuracy with which the electric moments μ, Θ, Ω, and Φ can be calculated by using the finite basis set approach (the algebraic approximation) and the finite-difference method in calculations employing the Hartree-Fock model for the ground states of 16 diatomic molecules at their experimental equilibrium geometries. Specifically, the 2ⁿ-pole moments, n = 1, 2, 3, 4, for the N₂, CO, BF, CN⁻, NO⁺, BeF, BO, CN, N₂⁺, AlF, GaF, InF, TlF, MgF, CaF, and SrF molecules are determined using basis sets and grids that have been employed in previous studies of the Hartree-Fock energy.
NASA Astrophysics Data System (ADS)
Baranowska-Łączkowska, Angelika; Fernández, Berta
2015-11-01
Interaction-induced electric dipole moment, polarisability and first hyperpolarisability are investigated in model hydrogen-bonded clusters built of hydrogen fluoride molecules organised in three linear chains parallel to each other. The properties are evaluated within the finite field approach, using the second order Møller-Plesset method, and the LPol-m (m = ds, dl) and the optical rotation prediction (ORP) basis sets. These bases and correlation method are selected after a systematic basis set and correlation method convergence study carried out on the smallest of the complexes and taking properties obtained with Dunning's bases and the coupled cluster singles and doubles (CCSD) and the CCSD including connected triple corrections (CCSD(T)) methods as reference. Results are analysed in terms of many-body and cooperative effects.
Vastine, Benjamin Alan; Webster, Charles Edwin; Hall, Michael B
2007-11-01
The reaction mechanism for the cycle beginning with the reductive elimination (RE) of methane from κ³-TpPt(IV)(CH₃)₂H (1) (Tp = hydridotris(pyrazolyl)borate) and the subsequent oxidative addition (OA) of benzene to finally form κ³-TpPt(IV)(Ph)₂H (19) was investigated by density functional theory (DFT). Two mechanistic steps are of particular interest, namely the barrier to C-H coupling (barrier 1, Ba1) and the barrier to methane release (barrier 2, Ba2). For 31 density functionals, the calculated values for Ba1 and Ba2 were benchmarked against the experimentally reported values of 26 (Ba1) and 35 (Ba2) kcal·mol⁻¹, respectively. Specifically, the values for Ba1 and Ba2, calculated at the B3LYP/double-ζ plus polarization level of theory, are 24.6 and 34.3 kcal·mol⁻¹, respectively. Overall, the best-performing functional was BPW91, for which the mean absolute error associated with the calculated values of the two barriers is 0.68 kcal·mol⁻¹. The calculated B3LYP values of Ba1 ranged between 20 and 26 kcal·mol⁻¹ for 12 effective core potential basis sets for platinum and 29 all-electron basis sets for the first-row elements. Polarization functions for the first-row elements were important for accurate values, but the addition of diffuse functions to non-hydrogen (+) and hydrogen atoms (++) had little effect on the calculated values. Basis set saturation was achieved with APNO basis sets utilized for first-row atoms. Bader's "Atoms in Molecules" was used to analyze the electron density of several complexes, and the electron density at the Pt-N_ax bond critical point (trans to the active site for C-H coupling) varied over a wider range than any of the other Pt-N bonds.
Cheng, Lan; Stanton, John F.; Gauss, Jürgen
2015-06-14
A systematic relativistic coupled-cluster study is reported on the harmonic vibrational frequencies of the Oₕ, C₃ᵥ, and C₂ᵥ conformers of XeF₆, with scalar-relativistic effects efficiently treated using the spin-free exact two-component theory in its one-electron variant (SFX2C-1e). Atomic natural orbital type basis sets recontracted for the SFX2C-1e scheme have been shown to provide rapid basis-set convergence for the vibrational frequencies. SFX2C-1e as well as complementary pseudopotential-based computations consistently predict that both the Oₕ and C₃ᵥ structures are local minima on the potential energy surface, while the C₂ᵥ structure is a transition state. Qualitative disagreement between the present results for the Oₕ structure and those from CCSD(T)-F12b calculations [Peterson et al., J. Phys. Chem. A 116, 9777 (2012)], which yielded a triply degenerate imaginary frequency for the Oₕ structure, is attributed here to the high sensitivity of the computed harmonic frequencies of the t₁ᵤ bending modes to the basis-set effects of triples contributions.
NASA Astrophysics Data System (ADS)
Kobus, J.; Moncrieff, D.; Wilson, S.
2001-12-01
A comparison is made of the accuracy with which the electric dipole polarizability α_zz and hyperpolarizability β_zzz can be calculated by using the finite basis set approach (the algebraic approximation) and the finite-difference method in calculations employing the Hartree-Fock model. The numerical and algebraic methods were tested on the ground states of the H₂, LiH, BH and FH molecules at their respective experimental equilibrium geometries. For the FH molecule at its experimental equilibrium geometry, a sequence of distributed universal even-tempered basis sets has been used to explore the convergence pattern of the total energy, dipole moment and polarizabilities. The comparison of finite-difference and finite basis set methods is extended to geometries for which the nuclear separation, R_FH, lies in the range 1.5-2.2 bohr. The methods give consistent results to within 1% or better. In the case of the FH molecule, the dependence of the truncation errors of the total energy, dipole moment and polarizabilities on the geometry has been studied and is shown to be negligible.
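The finite-difference route to these properties can be sketched with generic central-difference stencils at zero field (the usual convention E(F) = E0 - mu*F - (1/2)*alpha*F^2 - (1/6)*beta*F^3 - ...; the model energy in the test is made up for illustration):

```python
def finite_field_properties(energy, f=0.001):
    """Dipole moment, polarizability, and first hyperpolarizability along
    one axis from central finite differences of the field-dependent
    energy E(F), using E(F) = E0 - mu*F - alpha*F**2/2 - beta*F**3/6 - ..."""
    e0 = energy(0.0)
    ep, em = energy(f), energy(-f)
    e2p, e2m = energy(2 * f), energy(-2 * f)
    mu = -(ep - em) / (2 * f)                              # -dE/dF
    alpha = -(ep - 2 * e0 + em) / f ** 2                   # -d2E/dF2
    beta = -(e2p - 2 * ep + 2 * em - e2m) / (2 * f ** 3)   # -d3E/dF3
    return mu, alpha, beta
```

The field strength must balance truncation error (too large f) against numerical cancellation in the energy differences (too small f), which is why such comparisons against fully numerical finite-difference Hartree-Fock results are informative.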
Srinivasan, Sriram Goverapet; Goldman, Nir; Tamblyn, Isaac; Hamel, Sebastien; Gaus, Michael
2014-07-24
We present a new DFTB-p3b density functional tight binding model for hydrogen at extremely high pressures and temperatures, which includes a polarizable basis set (p) and a three-body environmentally dependent repulsive potential (3b). We find that use of an extended basis set is necessary under dissociated liquid conditions to account for the substantial p-orbital character of the electronic states around the Fermi energy. The repulsive energy is determined through comparison to cold curve pressures computed from density functional theory (DFT) for the hexagonal close-packed solid, as well as pressures from thermally equilibrated DFT-MD simulations of the liquid phase. In particular, we observe improved agreement in our DFTB-p3b model with previous theoretical and experimental results for the shock Hugoniot of hydrogen up to 100 GPa and 25000 K, compared to a standard DFTB model using pairwise interactions and an s-orbital basis set only. The DFTB-p3b approach discussed here provides a general method to extend the DFTB method for a wide variety of materials over a significantly larger range of thermodynamic conditions than previously possible. PMID:24960065
Chitwood, Daniel H.; Kumar, Ravi; Headland, Lauren R.; Ranjan, Aashish; Covington, Michael F.; Ichihashi, Yasunori; Fulop, Daniel; Jiménez-Gómez, José M.; Peng, Jie; Maloof, Julin N.; Sinha, Neelima R.
2013-01-01
Introgression lines (ILs), in which genetic material from wild tomato species is introgressed into a domesticated background, have been used extensively in tomato (Solanum lycopersicum) improvement. Here, we genotype an IL population derived from the wild desert tomato Solanum pennellii at ultrahigh density, providing the exact gene content harbored by each line. To take advantage of this information, we determine IL phenotypes for a suite of vegetative traits, ranging from leaf complexity, shape, and size to cellular traits, such as stomatal density and epidermal cell phenotypes. Elliptical Fourier descriptors on leaflet outlines provide a global analysis of highly heritable, intricate aspects of leaf morphology. We also demonstrate constraints between leaflet size and leaf complexity, pavement cell size, and stomatal density and show independent segregation of traits previously assumed to be genetically coregulated. Meta-analysis of previously measured traits in the ILs shows an unexpected relationship between leaf morphology and fruit sugar levels, which RNA-Seq data suggest may be attributable to genetically coregulated changes in fruit morphology or the impact of leaf shape on photosynthesis. Together, our results both improve upon the utility of an important genetic resource and attest to a complex, genetic basis for differences in leaf morphology between natural populations. PMID:23872539
Legler, C R; Brown, N R; Dunbar, R A; Harness, M D; Nguyen, K; Oyewole, O; Collier, W B
2015-06-15
The Scaled Quantum Mechanical (SQM) method of scaling calculated force constants to predict theoretically calculated vibrational frequencies is expanded to include a broad array of polarized and augmented basis sets based on the split valence 6-31G and 6-311G basis sets with the B3LYP density functional. Pulay's original choice of a single polarized 6-31G(d) basis coupled with a B3LYP functional remains the most computationally economical choice for scaled frequency calculations. But it can be improved upon with additional polarization functions and added diffuse functions for complex molecular systems. The new scale factors for the B3LYP density functional and the 6-31G, 6-31G(d), 6-31G(d,p), 6-31G+(d,p), 6-31G++(d,p), 6-311G, 6-311G(d), 6-311G(d,p), 6-311G+(d,p), 6-311G++(d,p), 6-311G(2d,p), 6-311G++(2d,p), 6-311G++(df,p) basis sets are shown. The double d polarized models did not perform as well, and the source of the decreased accuracy was investigated. An alternate system of generating internal coordinates, which uses the out-of-plane wagging coordinate whenever possible, makes vibrational assignments via potential energy distributions more meaningful. Automated software to produce SQM scaled vibrational calculations from different molecular orbital packages is presented. PMID:25766474
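The core of the SQM scheme is a symmetric scaling of the internal-coordinate force-constant matrix, F'_ij = (s_i * s_j)^(1/2) * F_ij, where each s_i is the empirical factor fitted for that coordinate's type (stretch, bend, torsion, ...). A minimal sketch, with a hypothetical 2x2 matrix and factors:

```python
import math

def sqm_scale(force_constants, scale_factors):
    """Apply Pulay-type SQM scaling F'_ij = sqrt(s_i * s_j) * F_ij to a
    force-constant matrix expressed in internal coordinates."""
    n = len(force_constants)
    return [[math.sqrt(scale_factors[i] * scale_factors[j]) * force_constants[i][j]
             for j in range(n)]
            for i in range(n)]
```

Diagonal elements pick up the full factor s_i, while off-diagonal couplings are scaled by the geometric mean of the two factors, so the scaled matrix stays symmetric; the scaled frequencies then follow from the usual Wilson GF analysis.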
Defining the optimal animal model for translational research using gene set enrichment analysis.
Weidner, Christopher; Steinfath, Matthias; Opitz, Elisa; Oelgeschläger, Michael; Schönfelder, Gilbert
2016-01-01
The mouse is the main model organism used to study the functions of human genes because most biological processes in the mouse are highly conserved in humans. Recent reports that compared identical transcriptomic datasets of human inflammatory diseases with datasets from mouse models using traditional gene-to-gene comparison techniques resulted in contradictory conclusions regarding the relevance of animal models for translational research. To reduce susceptibility to biased interpretation, all genes of interest for the biological question under investigation should be considered. Thus, standardized approaches for systematic data analysis are needed. We analyzed the same datasets using gene set enrichment analysis focusing on pathways assigned to inflammatory processes in either humans or mice. The analyses revealed a moderate overlap between all human and mouse datasets, with average positive and negative predictive values of 48% and 57%, respectively, for significant correlations. Subgroups of the septic mouse models (i.e., Staphylococcus aureus injection) correlated very well with most human studies. These findings support the applicability of targeted strategies to identify the optimal animal model and protocol to improve the success of translational research. PMID:27311961
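The positive and negative predictive values quoted in the abstract are ordinary confusion-matrix ratios; as a reminder (the counts below are made up, chosen only to reproduce 48% and 57%):

```python
def predictive_values(tp, fp, tn, fn):
    """PPV = TP/(TP+FP): fraction of predicted positives that are true.
    NPV = TN/(TN+FN): fraction of predicted negatives that are true."""
    return tp / (tp + fp), tn / (tn + fn)
```

In this context a "positive" would be a pathway whose direction of regulation in the mouse model is predicted to agree with the human dataset.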
Renormalization group invariance and optimal QCD renormalization scale-setting: a key issues review.
Wu, Xing-Gang; Ma, Yang; Wang, Sheng-Quan; Fu, Hai-Bing; Ma, Hong-Hao; Brodsky, Stanley J; Mojaza, Matin
2015-12-01
A valid prediction for a physical observable from quantum field theory should be independent of the choice of renormalization scheme; this is the primary requirement of renormalization group invariance (RGI). Satisfying scheme invariance is a challenging problem for perturbative QCD (pQCD), since a truncated perturbation series does not automatically satisfy the requirements of the renormalization group. In a previous review, we provided a general introduction to the various scale setting approaches suggested in the literature. As a step forward, in the present review, we present a discussion in depth of two well-established scale-setting methods based on RGI. One is the 'principle of maximum conformality' (PMC) in which the terms associated with the β-function are absorbed into the scale of the running coupling at each perturbative order; its predictions are scheme and scale independent at every finite order. The other approach is the 'principle of minimum sensitivity' (PMS), which is based on local RGI; the PMS approach determines the optimal renormalization scale by requiring the slope of the approximant of an observable to vanish. In this paper, we present a detailed comparison of the PMC and PMS procedures by analyzing two physical observables R(e+e-) and [Formula: see text] up to four-loop order in pQCD. At the four-loop level, the PMC and PMS predictions for both observables agree within small errors with those of conventional scale setting assuming a physically-motivated scale, and each prediction shows small scale dependences. However, the convergence of the pQCD series at high orders behaves quite differently: the PMC displays the best pQCD convergence since it eliminates divergent renormalon terms; in contrast, the convergence of the PMS prediction is questionable, often even worse than the conventional prediction based on an arbitrary guess for the renormalization scale. PMC predictions also have the property that any residual dependence on the choice
NASA Astrophysics Data System (ADS)
Zheng, Jingjing; Gour, Jeffrey R.; Lutz, Jesse J.; Włoch, Marta; Piecuch, Piotr; Truhlar, Donald G.
2008-01-01
The CCSD, CCSD(T), and CR-CC(2,3) coupled cluster methods, combined with five triple-zeta basis sets, namely, MG3S, aug-cc-pVTZ, aug-cc-pV(T+d)Z, aug-cc-pCVTZ, and aug-cc-pCV(T+d)Z, are tested against the DBH24 database of diverse reaction barrier heights. The calculations confirm that the inclusion of connected triple excitations is essential to achieving high accuracy for thermochemical kinetics. They show that various noniterative ways of incorporating connected triple excitations in coupled cluster theory, including the CCSD(T) approach, the full CR-CC(2,3) method, and approximate variants of CR-CC(2,3) similar to the triples corrections of the CCSD(2) approaches, are all about equally accurate for describing the effects of connected triply excited clusters in studies of activation barriers. The effect of freezing core electrons on the results of the CCSD, CCSD(T), and CR-CC(2,3) calculations for barrier heights is also examined. It is demonstrated that to include core correlation most reliably, a basis set including functions that correlate the core and that can treat core-valence correlation is required. On the other hand, the frozen-core approximation using valence-optimized basis sets that lead to relatively small computational costs of CCSD(T) and CR-CC(2,3) calculations can achieve almost as high accuracy as the analogous fully correlated calculations.
Fjodorova, Natalja; Novič, Marjana
2015-09-01
Engineering optimization is a topical goal in manufacturing and service industries. In this tutorial we present the concept of traditional parametric estimation models (Factorial Design (FD) and Central Composite Design (CCD)) for finding optimal setting parameters of technological processes. The 2D mapping method based on Auto-Associative Neural Networks (ANNs), in particular the Feed-Forward Bottleneck Neural Network (FFBN NN), is then described in comparison with the traditional methods. The FFBN NN mapping technique enables visualization of all optimal solutions of the considered processes, owing to the projection of the input as well as the output parameters into the same coordinates of the 2D map. This supports a more efficient way of improving the performance of existing systems. The two methods were compared on the basis of the optimization of solder paste printing processes as well as the optimization of properties of cheese. Applying both methods enables a double check, which increases the reliability of the selected optima or specification limits. PMID:26388367
Varandas, A. J. C.; Pansini, F. N. N.
2014-12-14
A method previously suggested to calculate the correlation energy at the complete one-electron basis set limit by reassignment of the basis hierarchical numbers and use of the unified singlet- and triplet-pair extrapolation scheme is applied to a test set of 106 systems, some with up to 48 electrons. The approach is utilized to obtain extrapolated correlation energies from raw values calculated with second-order Møller-Plesset perturbation theory and the coupled-cluster singles and doubles excitations method, some of the latter also with the perturbative triples corrections. The calculated correlation energies have also been used to predict atomization energies within an additive scheme. Good agreement is obtained with the best available estimates even when the (d, t) pair of hierarchical numbers is utilized to perform the extrapolations. This conceivably justifies that there is no strong reason to exclude double-zeta energies in extrapolations, especially if the basis is calibrated to comply with the theoretical model.
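The extrapolation step described above can be illustrated with the standard two-point inverse-cubic formula E(X) = E_CBS + A/X^3. This is a generic sketch, not the paper's unified singlet- and triplet-pair scheme; the hierarchical numbers stand in for basis cardinal numbers, and the MP2 correlation energies below are invented for illustration.

```python
# Two-point inverse-cubic extrapolation of correlation energies to the
# complete basis set (CBS) limit, assuming E(X) = E_CBS + A/X^3.
# Solving for E_CBS from two basis set levels X > Y gives:
#   E_CBS = (X^3 E_X - Y^3 E_Y) / (X^3 - Y^3)

def cbs_extrapolate(e_x: float, x: float, e_y: float, y: float) -> float:
    """Extrapolate two correlation energies (hartree) to the CBS limit."""
    return (x**3 * e_x - y**3 * e_y) / (x**3 - y**3)

# Illustrative (made-up) correlation energies for the (d, t) pair,
# i.e. cardinal numbers 2 and 3:
e_cbs = cbs_extrapolate(-0.300, 3.0, -0.280, 2.0)
print(round(e_cbs, 4))  # slightly below the triple-zeta value, as expected
```

Because the correction decays as X^-3, the extrapolated value always lies beyond the larger-basis energy, which is why even a (d, t) pair can land close to the CBS limit when the bases are well calibrated.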
Haiduke, Roberto L A; De Macedo, Luiz G M; Da Silva, Albérico B F
2005-07-15
An accurate relativistic universal Gaussian basis set (RUGBS) from H through No without variational prolapse has been developed by employing the Generator Coordinate Dirac-Fock (GCDF) method. The behavior of our RUGBS was tested with two nuclear models: (1) the finite nucleus of uniform proton-charge distribution, and (2) the finite nucleus with a Gaussian proton-charge distribution. The largest error between our Dirac-Fock-Coulomb total energy values and those calculated numerically is 8.8 mHartree for the No atom.
NASA Astrophysics Data System (ADS)
Rangel, T.; Caliste, D.; Genovese, L.; Torrent, M.
2016-11-01
We present a Projector Augmented-Wave (PAW) method based on a wavelet basis set. We implemented our wavelet-PAW method as a PAW library in the ABINIT package [http://www.abinit.org] and into BigDFT [http://www.bigdft.org]. We test our implementation on prototypical systems to illustrate the potential usage of our code. By using the wavelet-PAW method, we can simulate charged systems and systems with special boundary conditions with frozen-core all-electron precision. Furthermore, our work paves the way to large-scale and potentially order-N simulations within a PAW method.
Sloan, Derek J; Corbett, Elizabeth L; Butterworth, Anthony E; Mwandumba, Henry C; Khoo, Saye H; Mdolo, Aaron; Shani, Doris; Kamdolozi, Mercy; Allen, Jenny; Mitchison, Denis A; Coleman, David J; Davies, Geraint R
2012-07-01
Serial Sputum Colony Counting (SSCC) is an important technique in clinical trials of new treatments for tuberculosis (TB). Quantitative cultures on selective Middlebrook agar are used to calculate the rate of bacillary elimination from sputum collected from patients at different time points during the first 2 months of therapy. However, the procedure can be complicated by high sample contamination rates. This study, conducted in a resource-poor setting in Malawi, assessed the ability of different antifungal drugs in selective agar to reduce contamination. Overall, 229 samples were studied and 15% to 27% were contaminated. Fungal organisms were particularly implicated, and samples collected later in treatment were at particular risk (P < 0.001). Amphotericin B (AmB) is the standard antifungal drug used on SSCC plates at a concentration of 10 mg/ml. On selective Middlebrook 7H10 plates, AmB at 30 mg/ml reduced sample contamination by 17% compared with AmB at 10 mg/ml. The relative risk of contamination using AmB at 10 mg/ml was 1.79 (95% confidence interval [CI], 1.25 to 3.55). On Middlebrook 7H11 plates, a combination of AmB at 10 mg/ml and carbendazim at 50 mg/ml was associated with 10% less contamination than AmB at 30 mg/ml. The relative risk of contamination with AmB at 30 mg/ml was 1.79 (95% CI, 1.01 to 3.17). Improved antifungal activity was accompanied by a small reduction in bacillary counts, but this did not affect modeling of bacillary elimination. In conclusion, a combination of AmB and carbendazim optimized the antifungal activity of selective media for growth of TB. We recommend this method to reduce contamination rates and improve SSCC studies in African countries where the burden of TB is highest.
Jagiełło, Władysław; Wójcicki, Zbigniew; Barczyński, Bartłomiej J; Litwiniuk, Artur; Kalina, Roman Maciej
2014-01-01
The aim of this study is a methodology for the optimal choice of firefighters to solve difficult rescue tasks. 27 firefighters were analyzed, aged 22-50 years and with 2-27 years of work experience. Body balance disturbance tolerance skills (BBDTS), measured by the 'Rotational Test' (RT), and the time of transition (back and forth) on a 4-meter beam located 3 meters above the ground were the criteria for a simulated rescue task (SRT). RT and SRT were carried out first in a sports tracksuit and then in protective clothing. A total of 4 results of the RT and SRT form the substantive basis of the 4 rankings. The correlation of the RT and SRT results with 3 criteria for estimating BBDTS and 2 categories ranged from 0.478 (p<0.01) to 0.884 (p<0.01), and for the results of SRT, 0.911 (p<0.01). The basic ranking correlated very highly with the indicators of SRT (0.860 and 0.844), but with only 2 of the 6 indicators of RT (0.396 and 0.381; p<0.05). There was no correlation between the results of the RT and SRT, but there was an important partial correlation of these variables, though only once the effect was stabilized. The Rotational Test is a simple and easy-to-use tool for measuring body balance disturbance tolerance skills. However, the BBDTS typology provides accurate criteria for forecasting, on this basis and including the results of accurate motor simulations, the periodic ability of firefighters to solve the most difficult rescue tasks. PMID:24738515
Zen, Andrea; Luo, Ye; Sorella, Sandro; Guidoni, Leonardo
2014-01-01
Quantum Monte Carlo methods are accurate and promising many-body techniques for electronic structure calculations which, in recent years, have attracted growing interest thanks to their favorable scaling with system size and their efficient parallelization, particularly suited to modern high-performance computing facilities. The ansatz of the wave function and its variational flexibility are crucial both for the accurate description of molecular properties and for the method's ability to tackle large systems. In this paper, we extensively analyze, using different variational ansatzes, several properties of the water molecule, namely, the total energy, the dipole and quadrupole moments, the ionization and atomization energies, the equilibrium configuration, and the harmonic and fundamental frequencies of vibration. The investigation mainly focuses on variational Monte Carlo calculations, although several lattice-regularized diffusion Monte Carlo calculations are also reported. Through a systematic study, we provide a useful guide to the choice of the wave function, the pseudopotential, and the basis set for QMC calculations. We also introduce a new method for the computation of forces with finite variance on open systems and a new strategy for the definition of the atomic orbitals involved in the Jastrow-Antisymmetrised Geminal Power wave function, in order to drastically reduce the number of variational parameters. This scheme significantly improves the efficiency of QMC energy minimization in the case of large basis sets. PMID:24526929
NASA Astrophysics Data System (ADS)
Balabanov, Nikolai B.; Peterson, Kirk A.
2006-08-01
Recently developed correlation consistent basis sets for the first-row transition metal elements Sc-Zn have been utilized to determine complete basis set (CBS) scalar relativistic electron affinities, ionization potentials, and 4s(2)3d(n-2) → 4s(1)3d(n-1) electronic excitation energies with single-reference coupled cluster methods [CCSD(T), CCSDT, and CCSDTQ] and multireference configuration interaction with three reference spaces: 3d4s, 3d4s4p, and 3d4s4p3d'. The theoretical values calculated with the highest-order coupled cluster techniques at the CBS limit, including extrapolations to full configuration interaction, are well within 1 kcal/mol of the corresponding experimental data. For the early transition metal elements (Sc-Mn) the internally contracted multireference averaged coupled pair functional method yielded excellent agreement with experiment; however, the atomic properties of the late transition metals (Mn-Zn) proved much more difficult to describe at this level of theory, even with the largest reference function of the present work.
NASA Astrophysics Data System (ADS)
Dixit, Anant; Ángyán, János G.; Rocca, Dario
2016-09-01
A new formalism was recently proposed to improve random phase approximation (RPA) correlation energies by including approximate exchange effects [B. Mussard et al., J. Chem. Theory Comput. 12, 2191 (2016)]. Within this framework, by keeping only the electron-hole contributions to the exchange kernel, two approximations can be obtained: An adiabatic connection analog of the second order screened exchange (AC-SOSEX) and an approximate electron-hole time-dependent Hartree-Fock (eh-TDHF). Here we show how this formalism is suitable for an efficient implementation within the plane-wave basis set. The response functions involved in the AC-SOSEX and eh-TDHF equations can indeed be compactly represented by an auxiliary basis set obtained from the diagonalization of an approximate dielectric matrix. Additionally, the explicit calculation of unoccupied states can be avoided by using density functional perturbation theory techniques and the matrix elements of dynamical response functions can be efficiently computed by applying the Lanczos algorithm. As shown by several applications to reaction energies and weakly bound dimers, the inclusion of the electron-hole kernel significantly improves the accuracy of ground-state correlation energies with respect to RPA and semi-local functionals.
Chapman, Larry S; Pelletier, Kenneth R
2004-01-01
This paper provides an overview of a population health management (PHM) approach to the creation of optimal healing environments (OHEs) in worksite and corporate settings. It presents a framework for consideration as the context for potential research projects to examine the health, well-being, and economic effects of a set of newer "virtual" prevention interventions operating in an integrated manner in worksite settings. The main topics discussed are the fundamentals of PHM, with basic terminology and core principles, a description of PHM core technology, and the implications of a PHM approach to creating OHEs.
Sleep, fatigue, and medical training: setting an agenda for optimal learning and patient care.
Buysse, Daniel J; Barzansky, Barbara; Dinges, David; Hogan, Eileen; Hunt, Carl E; Owens, Judith; Rosekind, Mark; Rosen, Raymond; Simon, Frank; Veasey, Sigrid; Wiest, Francine
2003-03-15
The difficult issues surrounding discussions of sleep, fatigue, and medical education stem from an ironic biologic truth: physicians share a common physiology with their patients, a physiology that includes an absolute need for sleep and endogenous circadian rhythms governing alertness and performance. We cannot ignore the fact that patients become ill and require medical care at all times of the day and night, but we also cannot escape the fact that providing such care requires that medical professionals, including medical trainees, be awake and functioning at times that are in conflict with their endogenous sleep and circadian physiology. Finally, we cannot avoid the reality that medical education requires long hours in a constrained number of years. Solutions to the problem of sleep and fatigue in medical education will require the active involvement of numerous parties, ranging from trainees themselves to training program directors, hospital administrators, sleep and circadian scientists, and government funding and regulatory agencies. Each of these parties can be informed by previous laboratory and field studies in a variety of operational settings, including medical environments. Education regarding the known effects of sleep, circadian rhythms, and sleep deprivation can help to elevate the general level of discourse and point to potential solutions. Empiric research addressing the effects of sleep loss on patient safety, education outcomes, and resident health is urgently needed; equally important are the development and assessment of innovative countermeasures to maximize performance and learning. Addressing the economic realities of any changes in resident work hours is an essential component of any discussion of these issues. Finally, work-hour regulations may serve as one component of improved sleep and circadian health for medical trainees, but they should not be seen as substitutes for more original solutions that rely less on enforcement and more on
Xu, G; Hughes-Oliver, J M; Brooks, J D; Yeatts, J L; Baynes, R E
2013-01-01
Quantitative structure-activity relationship (QSAR) models are being used increasingly in skin permeation studies. The main idea of QSAR modelling is to quantify the relationship between biological activities and chemical properties, and thus to predict the activity of chemical solutes. As a key step, the selection of a representative and structurally diverse training set is critical to the prediction power of a QSAR model. Early QSAR models selected training sets in a subjective way and solutes in the training set were relatively homogenous. More recently, statistical methods such as D-optimal design or space-filling design have been applied but such methods are not always ideal. This paper describes a comprehensive procedure to select training sets from a large candidate set of 4534 solutes. A newly proposed 'Baynes' rule', which is a modification of Lipinski's 'rule of five', was used to screen out solutes that were not qualified for the study. U-optimality was used as the selection criterion. A principal component analysis showed that the selected training set was representative of the chemical space. Gas chromatograph amenability was verified. A model built using the training set was shown to have greater predictive power than a model built using a previous dataset [1].
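The kind of design-based training set selection described above can be sketched with a greedy maximin (space-filling) heuristic that spreads the selected solutes over the descriptor space. This is a simplified stand-in: the U-optimality criterion itself is not reproduced, and the 2-D descriptor values below are invented.

```python
# Greedy maximin selection of a structurally diverse training set from a
# candidate pool of chemical descriptor vectors: each new pick maximizes
# its minimum distance to the solutes already chosen.
import math

def maximin_select(points, k):
    """Return indices of k points chosen by the greedy maximin rule."""
    chosen = [0]  # seed with the first candidate
    while len(chosen) < k:
        best, best_d = None, -1.0
        for i, p in enumerate(points):
            if i in chosen:
                continue
            d = min(math.dist(p, points[j]) for j in chosen)
            if d > best_d:
                best, best_d = i, d
        chosen.append(best)
    return chosen

# Toy 2-D descriptor space (e.g. scaled log Kow vs molecular weight):
pool = [(0.0, 0.0), (0.1, 0.1), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
print(maximin_select(pool, 3))  # picks well-separated corners, not (0.1, 0.1)
```

A principal component projection of the selected rows, as in the paper, is the usual way to confirm the subset covers the chemical space of the full candidate set.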
NASA Astrophysics Data System (ADS)
Löptien, U.; Dietze, H.
2014-12-01
The Baltic Sea is a seasonally ice-covered, marginal sea in central northern Europe. It is an essential waterway connecting highly industrialised countries. Because ship traffic is intermittently hindered by sea ice, the local weather services have been monitoring sea ice conditions for decades. In the present study we revisit a historical monitoring data set, covering the winters 1960/1961 to 1978/1979. This data set, dubbed Data Bank for Baltic Sea Ice and Sea Surface Temperatures (BASIS) ice, is based on hand-drawn maps that were collected and then digitised in 1981 in a joint project of the Finnish Institute of Marine Research (today the Finnish Meteorological Institute (FMI)) and the Swedish Meteorological and Hydrological Institute (SMHI). BASIS ice was designed for storage on punch cards and all ice information is encoded by five digits. This makes the data hard to access. Here we present a post-processed product based on the original five-digit code. Specifically, we convert to standard ice quantities (including information on ice types), which we distribute in the current and free Network Common Data Format (NetCDF). Our post-processed data set will help to assess numerical ice models and provide easy-to-access unique historical reference material for sea ice in the Baltic Sea. In addition we provide statistics showcasing the data quality. The website http://www.baltic-ocean.org hosts the post-processed data and the conversion code. The data are also archived at the Data Publisher for Earth & Environmental Science, PANGAEA (doi:10.1594/PANGAEA.832353).
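Converting a punch-card style five-digit record into named ice quantities can be sketched as below. The actual BASIS digit semantics are defined by the project's conversion code hosted at www.baltic-ocean.org; the field names and the ice-type lookup table here are purely illustrative placeholders, not the real encoding.

```python
# Hypothetical decoder for a five-digit BASIS-style ice code. Each digit
# group is mapped to a named quantity; the assignments are invented for
# illustration and do NOT reproduce the real BASIS code book.
ICE_TYPE = {0: "no ice", 1: "new ice", 2: "level ice", 3: "ridged ice"}

def decode_ice_code(code: str) -> dict:
    """Split a five-digit record into (hypothetical) ice quantities."""
    assert len(code) == 5 and code.isdigit(), "expect exactly five digits"
    return {
        "ice_type": ICE_TYPE.get(int(code[0]), "unknown"),
        "concentration_tenths": int(code[1]),  # 0-9 tenths of coverage
        "thickness_class": int(code[2:4]),     # two-digit thickness class
        "quality_flag": int(code[4]),
    }

print(decode_ice_code("23415"))
```

In a real conversion the decoded fields would then be written per grid cell and date into a NetCDF file, which is the distribution format the post-processed product uses.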
NASA Astrophysics Data System (ADS)
Löptien, U.; Dietze, H.
2014-06-01
The Baltic Sea is a seasonally ice-covered, marginal sea situated in central northern Europe. It is an essential waterway connecting highly industrialised countries. Because ship traffic is intermittently hindered by sea ice, the local weather services have been monitoring sea ice conditions for decades. In the present study we revisit a historical monitoring data set, covering the winters 1960/1961 to 1978/1979. This data set, dubbed Data Bank for Baltic Sea Ice and Sea Surface Temperatures (BASIS) ice, is based on hand-drawn maps that were collected and then digitised in 1981 in a joint project of the Finnish Institute of Marine Research (today the Finnish Meteorological Institute (FMI)) and the Swedish Meteorological and Hydrological Institute (SMHI). BASIS ice was designed for storage on punch cards and all ice information is encoded by five digits. This makes the data hard to access. Here we present a post-processed product based on the original five-digit code. Specifically, we convert to standard ice quantities (including information on ice types), which we distribute in the current and free Network Common Data Format (NetCDF). Our post-processed data set will help to assess numerical ice models and provide easy-to-access unique historical reference material for sea ice in the Baltic Sea. In addition we provide statistics showcasing the data quality. The website www.baltic-ocean.org hosts the post-processed data and the conversion code. The data are also archived at the Data Publisher for Earth & Environmental Science, PANGAEA (doi:10.1594/PANGAEA.832353).
Optimization to the Culture Conditions for Phellinus Production with Regression Analysis and Gene-Set Based Genetic Algorithm
Li, Zhongwei; Xin, Yuezhen; Wang, Xun; Sun, Beibei; Xia, Shengyu; Li, Hui
2016-01-01
Phellinus is a genus of fungi known as one of the key components of cancer-preventing drugs. With the purpose of finding optimized culture conditions for Phellinus production in the laboratory, numerous single-factor experiments were performed and a large amount of experimental data was generated. In this work, we use the data collected from these experiments for regression analysis, obtaining a mathematical model that predicts Phellinus production. Subsequently, a gene-set based genetic algorithm is developed to optimize the values of the parameters involved in the culture conditions, including inoculum size, pH value, initial liquid volume, temperature, seed age, fermentation time, and rotation speed. The optimized parameter values are in accordance with biological experimental results, which indicates that our method has good predictive power for culture condition optimization. PMID:27610365
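The workflow above (fit a regression model, then maximize it with a genetic algorithm) can be sketched minimally as follows. The quadratic "model" is a stand-in for the paper's fitted regression, only two of the seven culture parameters (temperature and pH) are kept, and the gene-set encoding of the original work is simplified to plain tuples.

```python
# Minimal genetic algorithm maximizing a fitted production model over
# culture-condition parameters. The response surface is an invented
# quadratic with its optimum at temperature 28 degC and pH 6.5.
import random

random.seed(1)

def model(temp, ph):  # illustrative stand-in for the regression fit
    return -((temp - 28.0) ** 2) - 4 * (ph - 6.5) ** 2

def evolve(generations=60, pop_size=30):
    # random initial population within plausible culture ranges
    pop = [(random.uniform(20, 35), random.uniform(4, 9))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda g: model(*g), reverse=True)
        parents = pop[: pop_size // 2]          # elitist selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = random.sample(parents, 2)    # crossover: parent average
            t = (a[0] + b[0]) / 2 + random.gauss(0, 0.3)  # plus mutation
            p = (a[1] + b[1]) / 2 + random.gauss(0, 0.1)
            children.append((t, p))
        pop = parents + children
    return max(pop, key=lambda g: model(*g))

temp, ph = evolve()
print(round(temp, 1), round(ph, 1))  # should land near (28.0, 6.5)
```

Keeping the parents in the next generation makes the best fitness monotonically non-decreasing, which is why even this crude sketch reliably homes in on the model's optimum.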
Enumerating a Diverse Set of Building Designs Using Discrete Optimization: Preprint
Hale, E.; Long, N.
2010-08-01
Numerical optimization is a powerful method for identifying energy-efficient building designs. Automating the search process facilitates the evaluation of many more options than is possible with one-off parametric simulation runs. However, input data uncertainties and qualitative aspects of building design work against standard optimization formulations that return a single, so-called optimal design. This paper presents a method for harnessing a discrete optimization algorithm to obtain significantly different, economically viable building designs that satisfy an energy efficiency goal. The method is demonstrated using NREL's first-generation building analysis platform, Opt-E-Plus, and two example problems. We discuss the information content of the results, and the computational effort required by the algorithm.
NASA Astrophysics Data System (ADS)
Schmitz, G. J.
2016-01-01
The importance of microstructure simulation in integrated computational materials engineering settings in relation to the added value provided for macroscopic process simulation, as well as the contribution this kind of simulation can make in predicting material properties, are discussed. The roles of microstructure simulation in integrating scales ranging from component/process scales down to atomistic scales, and also in integrating experimental and virtual worlds, are highlighted. The hierarchical data format (HDF5) as a basis for enhancing the interoperability of the heterogeneous range of simulation tools and experimental datasets in the area of computational materials engineering is discussed. Several ongoing developments indicate that HDF5 might evolve into a de facto standard for digital microstructure representation of all length scales.
NASA Technical Reports Server (NTRS)
Schwenke, David W.
1992-01-01
The optimization of wave functions is considered for coupled vibrations represented by linear combinations of products of functions, each depending only on a single vibrational coordinate. Both the functions themselves and the configuration list are optimized. For the H2O molecule, highly accurate results are obtained for the lowest 15 levels using significantly shorter expansions than would otherwise be possible.
Kotasidis, Fotis A.; Zaidi, Habib
2014-06-15
Purpose: The Ingenuity time-of-flight (TF) PET/MR is a recently developed hybrid scanner combining the molecular imaging capabilities of PET with the excellent soft-tissue contrast of MRI. It is becoming common practice to characterize the system's point spread function (PSF) and understand its variation under spatial transformations to guide clinical studies and potentially use it within resolution-recovery image reconstruction algorithms. Furthermore, because the system uses overlapping, spherically symmetric Kaiser-Bessel basis functions during image reconstruction, its image-space PSF and reconstructed spatial resolution could be affected by the selection of the basis function parameters. Hence, a detailed investigation into the multidimensional basis function parameter space is needed to evaluate the impact of these parameters on spatial resolution. Methods: Using an array of 12 × 7 printed point sources, along with a custom-made phantom, and with the MR magnet on, the system's spatially variant image-based PSF was characterized in detail. Moreover, basis function parameters were systematically varied during reconstruction (list-mode TF OSEM) to evaluate their impact on the reconstructed resolution and the image-space PSF. Following the spatial resolution optimization, phantom and clinical studies were subsequently reconstructed using representative basis function parameters. Results: Based on the analysis and under standard basis function parameters, the axial and tangential components of the PSF were found to be almost invariant under spatial transformations (∼4 mm) while the radial component varied modestly from 4 to 6.7 mm. Using a systematic investigation into the basis function parameter space, the spatial resolution was found to degrade for basis functions with a large radius and small shape parameter. However, it was found that optimizing the spatial resolution in the reconstructed PET images, while having a good basis function
Jacob, S A; Ng, W L; Do, V
2015-02-01
There is wide variation in the proportion of newly diagnosed cancer patients who receive chemotherapy, indicating the need for a benchmark rate of chemotherapy utilisation. This study describes an evidence-based model that estimates the proportion of new cancer patients in whom chemotherapy is indicated at least once (defined as the optimal chemotherapy utilisation rate). The optimal chemotherapy utilisation rate can act as a benchmark for measuring and improving the quality of care. Models of optimal chemotherapy utilisation were constructed for each cancer site based on indications for chemotherapy identified from evidence-based treatment guidelines. Data on the proportion of patient- and tumour-related attributes for which chemotherapy was indicated were obtained, using population-based data where possible. Treatment indications and epidemiological data were merged to calculate the optimal chemotherapy utilisation rate. Monte Carlo simulations and sensitivity analyses were used to assess the effect of controversial chemotherapy indications and variations in epidemiological data on our model. Chemotherapy is indicated at least once in 49.1% (95% confidence interval 48.8-49.6%) of all new cancer patients in Australia. The optimal chemotherapy utilisation rates for individual tumour sites ranged from a low of 13% in thyroid cancers to a high of 94% in myeloma. The optimal chemotherapy utilisation rate can serve as a benchmark for planning chemotherapy services on a population basis. The model can be used to evaluate service delivery by comparing the benchmark rate with patterns of care data. The overall estimate for other countries can be obtained by substituting the relevant distribution of cancer types. It can also be used to predict future chemotherapy workload and can be easily modified to take into account future changes in cancer incidence, presentation stage or chemotherapy indications.
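The merging step described above reduces to a weighted sum: for each cancer site, the site's share of new cancer cases is multiplied by the fraction of those patients with a guideline indication for chemotherapy, and the products are summed. The shares and fractions below are illustrative placeholders, not the paper's Australian data.

```python
# Optimal utilisation rate as an incidence-weighted sum of the
# per-site fractions of patients with a chemotherapy indication.
# All numbers are invented for illustration.
sites = {
    # site: (share of all new cancers, fraction with a chemo indication)
    "thyroid": (0.02, 0.13),
    "myeloma": (0.01, 0.94),
    "lung":    (0.10, 0.75),
    "other":   (0.87, 0.47),
}

optimal_rate = sum(share * frac for share, frac in sites.values())
print(round(optimal_rate, 3))  # -> 0.496 for these made-up inputs
```

Substituting another country's distribution of cancer types into the shares, as the abstract notes, is all that is needed to re-derive the overall benchmark for that population.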
Campos, Cesar T; Jorge, Francisco E; Alves, Júlia M A
2012-09-01
Recently, segmented all-electron contracted double, triple, quadruple, quintuple, and sextuple zeta valence plus polarization function (XZP, X = D, T, Q, 5, and 6) basis sets for the elements from H to Ar were constructed for use in conjunction with nonrelativistic and Douglas-Kroll-Hess Hamiltonians. In this work, in order to obtain a better description of some molecular properties, the XZP sets for the second-row elements were augmented with high-exponent d "inner polarization functions," which were optimized in the molecular environment at the second-order Møller-Plesset level. At the coupled cluster level of theory, the inclusion of tight d functions for these elements was found to be essential to improve the agreement between theoretical and experimental zero-point vibrational energies (ZPVEs) and atomization energies. For all of the molecules studied, the ZPVE errors were always smaller than 0.5 %. The atomization energies were also improved by applying corrections due to core/valence correlation and atomic spin-orbit effects. This led to estimates for the atomization energies of various compounds in the gaseous phase. The largest error (1.2 kcal mol(-1)) was found for SiH(4).
On the selection of optimal feature region set for robust digital image watermarking.
Tsai, Jen-Sheng; Huang, Win-Bin; Kuo, Yau-Hwang
2011-03-01
A novel feature region selection method for robust digital image watermarking is proposed in this paper. This method aims to select a nonoverlapping feature region set that has the greatest robustness against various attacks and preserves image quality as much as possible after watermarking. It first performs a simulated attacking procedure using some predefined attacks to evaluate the robustness of every candidate feature region. According to the evaluation results, it then adopts a track-with-pruning procedure to search for a minimal primary feature set that can resist the most predefined attacks. In order to enhance its resistance to undefined attacks under the constraint of preserving image quality, the primary feature set is then extended by adding auxiliary feature regions. This work is formulated as a multidimensional knapsack problem and solved by a genetic-algorithm-based approach. The experimental results for StirMark attacks on some benchmark images support our expectation that the primary feature set can resist all the predefined attacks and that its extension can enhance the robustness against undefined attacks. Compared with some well-known feature-based methods, the proposed method exhibits better performance in robust digital watermarking.
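The auxiliary-region extension above is formulated as a multidimensional knapsack problem solved by a genetic algorithm. A minimal sketch of that formulation follows; the region values, costs, and budgets are made-up stand-ins, and the GA here is a generic elitist variant rather than the authors' specific operators.

```python
import random
random.seed(1)

# Illustrative multidimensional knapsack: each candidate region has a
# robustness "value" and consumes two budgets (e.g. quality distortion
# and capacity). All numbers are invented for illustration.
values = [8, 5, 9, 3, 7, 6]
cost_a = [4, 3, 5, 1, 4, 2]   # budget A consumed per region
cost_b = [3, 2, 4, 2, 3, 3]   # budget B consumed per region
CAP_A, CAP_B = 9, 8

def fitness(bits):
    """Total value of the selected regions, or 0 if a budget is exceeded."""
    if sum(c for c, b in zip(cost_a, bits) if b) > CAP_A: return 0
    if sum(c for c, b in zip(cost_b, bits) if b) > CAP_B: return 0
    return sum(v for v, b in zip(values, bits) if b)

def ga(pop_size=30, gens=60):
    n = len(values)
    pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]          # keep the fitter half
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, n)      # one-point crossover
            child = a[:cut] + b[cut:]
            child[random.randrange(n)] ^= 1   # point mutation
            children.append(child)
        pop = elite + children
    best = max(pop, key=fitness)
    return best, fitness(best)

best, score = ga()
print(best, score)
```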
Creative Self-Efficacy and Innovative Behavior in a Service Setting: Optimism as a Moderator
ERIC Educational Resources Information Center
Hsu, Michael L. A.; Hou, Sheng-Tsung; Fan, Hsueh-Liang
2011-01-01
Creativity research on the personality approach has focused on the relationship between individual attributes and innovative behavior. However, few studies have empirically examined the effects of positive psychological traits on innovative behavior in an organizational setting. This study examines the relationships among creative self-efficacy,…
Optimal Assembly of Tests with Item Sets. Research Report 98-12.
ERIC Educational Resources Information Center
van der Linden, Wim J.
Six methods for assembling tests from a pool with an item-set structure are presented. All methods are computational and based on the technique of mixed integer programming. The methods are evaluated using such criteria as the feasibility of their linear programming problems and their expected solution times. The methods are illustrated for two…
Brigantic, R T; Roggemann, M C; Welsh, B M; Bauer, K W
1998-02-10
We present the results of research aimed at optimizing adaptive-optics closed-loop bandwidth settings to maximize imaging-system performance. The optimum closed-loop bandwidth settings are determined as a function of target-object light levels and atmospheric seeing conditions. Our work shows that, for bright objects, the optimum closed-loop bandwidth is near the Greenwood frequency. However, for dim objects without the use of a laser beacon, the preferred closed-loop bandwidth settings are a small fraction of the Greenwood frequency. In addition, under low light levels, selection of the proper closed-loop bandwidth is more critical for achieving maximum performance than it is under high light levels. We also present a strategy for selecting the closed-loop bandwidth to provide robust system performance across different target-object light levels.
Alley, William M.
1986-01-01
Problems involving the combined use of contaminant transport models and nonlinear optimization schemes can be very expensive to solve. This paper explores the use of transport models with ordinary regression and regression on ranks to develop approximate response functions of concentrations at critical locations as a function of pumping and recharge at decision wells. These response functions combined with other constraints can often be solved very easily and may suggest reasonable starting points for combined simulation-management modeling or even relatively efficient operating schemes in themselves.
Data set of optimal parameters for colorimetric red assay of epoxide hydrolase activity.
de Oliveira, Gabriel Stephani; Adriani, Patricia Pereira; Borges, Flavia Garcia; Lopes, Adriana Rios; Campana, Patricia T; Chambergo, Felipe S
2016-09-01
The data presented in this article are related to the research article entitled "Epoxide hydrolase of Trichoderma reesei: Biochemical properties and conformational characterization" [1]. Epoxide hydrolases (EHs) are enzymes that catalyze the hydrolysis of epoxides to the corresponding vicinal diols. This article describes the optimal parameters for the colorimetric red assay to determine the enzymatic activity, with an emphasis on the characterization of the kinetic parameters, pH optimum, and thermal stability of this enzyme. Reagents that are not resistant to oxidation by sodium periodate can generate false positives and interfere with the final results of the red assay. PMID:27366781
Goldfeld, Dahlia A; Bochevarov, Arteum D; Friesner, Richard A
2008-12-01
This paper is a logical continuation of the 22-parameter localized orbital correction (LOC) methodology that we developed in previous papers [R. A. Friesner et al., J. Chem. Phys. 125, 124107 (2006); E. H. Knoll and R. A. Friesner, J. Phys. Chem. B 110, 18787 (2006)]. This methodology allows one to redress systematic density functional theory (DFT) errors, rooted in DFT's inherent inability to accurately describe nondynamical correlation. Variants of the LOC scheme, in conjunction with B3LYP (denoted as B3LYP-LOC), were previously applied to enthalpies of formation, ionization potentials, and electron affinities and showed impressive reductions in the errors. In this paper, we demonstrate for the first time that the B3LYP-LOC scheme is robust across different basis sets [6-31G(*), 6-311++G(3df,3pd), cc-pVTZ, and aug-cc-pVTZ] and reaction types (atomization reactions and molecular reactions). For example, for a test set of 70 molecular reactions, the LOC scheme reduces the mean unsigned error from 4.7 kcal/mol [obtained with B3LYP/6-311++G(3df,3pd)] to 0.8 kcal/mol. We also verified whether the LOC methodology would be equally successful if applied to the promising M05-2X functional. We conclude that although M05-2X produces better reaction enthalpies than B3LYP, the LOC scheme does not combine nearly as successfully with M05-2X as with B3LYP. A brief analysis of another functional, M06-2X, reveals that it is more accurate than M05-2X, but its combination with LOC still cannot compete in accuracy with B3LYP-LOC. Indeed, B3LYP-LOC remains the best method of computing reaction enthalpies.
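The LOC idea in miniature is a linear model added on top of a computed energy: counts of specific bonding environments multiplied by fitted correction parameters. The sketch below is hypothetical, with invented environment names, counts, and parameter values; the real scheme uses 22 fitted parameters tied to specific orbital environments.

```python
# Hypothetical LOC-style linear correction: E_corr = E_raw + sum_i n_i * c_i,
# where n_i counts occurrences of environment type i and c_i is a fitted
# parameter. All names and numbers below are invented for illustration.
params = {"single_bond": -0.4, "double_bond": 0.9, "lone_pair": 0.2}  # kcal/mol

def loc_corrected(e_raw, counts):
    """Apply the linear correction to a raw computed energy."""
    return e_raw + sum(counts.get(k, 0) * c for k, c in params.items())

raw = -52.3   # hypothetical raw reaction enthalpy, kcal/mol
print(loc_corrected(raw, {"single_bond": 3, "double_bond": 1, "lone_pair": 2}))
```

The appeal of the scheme is visible even in this toy form: once the parameters are fitted, the correction costs essentially nothing per molecule.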
A method of hidden Markov model optimization for use with geophysical data sets
NASA Technical Reports Server (NTRS)
Granat, R. A.
2003-01-01
Geophysics research has been faced with a growing need for automated techniques with which to process large quantities of data. A successful tool must meet a number of requirements: it should be consistent, require minimal parameter tuning, and produce scientifically meaningful results in reasonable time. We introduce a hidden Markov model (HMM)-based method for analysis of geophysical data sets that attempts to address these issues.
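As a minimal illustration of the HMM machinery underlying such a method, the forward recursion below computes the likelihood of an observation sequence for a toy two-state model; the probabilities are invented, not taken from the paper.

```python
# Forward algorithm: the likelihood computation at the core of HMM
# evaluation and training. Model values are illustrative only.
start = [0.6, 0.4]                    # initial state distribution
trans = [[0.7, 0.3], [0.4, 0.6]]      # state transition matrix
emit  = [[0.9, 0.1], [0.2, 0.8]]      # P(observation | state)

def forward_likelihood(obs):
    """P(observation sequence) via the forward recursion."""
    alpha = [start[s] * emit[s][obs[0]] for s in range(2)]
    for o in obs[1:]:
        alpha = [sum(alpha[p] * trans[p][s] for p in range(2)) * emit[s][o]
                 for s in range(2)]
    return sum(alpha)

print(forward_likelihood([0, 0, 1]))
```

A useful sanity check on any such implementation is that the likelihoods of all possible sequences of a fixed length sum to one.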
The Role of eHealth in Optimizing Preventive Care in the Primary Care Setting
Noble, Natasha; Mansfield, Elise; Waller, Amy; Henskens, Frans; Sanson-Fisher, Rob
2015-01-01
Modifiable health risk behaviors such as smoking, overweight and obesity, risky alcohol consumption, physical inactivity, and poor nutrition contribute to a substantial proportion of the world’s morbidity and mortality burden. General practitioners (GPs) play a key role in identifying and managing modifiable health risk behaviors. However, these are often underdetected and undermanaged in the primary care setting. We describe the potential of eHealth to help patients and GPs to overcome some of the barriers to managing health risk behaviors. In particular, we discuss (1) the role of eHealth in facilitating routine collection of patient-reported data on lifestyle risk factors, and (2) the role of eHealth in improving clinical management of identified risk factors through provision of tailored feedback, point-of-care reminders, tailored educational materials, and referral to online self-management programs. Strategies to harness the capacity of the eHealth medium, including the use of dynamic features and tailoring to help end users engage with, understand, and apply information need to be considered and maximized. Finally, the potential challenges in implementing eHealth solutions in the primary care setting are discussed. In conclusion, there is significant potential for innovative eHealth solutions to make a contribution to improving preventive care in the primary care setting. However, attention to issues such as data security and designing eHealth interfaces that maximize engagement from end users will be important to moving this field forward. PMID:26001983
Chou, Sheng-Kai; Jiau, Ming-Kai; Huang, Shih-Chia
2016-08-01
The growing ubiquity of vehicles has led to increased concerns about environmental issues. These concerns can be mitigated by implementing an effective carpool service. In an intelligent carpool system, an automated service process assists carpool participants in determining routes and matches. It is a discrete optimization problem that involves a system-wide condition as well as participants' expectations. In this paper, we solve the carpool service problem (CSP) to provide satisfactory ride matches. To this end, we developed a particle swarm carpool algorithm based on stochastic set-based particle swarm optimization (PSO). Our method introduces stochastic coding to augment traditional particles, and uses three notions to represent a particle: 1) particle position; 2) particle view; and 3) particle velocity. In this way, the set-based PSO (S-PSO) can be realized by local exploration. In the simulations and experiments, two kinds of discrete PSO (S-PSO and binary PSO (BPSO)) and a genetic algorithm (GA) were compared and examined using test benchmarks that simulate a real-world metropolis. We observed that the S-PSO consistently outperformed both the BPSO and the GA. Moreover, our method yielded the best result in a statistical test and successfully obtained numerical results for meeting the optimization objectives of the CSP.
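A rough sketch of the discrete-PSO idea follows, using the classic binary PSO (the simpler of the two variants compared above) on an invented bit-matching objective rather than the paper's carpool cost model; the S-PSO replaces the bit encoding with set-based positions and views.

```python
import math
import random
random.seed(2)

# Binary PSO on a toy objective: find the bit string matching a fixed
# "ideal" assignment. Both the target and the scoring are illustrative.
target = [1, 0, 1, 1, 0, 1, 0, 0]
score = lambda bits: sum(b == t for b, t in zip(bits, target))

def bpso(n_particles=20, iters=50):
    n = len(target)
    pos = [[random.randint(0, 1) for _ in range(n)] for _ in range(n_particles)]
    vel = [[0.0] * n for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                  # personal bests
    gbest = max(pos, key=score)[:]               # global best
    for _ in range(iters):
        for i, p in enumerate(pos):
            for d in range(n):
                r1, r2 = random.random(), random.random()
                vel[i][d] += 2 * r1 * (pbest[i][d] - p[d]) \
                           + 2 * r2 * (gbest[d] - p[d])
                # sigmoid maps velocity to a probability of setting the bit
                p[d] = 1 if random.random() < 1 / (1 + math.exp(-vel[i][d])) else 0
            if score(p) > score(pbest[i]):
                pbest[i] = p[:]
            if score(p) > score(gbest):
                gbest = p[:]
    return gbest

best = bpso()
print(best, score(best))
```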
Chang, Ni-Bin; Davila, Eric; Dyson, Brian; Brown, Ron
2005-10-15
Installing material recovery facilities (MRFs) in a solid waste management system could be a feasible alternative to achieve sustainable development goals in urban areas if current household and curbside recycling cannot prove successful in the long run. This paper addresses the optimal site selection and capacity planning for a MRF in conjunction with an optimal shipping strategy of solid waste streams in a multi-district urban region. Screening of material recovery and disposal capacity alternatives can be achieved in terms of economic feasibility, technology limitation, recycling potential, and site availability. The optimization objectives include economic impacts characterized by recycling income and cost components for waste management, while the constraint set consists of mass balance, capacity limitation, recycling limitation, scale economy, conditionality, and relevant screening constraints. A case study for the City of San Antonio, Texas (USA) presents a vivid example where scenario planning demonstrates the robustness and flexibility of this modeling analysis. It proves especially useful when determining MRF ownership structure. Each scenario experiences two case settings: (1) two MRF sites are proposed for selection and (2) a single MRF site is sought. Cost analysis confirms processing fees are not the driving force in the City's operation, but rather shipping cost. Sensitivity analysis solidifies the notion that significant public participation plays the most important role in minimizing solid waste management expenses.
Tong, Xin; Cerný, Jirí; Müller-Dethlefs, Klaus; Dessent, Caroline E H
2008-07-01
Two conformational isomers of the aromatic hydrocarbon n-butylbenzene have been studied using two-color MATI (mass analyzed threshold ionization) spectroscopy to explore the effect of conformation on ionization dynamics. Cationic states of gauche-conformer III and anti-conformer IV were selectively produced by two-color excitation via the respective S1 origins. Adiabatic ionization potentials of the gauche- and anti-conformations were determined to be 70146 and 69872 +/- 5 cm(-1), respectively. Spectral features and vibrational modes are interpreted with the aid of MP2/cc-pVDZ ab initio calculations, and ionization-induced changes in the molecular conformations are discussed. Complete basis set (CBS) ab initio studies at the MP2 level reveal reliable energetics for all four n-butylbenzene conformers observed in earlier two-color REMPI (resonance enhanced multiphoton ionization) spectra. For the S0 state, the energies of conformers III, IV, and V lie above that of conformer I by 130, 289, and 73 cm(-1), respectively. Furthermore, the combination of the CBS calculations with the measured REMPI and MATI spectra allowed the determination of the energetics of all four conformers in the S1 and D0 states.
Rodriguez-Bautista, Mariano; Díaz-García, Cecilia; Navarrete-López, Alejandra M; Vargas, Rubicelia; Garza, Jorge
2015-07-21
In this report, we use a new basis set for Hartree-Fock calculations on many-electron atoms confined by soft walls. One- and two-electron integrals were programmed in a code based on parallel programming techniques. The results obtained with this proposal for hydrogen and helium atoms were contrasted with other proposals for studying one- and two-electron confined atoms, where we reproduced or improved the results previously reported. Usually, an atom enclosed by hard walls has been used as a model to study confinement effects on orbital energies; the main conclusion reached with this model is that orbital energies always go up when the confinement radius is reduced. However, such an observation is not necessarily valid for atoms confined by penetrable walls. The main reason behind this result is that for atoms with large polarizability, like beryllium or potassium, the external orbitals are delocalized when the confinement is imposed and, consequently, the internal orbitals behave as if they were in an ionized atom. Naturally, the shell structure of these atoms is modified drastically when they are confined. Delocalization was proposed as an argument for atoms confined by hard walls, but it was never verified. In this work, the confinement imposed by soft walls allows one to analyze the delocalization concept in many-electron atoms.
Monari, Antonio; Bendazzoli, Gian Luigi; Evangelisti, Stefano; Angeli, Celestino; Ben Amor, Nadia; Borini, Stefano; Maynau, Daniel; Rossi, Elda
2007-03-01
The dispersion interactions of the Ne2 dimer were studied using both the long-range perturbative and supramolecular approaches: for the long-range approach, full-CI or string-truncated CI methods were used, while for the supramolecular treatments, the energy curves were computed using configuration interaction with single and double excitations (CISD), coupled cluster with single and double excitations, and coupled cluster with single, double, and perturbative triple excitations. From the interatomic potential-energy curves obtained by the supramolecular approach, the C6 and C8 dispersion coefficients were computed via an interpolation scheme, and they were compared with the corresponding values obtained within the long-range perturbative treatment. We found that the lack of size consistency of the CISD approach makes this method completely useless for computing dispersion coefficients, even when the effect of the basis-set superposition error on the dimer curves is considered. The largest full-CI space we were able to use contains more than 1 billion symmetry-adapted Slater determinants, and it is, to our knowledge, the largest calculation of second-order properties ever done at the full-CI level. Finally, a new data format and libraries (Q5Cost) were used in order to interface the different codes used in the present study.
Richard, Ryan M; Marshall, Michael S; Dolgounitcheva, O; Ortiz, J V; Brédas, Jean-Luc; Marom, Noa; Sherrill, C David
2016-02-01
In designing organic materials for electronics applications, particularly for organic photovoltaics (OPV), the ionization potential (IP) of the donor and the electron affinity (EA) of the acceptor play key roles. This makes OPV design an appealing application for computational chemistry since IPs and EAs are readily calculable from most electronic structure methods. Unfortunately reliable, high-accuracy wave function methods, such as coupled cluster theory with single, double, and perturbative triples [CCSD(T)] in the complete basis set (CBS) limit are too expensive for routine applications to this problem for any but the smallest of systems. One solution is to calibrate approximate, less computationally expensive methods against a database of high-accuracy IP/EA values; however, to our knowledge, no such database exists for systems related to OPV design. The present work is the first of a multipart study whose overarching goal is to determine which computational methods can be used to reliably compute IPs and EAs of electron acceptors. This part introduces a database of 24 known organic electron acceptors and provides high-accuracy vertical IP and EA values expected to be within ±0.03 eV of the true non-relativistic, vertical CCSD(T)/CBS limit. Convergence of IP and EA values toward the CBS limit is studied systematically for the Hartree-Fock, MP2 correlation, and beyond-MP2 coupled cluster contributions to the focal point estimates. PMID:26731487
Xiao, Xun; Geyer, Veikko F; Bowne-Anderson, Hugo; Howard, Jonathon; Sbalzarini, Ivo F
2016-08-01
Biological filaments, such as actin filaments, microtubules, and cilia, are often imaged using different light-microscopy techniques. Reconstructing the filament curve from the acquired images constitutes the filament segmentation problem. Since filaments have lower dimensionality than the image itself, there is an inherent trade-off between tracing the filament with sub-pixel accuracy and avoiding noise artifacts. Here, we present a globally optimal filament segmentation method based on B-spline vector level-sets and a generalized linear model for the pixel intensity statistics. We show that the resulting optimization problem is convex and can hence be solved with global optimality. We introduce a simple and efficient algorithm to compute such optimal filament segmentations, and provide an open-source implementation as an ImageJ/Fiji plugin. We further derive an information-theoretic lower bound on the filament segmentation error, quantifying how well an algorithm could possibly do given the information in the image. We show that our algorithm asymptotically reaches this bound in the spline coefficients. We validate our method in comprehensive benchmarks, compare with other methods, and show applications from fluorescence, phase-contrast, and dark-field microscopy.
DeBaun, M.R.; Sox, H.C. Jr.
1991-07-01
Erythrocyte protoporphyrin (EP) was introduced in the 1970s as an inexpensive screening test for lead poisoning. As greater knowledge of lead poisoning has accumulated, the recommended EP level at which further evaluation for lead poisoning should be initiated has been lowered from greater than or equal to 50 micrograms/dL to greater than or equal to 35 micrograms/dL. The purpose of this study was to evaluate the utility of this EP threshold. A receiver operating characteristic (ROC) curve was constructed to assess the relationship between the true-positive rate and false-positive rate of EP at various decision thresholds. The ROC curve was constructed with data from the second National Health and Nutrition Examination Survey from 1976 to 1980, which included 2673 children 6 years of age or younger who had both blood lead and EP level determinations. Decision analysis was then used to determine the optimal EP decision threshold for detecting a blood lead level greater than or equal to 25 micrograms/dL. The ROC curve demonstrated that EP is a poor predictor of a blood lead level greater than or equal to 25 micrograms/dL. At the currently recommended EP decision threshold of 35 micrograms/dL, the true-positive and false-positive rates of EP are 0.23 and 0.04, respectively. As a result of the inadequate performance of EP screening for lead poisoning, when the prevalence of lead poisoning is greater than 8%, there is no EP decision threshold that optimizes the relationship between the cost of screening normal children and the benefit of detecting lead-poisoned children. Erythrocyte protoporphyrin measurement is not sufficiently sensitive to be recommended uniformly as a screening test for lead poisoning.
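The core ROC computation behind such an analysis is simple: sweep a decision threshold over the test values and record the true-positive and false-positive rates at each setting. The EP values below are fabricated for illustration; the study used NHANES II data.

```python
# Fabricated EP levels (micrograms/dL) for illustration only:
# children with blood lead >= 25 ug/dL vs. those below it.
ep_poisoned = [55, 42, 38, 60, 33, 47]
ep_normal   = [20, 28, 36, 22, 31, 25, 18, 30]

def rates(threshold):
    """True-positive and false-positive rates when EP >= threshold
    is called a positive screen."""
    tpr = sum(x >= threshold for x in ep_poisoned) / len(ep_poisoned)
    fpr = sum(x >= threshold for x in ep_normal) / len(ep_normal)
    return tpr, fpr

# Sweeping thresholds traces out the ROC curve
for t in (25, 35, 50):
    tpr, fpr = rates(t)
    print(f"threshold {t}: TPR={tpr:.2f} FPR={fpr:.2f}")
```

Raising the threshold trades sensitivity for specificity, which is exactly the trade-off the decision analysis in the study weighs against prevalence and screening costs.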
Pennings, Jeroen L A; Theunissen, Peter T; Piersma, Aldert H
2012-10-28
The murine neural embryonic stem cell test (ESTn) is an in vitro model for neurodevelopmental toxicity testing. Recent studies have shown that application of transcriptomics analyses in the ESTn is useful for obtaining more accurate predictions as well as mechanistic insights. Gene expression responses due to stem cell neural differentiation versus toxicant exposure could be distinguished using the Principal Component Analysis based differentiation track algorithm. In this study, we performed a de novo analysis on combined raw data (10 compounds, 19 exposures) from three previous transcriptomics studies to identify an optimized gene set for neurodevelopmental toxicity prediction in the ESTn. By evaluating predictions of 200,000 randomly selected gene sets, we identified genes which significantly contributed to the prediction reliability. A set of 100 genes was obtained, predominantly involved in (neural) development. Further stringency restrictions resulted in a set of 29 genes that allowed for 84% prediction accuracy (area under the curve 94%). We anticipate these gene sets will contribute to further improve ESTn transcriptomics studies aimed at compound risk assessment.
NASA Astrophysics Data System (ADS)
Zhao, Jianhu; Wang, Xiao; Zhang, Hongmei; Hu, Jun; Jian, Xiaomin
2016-06-01
To perform side scan sonar (SSS) image segmentation accurately and efficiently, a novel segmentation algorithm based on the neutrosophic set (NS) and quantum-behaved particle swarm optimization (QPSO) is proposed in this paper. Firstly, the neutrosophic subset images are obtained by transforming the input image into the NS domain. Then, a co-occurrence matrix is accurately constructed based on these subset images, and the entropy of the gray-level image is used as the fitness function of the QPSO algorithm. Moreover, the optimal two-dimensional segmentation threshold vector is quickly obtained by QPSO. Finally, the contours of the target of interest are segmented with the threshold vector and extracted by mathematical morphology operations. To further improve segmentation efficiency, single-threshold segmentation, an alternative algorithm, is recommended for shadow segmentation by considering the gray-level characteristics of the shadow. The accuracy and efficiency of the proposed algorithm are assessed with experiments on SSS image segmentation.
NASA Astrophysics Data System (ADS)
Alhamwi, Alaa; Kleinhans, David; Weitemeyer, Stefan; Vogt, Thomas
2014-12-01
Renewable energy sources are gaining importance in the Middle East and North Africa (MENA) region. The purpose of this study is to quantify the optimal mix of renewable power generation in the MENA region, taking Morocco as a case study. Based on hourly meteorological data and load data, a 100% solar-plus-wind scenario for Morocco is investigated. For the optimal-mix analyses, a mismatch energy modelling approach is adopted with the objective of minimising the required storage capacities. For a hypothetical Moroccan energy supply system based entirely on renewable energy sources, our results show that the minimum storage capacity is achieved at a share of 63% solar and 37% wind power generation.
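The mismatch approach can be sketched as follows: scale each generation series to the average load, form the cumulative hourly mismatch for a given solar share, and take the range of that cumulative series as the required storage span. The six-hour toy series below are illustrative stand-ins for the hourly meteorological and load data, and the minimisation over shares is a simple grid search.

```python
# Toy normalized time series (stand-ins for hourly data).
load  = [1.0, 1.2, 1.5, 1.3, 1.0, 0.8]
solar = [0.0, 0.5, 1.0, 0.9, 0.3, 0.0]
wind  = [0.8, 0.6, 0.3, 0.4, 0.7, 0.9]

def storage_needed(a):
    """Storage span required for solar share a (wind share 1 - a),
    with each source scaled so total generation matches total load."""
    mean_load = sum(load) / len(load)
    def scaled(series):
        m = sum(series) / len(series)
        return [x * mean_load / m for x in series]
    s, w = scaled(solar), scaled(wind)
    level, levels = 0.0, [0.0]
    for i in range(len(load)):
        level += a * s[i] + (1 - a) * w[i] - load[i]   # cumulative mismatch
        levels.append(level)
    return max(levels) - min(levels)

shares = [i / 20 for i in range(21)]            # grid search over solar share
best_share = min(shares, key=storage_needed)
print(best_share, storage_needed(best_share))
```

With real hourly data the same bookkeeping, applied over a full year, yields the kind of optimal solar/wind split reported in the study.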
Kim, Jeongnim; Reboredo, Fernando A
2014-01-01
The self-healing diffusion Monte Carlo method for complex functions [F. A. Reboredo, J. Chem. Phys. 136, 204101 (2012)] and some ideas of the correlation function Monte Carlo approach [D. M. Ceperley and B. Bernu, J. Chem. Phys. 89, 6316 (1988)] are blended to obtain a method for the calculation of thermodynamic properties of many-body systems at low temperatures. In order to allow the evolution in imaginary time to describe the density matrix, we remove the fixed-node restriction using complex antisymmetric trial wave functions. A statistical method is derived for the calculation of finite-temperature properties of many-body systems near the ground state. In the process we also obtain a parallel algorithm that optimizes the many-body basis of a small subspace of the many-body Hilbert space. This small subspace is optimized to have maximum overlap with the subspace spanned by the lower-energy eigenstates of a many-body Hamiltonian. We show in a model system that the Helmholtz free energy is minimized within this subspace as the iteration number increases. We show that the subspace spanned by the small basis systematically converges towards the subspace spanned by the lowest-energy eigenstates. Possible applications of this method to calculate the thermodynamic properties of many-body systems near the ground state are discussed. The resulting basis can also be used to accelerate the calculation of ground or excited states with quantum Monte Carlo.
Wilson, S; Tod, M; Ouerdani, A; Emde, A; Yarden, Y; Adda Berkane, A; Kassour, S; Wei, M X; Freyer, G; You, B; Grenier, E; Ribba, B
2015-12-01
We present a system of nonlinear ordinary differential equations used to quantify the complex dynamics of the interactions between tumor growth, vasculature generation, and antiangiogenic treatment. The primary dataset consists of longitudinal tumor size measurements (1,371 total observations) in 105 colorectal tumor-bearing mice. Mice received single or combination administration of sunitinib, an antiangiogenic agent, and/or irinotecan, a cytotoxic agent. Depending on the dataset, parameter estimation was performed either using a mixed-effect approach or by nonlinear least squares. Through a log-likelihood ratio test, we conclude that there is a potential synergistic interaction between sunitinib and irinotecan when administered in combination in preclinical settings. Model simulations were then compared to data from a follow-up preclinical experiment. We conclude that the model has predictive value in identifying the therapeutic window in which the timing between the administrations of these two drugs is most effective. PMID:26904386
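The abstract does not reproduce the actual equations, so the following is a purely hypothetical sketch of the kind of coupled tumor-vasculature ODE system described, with an antiangiogenic effect (e_sun) and a cytotoxic effect (e_iri), integrated by forward Euler. All functional forms and parameter values are illustrative assumptions, not the paper's model.

```python
def simulate_tumor(days=60.0, dt=0.01, a=0.1, b=0.05, c=0.02,
                   e_sun=0.0, e_iri=0.0):
    """Hypothetical tumor size S limited by a vascular carrying capacity K;
    e_sun shrinks vasculature (antiangiogenic), e_iri kills tumor cells
    (cytotoxic). Forward-Euler integration; parameters are illustrative."""
    s, k = 1.0, 10.0
    for _ in range(int(days / dt)):
        ds = a * s * (1.0 - s / k) - e_iri * s   # vasculature-limited growth minus kill
        dk = b * s - c * k - e_sun * k           # angiogenesis minus regression and drug effect
        s += ds * dt
        k += dk * dt
    return s

untreated = simulate_tumor()
combo = simulate_tumor(e_sun=0.05, e_iri=0.2)
```

In the paper such a system is fitted to the mouse data with mixed-effect or least-squares estimation; here the sketch only illustrates that combination dosing suppresses growth in this toy setting.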
Foadi, James; Aller, Pierre; Alguel, Yilmaz; Cameron, Alex; Axford, Danny; Owen, Robin L.; Armour, Wes; Waterman, David G.; Iwata, So; Evans, Gwyndaf
2013-08-01
A systematic approach to the scaling and merging of data from multiple crystals in macromolecular crystallography is introduced and explained. The availability of intense microbeam macromolecular crystallography beamlines at third-generation synchrotron sources has enabled data collection and structure solution from microcrystals of <10 µm in size. The increased likelihood of severe radiation damage where microcrystals or particularly sensitive crystals are used forces crystallographers to acquire large numbers of data sets from many crystals of the same protein structure. The associated analysis and merging of multi-crystal data is currently a manual and time-consuming step. Here, a computer program, BLEND, that has been written to assist with and automate many of the steps in this process is described. It is demonstrated how BLEND has successfully been used in the solution of a novel membrane protein.
Duwal, Sulav; Winkelmann, Stefanie; Schütte, Christof; von Kleist, Max
2015-01-01
An estimated 2.7 million new HIV-1 infections occurred in 2010. 'Treatment-for-prevention' may strongly prevent HIV-1 transmission. The basic idea is that immediate treatment initiation rapidly decreases virus burden, which reduces the number of transmittable viruses and thereby the probability of infection. However, HIV inevitably develops drug resistance, which leads to virus rebound and nullifies the effect of 'treatment-for-prevention' for the time it remains unrecognized. While timely conducted treatment changes may avert periods of viral rebound, the necessary treatment options and diagnostics may be lacking in resource-constrained settings. Within this work, we provide a mathematical platform for comparing different treatment paradigms that can be applied to many medical phenomena. We use this platform to optimize two distinct approaches for the treatment of HIV-1: (i) a diagnostic-guided treatment strategy, based on infrequent and patient-specific diagnostic schedules, and (ii) a pro-active strategy that allows treatment adaptation prior to diagnostic ascertainment. Both strategies are compared to current clinical protocols (standard of care and the HPTN052 protocol) in terms of patient health, economic means, and reduction in HIV-1 onward transmission, taking South Africa as an example. All therapeutic strategies are assessed using a coarse-grained stochastic model of within-host HIV dynamics, and pseudo-codes for solving the respective optimal control problems are provided. Our mathematical model suggests that both optimal strategies (i)-(ii) perform better than the current clinical protocols and no treatment in terms of economic means, life prolongation and reduction of HIV transmission. The optimal diagnostic-guided strategy suggests rare diagnostics and performs similarly to the optimal pro-active strategy. Our results suggest that 'treatment-for-prevention' may be further improved using either of the two analyzed treatment paradigms. PMID:25927964
NASA Astrophysics Data System (ADS)
Tsimpidi, A. P.; Karydis, V. A.; Pandis, S. N.; Zavala, M.; Lei, W.; Molina, L. T.
2007-12-01
Anthropogenic air pollution is an increasingly serious problem for public health, agriculture, and global climate. Organic material (OM) contributes ~20-50% to the total fine aerosol mass at continental mid-latitudes. Although OM accounts for a large fraction of PM2.5 concentration worldwide, the contributions of primary and secondary organic aerosol have been difficult to quantify. In this study, new primary and secondary organic aerosol modules were added to PMCAMx, a three-dimensional chemical transport model (Gaydos et al., 2007), for use with the SAPRC99 chemistry mechanism (Carter, 2000; ENVIRON, 2006), based on recent smog chamber studies (Robinson et al., 2007). The new modeling framework is based on the volatility basis-set approach (Lane et al., 2007): both primary and secondary organic components are assumed to be semivolatile and photochemically reactive and are distributed in logarithmically spaced volatility bins. The emission inventory, which uses as its starting point the MCMA 2004 official inventory (CAM, 2006), is modified, and the primary organic aerosol (POA) emissions are distributed by volatility based on dilution experiments (Robinson et al., 2007). Sensitivity tests where POA is considered as nonvolatile and POA and SOA as chemically reactive are also described. In all cases PMCAMx is applied in the Mexico City Metropolitan Area during March 2006. The modeling domain covers a 180x180x6 km region in the MCMA with 3x3 km grid resolution. The model predictions are compared with Aerodyne aerosol mass spectrometer (AMS) observations from the MILAGRO Campaign. References Robinson, A. L.; Donahue, N. M.; Shrivastava, M. K.; Weitkamp, E. A.; Sage, A. M.; Grieshop, A. P.; Lane, T. E.; Pandis, S. N.; Pierce, J. R., 2007. Rethinking organic aerosols: semivolatile emissions and photochemical aging. Science 315, 1259-1262. Gaydos, T. M.; Pinder, R. W.; Koo, B.; Fahey, K. M.; Pandis, S. N., 2007. Development and application of a three-dimensional aerosol
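The core bookkeeping of the volatility basis-set (VBS) approach mentioned here is absorptive partitioning over logarithmically spaced saturation-concentration bins: the particle-phase fraction of bin i is 1/(1 + C*_i/C_OA), solved self-consistently for the total particle-phase mass C_OA. The bin masses below are illustrative, not the study's emission data.

```python
def vbs_partition(masses, c_star, tol=1e-10):
    """Self-consistent gas/particle split over volatility bins: iterate
    C_OA = sum_i M_i / (1 + C*_i / C_OA) to a fixed point (ug/m3)."""
    c_oa = 1.0  # initial guess
    for _ in range(1000):
        new = sum(m / (1.0 + cs / c_oa) for m, cs in zip(masses, c_star))
        if abs(new - c_oa) < tol:
            break
        c_oa = max(new, 1e-12)
    fractions = [1.0 / (1.0 + cs / c_oa) for cs in c_star]
    return c_oa, fractions

# logarithmically spaced saturation concentrations, C* = 0.01 ... 1e6 ug/m3
bins = [10.0 ** k for k in range(-2, 7)]
mass = [1.0] * len(bins)          # illustrative: 1 ug/m3 of material per bin
c_oa, frac = vbs_partition(mass, bins)
```

Low-volatility bins end up almost entirely in the particle phase and high-volatility bins almost entirely in the gas phase, which is why photochemical aging (moving mass toward lower C*) increases simulated organic aerosol.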
NASA Astrophysics Data System (ADS)
Zhang, Q. J.; Beekmann, M.; Drewnick, F.; Freutel, F.; Schneider, J.; Crippa, M.; Prevot, A. S. H.; Baltensperger, U.; Poulain, L.; Wiedensohler, A.; Sciare, J.; Gros, V.; Borbon, A.; Colomb, A.; Michoud, V.; Doussin, J.-F.; Denier van der Gon, H. A. C.; Haeffelin, M.; Dupont, J.-C.; Siour, G.; Petetin, H.; Bessagnet, B.; Pandis, S. N.; Hodzic, A.; Sanchez, O.; Honoré, C.; Perrussel, O.
2013-06-01
Simulations with the chemistry transport model CHIMERE are compared to measurements performed during the MEGAPOLI (Megacities: Emissions, urban, regional and Global Atmospheric POLlution and climate effects, and Integrated tools for assessment and mitigation) summer campaign in the Greater Paris region in July 2009. The volatility-basis-set approach (VBS) is implemented into this model, taking into account the volatility of primary organic aerosol (POA) and the chemical aging of semi-volatile organic species. Organic aerosol is the main focus and is simulated with three different configurations with a modified treatment of POA volatility and modified secondary organic aerosol (SOA) formation schemes. In addition, two types of emission inventories are used as model input in order to test the uncertainty related to the emissions. Predictions of basic meteorological parameters and primary and secondary pollutant concentrations are evaluated, and four pollution regimes are defined according to the air mass origin. Primary pollutants are generally overestimated, while ozone is consistent with observations. Sulfate is generally overestimated, while ammonium and nitrate levels are well simulated with the refined emission data set. As expected, the simulation with non-volatile POA and a single-step SOA formation mechanism largely overestimates POA and underestimates SOA. Simulation of organic aerosol with the VBS approach taking into account the aging of semi-volatile organic compounds (SVOC) shows the best correlation with measurements. High-concentration events observed mostly after long-range transport are well reproduced by the model. Depending on the emission inventory used, simulated POA levels are either reasonable or underestimated, while SOA levels tend to be overestimated. Several uncertainties related to the VBS scheme (POA volatility, SOA yields, the aging parameterization), to emission input data, and to simulated OH levels can be responsible for this behavior
NASA Astrophysics Data System (ADS)
Knote, C. J.; Hodzic, A.; Aumont, B.; Madronich, S.
2014-12-01
Traditional understanding views secondary organic aerosol (SOA) formation in the atmosphere as continuous gas-phase oxidation of precursors such as isoprene, aromatics or alkanes. Recent research found that these oxidation products are also highly water soluble. It is further understood that the liquid phase of cloud droplets as well as deliquesced particles could mediate SOA formation through chemistry in the aqueous phase. While the effect of multi-phase processing has been studied in detail for specific compounds like glyoxal or methylglyoxal, an integrated approach that considers the large number of individual compounds has been missing due to the complexity involved. In our work we explore the effects of multi-phase processing on secondary organic aerosol from an explicit modeling perspective. Volatility and solubility determine in which phase a given molecule will be found under given atmospheric conditions. Volatility has already been used to simplify the description of SOA formation in the gas phase in what became known as the Volatility Basis Set (VBS) approach. Compounds contributing to SOA formation are grouped by volatility and then treated as a whole. A number of studies extended the VBS by adding a second dimension such as the oxygen-to-carbon ratio or the mean oxidation state. In our work we use functional groups as the second dimension. Using explicit oxidation chemistry modeling (GECKO-A) we derive SOA yields as well as their composition in terms of functional groups for commonly used precursors. We then investigate the effect of simply partitioning functional-group-specific organic mass into cloud droplets and deliquesced aerosol based on their estimated solubility. Finally, we apply simple chemistry in the aqueous phase and relate changes in functional groups to changes in volatility and subsequent changes in partitioning between the gas and aerosol phases. In our presentation we will explore the sensitivities of the multi-phase system in a box model setting with
NASA Astrophysics Data System (ADS)
Perrin, G.; Soize, C.; Duhamel, D.; Funfschilling, C.
2013-06-01
Due to scaling effects, when dealing with vector-valued random fields, the classical Karhunen-Loève expansion, which is optimal with respect to the total mean square error, tends to favour the components of the random field that have the highest signal energy. When these random fields are to be used in mechanical systems, this phenomenon can introduce undesired biases in the results. This paper therefore presents an adaptation of the Karhunen-Loève expansion that allows us to control these biases and to minimize them. This original decomposition is first analyzed from a theoretical point of view, and is then illustrated on a numerical example.
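The scaling bias described above can be seen in a minimal numerical sketch of the classical (unmodified) truncated Karhunen-Loève expansion: eigendecompose the empirical covariance and keep the leading modes. The synthetic data with unequal component energies is an illustrative assumption that mimics the situation the paper addresses.

```python
import numpy as np

def kl_truncate(samples, m):
    """Rank-m Karhunen-Loeve reconstruction: project centered samples onto
    the m leading eigenvectors of the empirical covariance matrix."""
    mean = samples.mean(axis=0)
    centered = samples - mean
    cov = centered.T @ centered / len(samples)
    _, vecs = np.linalg.eigh(cov)           # eigenvalues in ascending order
    basis = vecs[:, ::-1][:, :m]            # m leading eigenvectors
    return mean + (centered @ basis) @ basis.T

rng = np.random.default_rng(0)
# vector-valued "field" whose first three components carry ~100x the energy
# of the last three -- the classical truncation serves mainly the big ones
data = rng.standard_normal((500, 6)) * np.array([10.0, 10.0, 10.0, 1.0, 1.0, 1.0])
err3 = float(np.mean((data - kl_truncate(data, 3)) ** 2))
err6 = float(np.mean((data - kl_truncate(data, 6)) ** 2))
```

With three modes the residual error sits almost entirely in the low-energy components; the paper's adaptation reweights the decomposition so those components are not systematically sacrificed.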
Matsui, H.; Koike, Makoto; Kondo, Yutaka; Takami, A.; Fast, Jerome D.; Kanaya, Y.; Takigawa, M.
2014-09-16
Organic aerosol (OA) simulations using the volatility basis-set approach were made for East Asia and its outflow region. Model simulations were evaluated through comparisons with OA measured by aerosol mass spectrometers in and around Tokyo (at Komaba and Kisai in summer 2003 and 2004) and over the outflow region in East Asia (at Fukue and Hedo in spring 2009). The simulations with aging processes of organic vapors reasonably well reproduced mass concentrations, temporal variations, and formation efficiency of observed OA at all sites. As OA mass was severely underestimated in the simulations without the aging processes, the oxidations of organic vapors are essential for reasonable OA simulations over East Asia. By considering the aging processes, simulated OA concentrations considerably increased from 0.24 to 1.28 µg m^-3 in the boundary layer over the whole of East Asia. OA formed from the interaction of anthropogenic and biogenic sources was also enhanced by the aging processes. The fraction of controllable OA was estimated to be 87 % of total OA over the whole of East Asia, showing that most of the OA in our simulations formed anthropogenically (controllable). A large portion of biogenic secondary OA (78 % of biogenic secondary OA) formed through the influence of anthropogenic sources. The high fraction of controllable OA in our simulations is likely because anthropogenic emissions are dominant over East Asia and OA formation is enhanced by anthropogenic sources and their aging processes. Both the amounts (from 0.18 to 1.12 µg m^-3) and the fraction (from 75 % to 87 %) of controllable OA were increased by aging processes of organic vapors over East Asia.
Yoshida, Tatsusada; Hayashi, Takahisa; Mashima, Akira; Chuman, Hiroshi
2015-10-01
One of the most challenging problems in computer-aided drug discovery is the accurate prediction of the binding energy between a ligand and a protein. For accurate estimation of the net binding energy ΔEbind in the framework of the Hartree-Fock (HF) theory, it is necessary to estimate two additional energy terms: the dispersion interaction energy (Edisp) and the basis set superposition error (BSSE). We previously reported a simple and efficient dispersion correction, Edisp, to the Hartree-Fock theory (HF-Dtq). In the present study, an approximation procedure for estimating BSSE proposed by Kruse and Grimme, a geometrical counterpoise correction (gCP), was incorporated into HF-Dtq (HF-Dtq-gCP). The relative weights of the Edisp (Dtq) and BSSE (gCP) terms were determined to reproduce ΔEbind calculated with CCSD(T)/CBS or /aug-cc-pVTZ (HF-Dtq-gCP (scaled)). The performance of HF-Dtq-gCP (scaled) was compared with that of B3LYP-D3(BJ)-bCP (dispersion-corrected B3LYP with the Boys and Bernardi counterpoise correction (bCP)), taking ΔEbind (CCSD(T)-bCP) of small non-covalent complexes as a 'gold standard'. As a critical test, HF-Dtq-gCP (scaled)/6-31G(d) and B3LYP-D3(BJ)-bCP/6-31G(d) were applied to the complex model for HIV-1 protease and its potent inhibitor, KNI-10033. The present results demonstrate that HF-Dtq-gCP (scaled) is a useful and powerful remedy for accurately and promptly predicting ΔEbind between a ligand and a protein, albeit a simple correction procedure.
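Dispersion corrections of the general family discussed here (though not the paper's specific Dtq or gCP expressions, which are not reproduced in the abstract) add a damped pairwise -C6/R^6 attraction on top of the mean-field energy. The sketch below uses a generic Fermi-type damping function; the parameters s6, d, r0 and the C6 coefficients are illustrative assumptions.

```python
import math

def e_disp(coords, c6, s6=1.0, d=20.0, r0=3.0):
    """Generic damped pairwise dispersion energy (atomic-unit-style sketch):
    E = -s6 * sum_{i<j} f_damp(R_ij) * C6_ij / R_ij^6, with a Fermi-type
    damping that switches the correction off at short range."""
    e = 0.0
    n = len(coords)
    for i in range(n):
        for j in range(i + 1, n):
            r = math.dist(coords[i], coords[j])
            f_damp = 1.0 / (1.0 + math.exp(-d * (r / r0 - 1.0)))
            e -= s6 * c6[i][j] / r ** 6 * f_damp
    return e

c6_pair = [[0.0, 10.0], [10.0, 0.0]]   # illustrative C6 coefficient
e_near = e_disp([[0.0, 0.0, 0.0], [0.0, 0.0, 4.0]], c6_pair)
e_far = e_disp([[0.0, 0.0, 0.0], [0.0, 0.0, 6.0]], c6_pair)
```

The correction is attractive (negative) and decays rapidly with separation, which is the qualitative behaviour any Edisp term contributes to ΔEbind.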
NASA Astrophysics Data System (ADS)
Maroulis, George
1998-04-01
The electric multipole moments, dipole and quadrupole polarizability and hyperpolarizability of hydrogen chloride have been determined from an extensive and systematic study based on finite-field fourth-order many-body perturbation theory and coupled-cluster calculations. Our best values for the dipole, quadrupole, octopole and hexadecapole moment at the experimental internuclear separation of Re = 2.408645 a0 are μ = 0.4238 e a0, Θ = 2.67 e a0^2, Ω = 3.94 e a0^3, and Φ = 13.37 e a0^4, respectively. For the mean and the anisotropy of the dipole polarizability ααβ we recommend ᾱ = 17.41±0.02 and Δα = 1.60±0.03 e^2 a0^2 Eh^-1. For the mean value of the first dipole hyperpolarizability βαβγ we advance β̄ = -6.8±0.3 e^3 a0^3 Eh^-2. Extensive calculations with a [8s6p6d3f/5s4p2d1f] basis set at the CCSD(T) level of theory yield the R-dependence of the Cartesian components and the mean of the second dipole hyperpolarizability γαβγδ(R)/e^4 a0^4 Eh^-3 around Re as γzzzz(R) = 1907 + 1326(R-Re) + 570(R-Re)^2 + 10(R-Re)^3 - 40(R-Re)^4, γxxxx(R) = 3900 + 747(R-Re) - 65(R-Re)^2 - 38(R-Re)^3 - 7(R-Re)^4, γxxzz(R) = 962 + 222(R-Re) + 88(R-Re)^2 + 49(R-Re)^3 + 5(R-Re)^4, γ̄(R) = 3230 + 841(R-Re) + 151(R-Re)^2 + 21(R-Re)^3 - 9(R-Re)^4, with z as the molecular axis. The present investigation suggests an estimate of (26.7±0.3)×10^2 e^4 a0^4 Eh^-3 for the Hartree-Fock limit of the mean value γ̄ at Re. CCSD(T) calculations with basis sets of [8s6p6d3f/5s4p2d1f] and [9s7p5d4f/6s5p4d1f] size and MP4 calculations with the even larger [15s12p7d3f/12s7p2d1f] give (7.0±0.3)×10^2 e^4 a0^4 Eh^-3 for the electron correlation effects for this property, thus leading to a recommended value of γ̄ = (33.7±0.6)×10^2 e^4 a0^4 Eh^-3. For the quadrupole polarizability Cαβ,γδ/e^2 a0^4 Eh^-1 at Re our best values are Czz,zz = 41.68, Cxz,xz = 26.11, and Cxx,xx = 35.38, calculated with the [9s7p5d4f/6s5p4d1f] basis set at the CCSD(T) level of theory. The following CCSD(T) values were obtained with [8s6p6d3f/5s4p2d1f] at Re: dipole-quadrupole polarizability Aα,βγ/e^2 a0^3 Eh^-1, Az,zz = 14.0, and
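The quoted component expansions can be cross-checked against the reported mean: for a linear molecule the standard isotropic average of the second hyperpolarizability is γ̄ = (3γzzzz + 8γxxxx + 12γxxzz)/15, and evaluating the polynomials reproduces the γ̄(R) expansion to within the rounding of the quoted coefficients.

```python
def poly(coeffs, dr):
    """Evaluate sum_k c_k * dr**k, with dr = R - Re (in a0)."""
    return sum(c * dr ** k for k, c in enumerate(coeffs))

# coefficients of the (R - Re) expansions quoted in the abstract, e^4 a0^4 Eh^-3
g_zzzz = [1907, 1326, 570, 10, -40]
g_xxxx = [3900, 747, -65, -38, -7]
g_xxzz = [962, 222, 88, 49, 5]
g_mean = [3230, 841, 151, 21, -9]

def gamma_mean_from_components(dr):
    """Isotropic average for a linear molecule:
    gamma_bar = (3*g_zzzz + 8*g_xxxx + 12*g_xxzz) / 15."""
    return (3 * poly(g_zzzz, dr) + 8 * poly(g_xxxx, dr)
            + 12 * poly(g_xxzz, dr)) / 15.0
```

At R = Re the components give 3231 versus the quoted γ̄(Re) = 3230, confirming the internal consistency of the fitted expansions.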
Myosin-II sets the optimal response time scale of chemotactic amoeba
NASA Astrophysics Data System (ADS)
Hsu, Hsin-Fang; Westendorf, Christian; Tarantola, Marco; Bodenschatz, Eberhard; Beta, Carsten
2014-03-01
The response dynamics of the actin cytoskeleton to external chemical stimuli plays a fundamental role in numerous cellular functions. One of the key players that governs the dynamics of the actin network is the motor protein myosin-II. Here we investigate the role of myosin-II in the response of the actin system to external stimuli. We used a microfluidic device in combination with a photoactivatable chemoattractant to apply stimuli to individual cells with high temporal resolution. We directly compare the actin dynamics in Dictyostelium discoideum wild type (WT) cells to a knockout mutant that is deficient in myosin-II (MNL). Similar to the WT, a small population of MNL cells showed self-sustained oscillations even in the absence of external stimuli. The actin response of MNL cells to a short pulse of chemoattractant resembles that of the WT during the first 15 sec but is significantly delayed afterward. The amplitude of the dominant peak in the power spectrum from the response time series of MNL cells to periodic stimuli with varying period showed a clear resonance at a forcing period of 36 sec, significantly delayed compared with the resonance at 20 sec found for the WT. This shift indicates an important role of myosin-II in setting the response time scale of motile amoebae. Institute of Physics and Astronomy, University of Potsdam, Karl-Liebknecht-Str. 24/25, 14476 Potsdam, Germany.
Optimization of Geriatric Pharmacotherapy: Role of Multifaceted Cooperation in the Hospital Setting.
Petrovic, Mirko; Somers, Annemie; Onder, Graziano
2016-03-01
Because older patients are more vulnerable to adverse drug-related events, there is a need to ensure appropriate pharmacotherapy in these patients. This narrative review describes approaches to improve pharmacotherapy in older people in the hospital setting. Screening to identify older patients at risk of drug-related problems and adverse drug reactions (ADRs) is the first critical step within a multistep approach to geriatric pharmacotherapy. Two methods that have been developed are the GerontoNet ADR risk score and the Brighton Adverse Drug Reactions Risk (BADRI) model, which take into account a number of factors, the most important of which is the number of medicines. In order to reduce potentially inappropriate prescribing in older patients, different types of interventions exist, such as pharmacist-led medication reviews, educational interventions, computerized decision support systems, and comprehensive geriatric assessment. The effects of these interventions have been studied, sometimes in a multifaceted approach, by combining different techniques. None of the existing interventions shows a clear beneficial effect on patients' health outcomes if applied in isolation; however, when these interventions are combined within the context of a multidisciplinary team, positive effects on patients' health outcomes can be expected. Appropriate geriatric pharmacotherapy, global assessment of patients' clinical and functional parameters, and integration of skills from different healthcare professionals are needed to address medical complexity of older adults. PMID:26884392
Optimal Geometrical Set for Automated Marker Placement to Virtualized Real-Time Facial Emotions
Maruthapillai, Vasanthan; Murugappan, Murugappan
2016-01-01
In recent years, real-time face recognition has been a major topic of interest in developing intelligent human-machine interaction systems. Over the past several decades, researchers have proposed different algorithms for facial expression recognition, but there has been little focus on detection in real-time scenarios. The present work proposes a new algorithmic method of automated marker placement used to classify six facial expressions: happiness, sadness, anger, fear, disgust, and surprise. Emotional facial expressions were captured using a webcam, while the proposed algorithm placed a set of eight virtual markers on each subject’s face. Facial feature extraction methods, including marker distance (distance between each marker to the center of the face) and change in marker distance (change in distance between the original and new marker positions), were used to extract three statistical features (mean, variance, and root mean square) from the real-time video sequence. The initial position of each marker was subjected to the optical flow algorithm for marker tracking with each emotional facial expression. Finally, the extracted statistical features were mapped into corresponding emotional facial expressions using two simple non-linear classifiers, K-nearest neighbor and probabilistic neural network. The results indicate that the proposed automated marker placement algorithm effectively placed eight virtual markers on each subject’s face and gave a maximum mean emotion classification rate of 96.94% using the probabilistic neural network. PMID:26859884
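The three statistical features named above (mean, variance, and root mean square of the marker-to-face-centre distances) are simple to compute once marker coordinates are available; the two-marker example below is illustrative, not the paper's eight-marker geometry.

```python
import math

def marker_features(markers, center):
    """Mean, variance, and RMS of the marker-to-face-center distances,
    the per-frame statistical features described in the abstract."""
    d = [math.dist(m, center) for m in markers]
    n = len(d)
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / n
    rms = math.sqrt(sum(x * x for x in d) / n)
    return mean, var, rms

# illustrative: two markers at distances 1 and 2 from the face centre
mean_d, var_d, rms_d = marker_features([(1.0, 0.0), (0.0, 2.0)], (0.0, 0.0))
```

In the pipeline described, these features are computed per video frame (and again on the change in distance after an expression) and fed to the K-nearest-neighbor or probabilistic neural network classifier.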
NASA Astrophysics Data System (ADS)
Ilma Rahmillah, Fety
2016-01-01
The working environment is one factor that contributes to a worker's performance, especially in continuous and monotonous work. An L9 Taguchi design is used for the inner array of the experiment, which was carried out in the laboratory, and an L4 design for the outer array. Four control variables with three levels each are used to find the optimal combination of working-environment settings. Four responses are also measured to assess the effect of the four control factors. ANOVA results show that the effects of illumination, temperature, and instrumental music on the number of outputs, the number of errors, and the rating of perceived discomfort are significant, with total variance explained of 54.67%, 60.67%, and 75.22%, respectively. The VIKOR method yields the optimal combination of experiment 66 with the setting condition A3-B2-C1-D3: illumination of 325-350 lux, temperature of 24-26 °C, fast-category instrumental music, and 70-80 dB for the intensity of the music being played.
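The study's ANOVA and VIKOR computations are not reproduced in the abstract; as a minimal illustration of how Taguchi inner/outer-array responses are summarised, here is the standard larger-the-better signal-to-noise ratio one would apply to an output-count response replicated across the outer array.

```python
import math

def sn_larger_is_better(ys):
    """Taguchi larger-the-better S/N ratio over outer-array replicates:
    S/N = -10 * log10( mean(1 / y_i^2) ), in dB."""
    return -10.0 * math.log10(sum(1.0 / y ** 2 for y in ys) / len(ys))
```

Each inner-array run gets one S/N value from its outer-array replicates; the level averages of those values then drive the factor-effect (ANOVA) analysis.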
Cho, P S; Lee, S; Marks, R J; Oh, S; Sutlief, S G; Phillips, M H
1998-04-01
For accurate prediction of normal tissue tolerance, it is important that the volumetric information of dose distribution be considered. However, in dosimetric optimization of intensity modulated beams, the dose-volume factor is usually neglected. In this paper we describe two methods of volume-dependent optimization for intensity modulated beams such as those generated by computer-controlled multileaf collimators. The first method uses a volume sensitive penalty function in which fast simulated annealing is used for cost function minimization (CFM). The second technique is based on the theory of projections onto convex sets (POCS) in which the dose-volume constraint is replaced by a limit on integral dose. The ability of the methods to respect the dose-volume relationship was demonstrated by using a prostate example involving partial volume constraints to the bladder and the rectum. The volume sensitive penalty function used in the CFM method can be easily adopted by existing optimization programs. The convex projection method can find solutions in much shorter time with minimal user interaction.
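The POCS idea described above can be sketched with two simple convex sets: per-voxel dose bounds (a box) and a fixed integral dose (a hyperplane), projected onto alternately. The clinical constraint sets in the paper are more elaborate; this is an illustrative reduction.

```python
def project_box(x, lo, hi):
    """Project onto the box lo <= x_i <= hi (per-voxel dose limits)."""
    return [min(max(v, lo), hi) for v in x]

def project_integral(x, total):
    """Project (in L2) onto the hyperplane sum(x) == total (integral dose)."""
    shift = (total - sum(x)) / len(x)
    return [v + shift for v in x]

def pocs(x, lo, hi, total, iters=200):
    """Alternating projections onto the two convex sets; when their
    intersection is non-empty the iterates converge to a point in it."""
    for _ in range(iters):
        x = project_integral(project_box(x, lo, hi), total)
    return x

# illustrative: four voxels, doses bounded in [0, 1], integral dose fixed at 2
x = pocs([3.0, -1.0, 0.0, 0.0], 0.0, 1.0, 2.0)
```

Replacing the hyperplane with a dose-volume-style set is what makes the sets in the paper's second method non-trivial, but the alternating-projection loop is unchanged.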
Efficient Implementation of an Optimal Interpolator for Large Spatial Data Sets
NASA Technical Reports Server (NTRS)
Memarsadeghi, Nargess; Mount, David M.
2007-01-01
Scattered data interpolation is a problem of interest in numerous areas such as electronic imaging, smooth surface modeling, and computational geometry. Our motivation arises from applications in geology and mining, which often involve large scattered data sets and a demand for high accuracy. The method of choice is ordinary kriging, because it is the best linear unbiased estimator. Unfortunately, this interpolant is computationally very expensive to compute exactly. For n scattered data points, computing the value of a single interpolant involves solving a dense linear system of size roughly n x n. This is infeasible for large n. In practice, kriging is solved approximately by local approaches that are based on considering only a relatively small number of points that lie close to the query point. There are many problems with this local approach, however. The first is that determining the proper neighborhood size is tricky, and is usually solved by ad hoc methods such as selecting a fixed number of nearest neighbors or all the points lying within a fixed radius. Such fixed neighborhood sizes may not work well for all query points, depending on the local density of the point distribution. Local methods also suffer from the problem that the resulting interpolant is not continuous. Meyer showed that while kriging produces smooth continuous surfaces, it has zero-order continuity along its borders. Thus, at interface boundaries where the neighborhood changes, the interpolant behaves discontinuously. Therefore, it is important to consider and solve the global system for each interpolant. However, solving such large dense systems for each query point is impractical. Recently a more principled approach to approximating kriging has been proposed, based on a technique called covariance tapering. The problems arise from the fact that the covariance functions that are used in kriging have global support. Our implementations combine, utilize, and enhance a number of different
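The dense n x n system mentioned above is the ordinary-kriging system: covariances between data points, bordered by a Lagrange row that forces the weights to sum to one (the unbiasedness condition). The exponential covariance model below is an illustrative choice, not the one used in the paper.

```python
import numpy as np

def ordinary_kriging(pts, vals, query, cov=lambda r: np.exp(-r)):
    """Predict at `query` by solving the full (n+1)x(n+1) ordinary-kriging
    system exactly -- the dense solve whose O(n^3) cost motivates the
    local and tapering approximations discussed in the abstract."""
    n = len(pts)
    A = np.ones((n + 1, n + 1))
    A[n, n] = 0.0                      # Lagrange-multiplier corner
    for i in range(n):
        for j in range(n):
            A[i, j] = cov(np.linalg.norm(pts[i] - pts[j]))
    b = np.ones(n + 1)                 # last entry enforces sum(weights) == 1
    b[:n] = [cov(np.linalg.norm(p - query)) for p in pts]
    weights = np.linalg.solve(A, b)[:n]
    return float(weights @ vals)

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
vals = np.array([1.0, 2.0, 3.0, 4.0])
at_datum = ordinary_kriging(pts, vals, pts[0])                 # exact at data
constant = ordinary_kriging(pts, np.full(4, 5.0), np.array([0.3, 0.7]))
```

Because the weights sum to one, a constant field is reproduced exactly, and the interpolant honours the data points; covariance tapering sparsifies A so this solve scales to large n.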
Wang, Zheng-ming; Wang, Wei-wei
2009-07-01
A novel fast and adaptive method for synthetic aperture radar (SAR) superresolution imaging is developed. Based on the point-scattering model in the phase history domain, a dictionary is constructed so that the superresolution imaging process can be converted into a problem of sparse parameter estimation. The approximate orthogonality of this dictionary is established by theoretical derivation and experimental verification. Based on the orthogonality of the dictionary, we propose a fast algorithm for basis selection. Meanwhile, a threshold for obtaining the number and positions of the scattering centers is determined automatically from the inner-product curves of the bases and the observed data. Furthermore, the sensitivity of the estimation performance to the threshold is analyzed. To reduce the burden of computation and memory, a simplified superresolution imaging process is designed according to the characteristics of the imaging parameters. The experimental results on simulated images and an MSTAR image illustrate the validity of this method and its robustness at high noise levels. Compared with the traditional regularization method with a sparsity constraint, the proposed method incurs lower computational complexity and adapts better.
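The approximate orthogonality of the dictionary is what licenses a one-shot selection rule: correlate each atom with the observed data and keep those above the threshold. The toy example below uses an exactly orthonormal dictionary and an illustrative signal; the paper's dictionary, threshold rule, and data are of course more involved.

```python
def select_basis(dictionary, signal, threshold):
    """Keep the indices of atoms whose inner product with the data exceeds
    the threshold; a valid one-shot selection when the dictionary is
    (approximately) orthonormal, since atoms then barely interfere."""
    dots = [sum(a * s for a, s in zip(atom, signal)) for atom in dictionary]
    return [i for i, d in enumerate(dots) if abs(d) > threshold]

# toy orthonormal dictionary; observed data = two strong "scatterers" + noise
atoms = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
observed = [3.0, 0.05, -2.0, 0.0]
picked = select_basis(atoms, observed, threshold=1.0)
```

The number and positions of the picked atoms directly give the estimated number and positions of scattering centers, which is what the automatic threshold in the paper determines.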
Solomonik, Victor G; Stanton, John F; Boggs, James E
2005-03-01
The molecular equilibrium geometries, quadratic and cubic force constants, vibrational frequencies, and infrared intensities of scandium and iron trifluorides are determined ab initio with a sequence of atomic natural orbital basis sets using the CCSD(T) treatment of electron correlation. The largest basis set, of spdfghi quality, contains 462 contracted Gaussian functions. Relativistic corrections are applied to compute the equilibrium geometries and vibrational frequencies. The cubic force constants are used to estimate vibrational corrections to the effective r_g internuclear distances determined in gas electron diffraction experiments. The computed molecular properties are extrapolated to the complete basis-set limit. The predicted values are compared to the available experimental data; uncertainties and inconsistencies in these data are then discussed. PMID:15836143
Snijders, J.G.; Vernooijs, P.; Baerends, E.J.
1981-11-01
Basis-set expansions are presented for the Hartree-Fock-Slater (HFS) orbitals of the neutral elements Fr-Lr (Z = 87-103). The Slater-type functions used in these expansions are found by an efficient fitting procedure to the Herman-Skillman numerical HFS orbitals. The expansions are of single-zeta, double-zeta, and triple-zeta-valence (extended) quality. Comparisons of orbital energies with the numerical values are given for all elements. Similar basis sets for all the remaining elements are available on request.
Huang, Xinchuan; Valeev, Edward F; Lee, Timothy J
2010-12-28
One-particle basis set extrapolation is compared with one of the new R12 methods for computing highly accurate quartic force fields (QFFs) and spectroscopic data, including molecular structures, rotational constants, and vibrational frequencies for the H(2)O, N(2)H(+), NO(2)(+), and C(2)H(2) molecules. In general, agreement between the spectroscopic data computed from the best R12 and basis set extrapolation methods is very good with the exception of a few parameters for N(2)H(+) where it is concluded that basis set extrapolation is still preferred. The differences for H(2)O and NO(2)(+) are small and it is concluded that the QFFs from both approaches are more or less equivalent in accuracy. For C(2)H(2), however, a known one-particle basis set deficiency for C-C multiple bonds significantly degrades the quality of results obtained from basis set extrapolation and in this case the R12 approach is clearly preferred over one-particle basis set extrapolation. The R12 approach used in the present study was modified in order to obtain high precision electronic energies, which are needed when computing a QFF. We also investigated including core-correlation explicitly in the R12 calculations, but conclude that current approaches are lacking. Hence core-correlation is computed as a correction using conventional methods. Considering the results for all four molecules, it is concluded that R12 methods will soon replace basis set extrapolation approaches for high accuracy electronic structure applications such as computing QFFs and spectroscopic data for comparison to high-resolution laboratory or astronomical observations, provided one uses a robust R12 method as we have done here. The specific R12 method used in the present study, CCSD(T)(R12), incorporated a reformulation of one intermediate matrix in order to attain machine precision in the electronic energies. Final QFFs for N(2)H(+) and NO(2)(+) were computed, including basis set extrapolation, core-correlation, scalar
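The one-particle basis set extrapolation that this abstract compares against R12 is commonly done with a two-point inverse-cubic formula. A minimal sketch, assuming Helgaker-style X⁻³ convergence of the correlation energy with the basis-set cardinal number (a standard scheme, not necessarily the exact variant the authors used):

```python
def cbs_extrapolate(e_x, x, e_y, y):
    """Two-point inverse-cubic extrapolation of correlation energies,
    assuming E(X) = E_CBS + A / X**3, where X and Y are the cardinal
    numbers of the two correlation-consistent basis sets (X > Y)."""
    return (x**3 * e_x - y**3 * e_y) / (x**3 - y**3)

# Synthetic energies generated from E_CBS = -1.0 hartree, A = 0.5:
e_tz = -1.0 + 0.5 / 3**3   # "triple-zeta" value
e_qz = -1.0 + 0.5 / 4**3   # "quadruple-zeta" value
e_cbs = cbs_extrapolate(e_qz, 4, e_tz, 3)
print(e_cbs)   # recovers -1.0 exactly for data obeying the model
```

Because the two-point formula eliminates the leading A/X³ error term exactly, it recovers the model limit from two finite-basis values; real correlation energies only approximately follow the model, which is the source of the residual basis-set error R12 methods avoid.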
NASA Astrophysics Data System (ADS)
Kleidon, Axel; Renner, Maik
2016-04-01
, which then links this thermodynamic approach to optimality in vegetation. We also contrast this approach to common, semi-empirical approaches of surface-atmosphere exchange and discuss how thermodynamics may set a broader range of transport limitations and optimality in the soil-plant-atmosphere system.
Ha, Linh Khanh; Krüger, Jens; Dihl Comba, João Luiz; Silva, Cláudio T; Joshi, Sarang
2012-06-01
Image population analysis is the class of statistical methods that plays a central role in understanding the development, evolution, and disease of a population. However, these techniques often require excessive computational power and memory, demands that are compounded by a large number of volumetric inputs. Restricted access to supercomputing power limits their influence in general research and practical applications. In this paper we introduce ISP, an Image-Set Processing streaming framework that harnesses the processing power of commodity heterogeneous CPU/GPU systems and attempts to solve this computational problem. In ISP, we introduce specially designed streaming algorithms and data structures that provide an optimal solution for out-of-core multi-image processing problems both in terms of memory usage and computational efficiency. ISP makes use of the asynchronous execution mechanism supported by parallel heterogeneous systems to efficiently hide the inherent latency of the processing pipeline of out-of-core approaches. Consequently, with computationally intensive problems, the ISP out-of-core solution can achieve the same performance as the in-core solution. We demonstrate the efficiency of the ISP framework on synthetic and real datasets. PMID:22291156
Sunagawa, Yoichi; Sono, Shogo; Katanasaka, Yasufumi; Funamoto, Masafumi; Hirano, Sae; Miyazaki, Yusuke; Hojo, Yuya; Suzuki, Hidetoshi; Morimoto, Eriko; Marui, Akira; Sakata, Ryuzo; Ueno, Morio; Kakeya, Hideaki; Wada, Hiromichi; Hasegawa, Koji; Morimoto, Tatsuya
2014-01-01
A natural p300-specific histone acetyltransferase inhibitor, curcumin, may have a therapeutic potential for heart failure. However, a study of curcumin to identify an appropriate dose for heart failure has yet to be performed. Rats were subjected to a left coronary artery ligation. One week later, rats with a moderate severity of myocardial infarction (MI) were randomly assigned to 4 groups receiving the following: a solvent as a control, a low dose of curcumin (0.5 mg∙kg(-1)∙day(-1)), a medium dose of curcumin (5 mg∙kg(-1)∙day(-1)), or a high dose of curcumin (50 mg∙kg(-1)∙day(-1)). Daily oral treatment was continued for 6 weeks. After treatment, left ventricular (LV) fractional shortening was dose-dependently improved in the high-dose (25.2% ± 1.6%, P < 0.001 vs. vehicle) and medium-dose (19.6% ± 2.4%) groups, but not in the low-dose group (15.5% ± 1.4%) compared with the vehicle group (15.1% ± 0.8%). The histological cardiomyocyte diameter and perivascular fibrosis as well as echocardiographic LV posterior wall thickness dose-dependently decreased in the groups receiving high and medium doses. The beneficial effects of oral curcumin on the post-MI LV systolic function are lower at 5 compared to 50 mg∙kg(-1)∙day(-1) and disappear at 0.5 mg∙kg(-1)∙day(-1). To clinically apply curcumin therapy for heart failure patients, a precise, optimal dose-setting study is required.
Guo, Hui; Liu, Wen-Ya; He, Xiao-Ye; Zhou, Xiao-Shan; Zeng, Qun-Li
2013-01-01
Objective The quality and radiation dose of different tube voltage settings for chest digital radiography (DR) were compared in a series of pediatric age groups. Materials and Methods Forty-five hundred children aged 0-14 years (yr) were randomly divided into four groups according to the tube voltage protocols for chest DR: lower kilovoltage potential (kVp) (A), intermediate kVp (B), and higher kVp (C) groups, and the fixed high kVp group (controls). The results were analyzed among five age groups (0-1 yr, 1-3 yr, 3-7 yr, 7-11 yr and 11-14 yr). The dose area product (DAP) and visual grading analysis score (VGAS) were determined and compared using one-way analysis of variance. Results The mean DAP of protocol C was significantly lower than those of protocols A, B and the controls (p < 0.05). DAP was higher in protocol A than in the controls (p < 0.001), but was not statistically significantly different between B and the controls (p = 0.976). Mean VGAS was lower in the controls than in all three protocols (p < 0.001 for all). Mean VGAS did not differ between protocols A and B (p = 0.334), but was lower in protocol C than in A (p = 0.008) and B (p = 0.049). Conclusion Protocol C (higher kVp) may help optimize the trade-off between radiation dose and image quality and may be acceptable for use across pediatric age groups based on these results. PMID:23323043
NASA Astrophysics Data System (ADS)
Kobus, J.; Moncrieff, D.; Wilson, S.
2007-03-01
We investigate the accuracy with which the electric dipole polarizability, αzz, and the hyperpolarizability, βzzz, can be calculated by using the algebraic approximation, i.e. finite basis set expansions, and by means of the finite difference method in calculations for the ground states of the 14 electron systems N2, CO and BF within the Hartree-Fock model at their respective experimental equilibrium geometries. For a well-chosen grid, the finite difference technique can provide Hartree-Fock energy and dipole moment expectation values approaching machine precision which can be used to assess the accuracy of corresponding calculations carried out within the algebraic approximation. The finite field approximation is used to determine polarizabilities and hyperpolarizabilities from finite difference Hartree-Fock dipole moment expectation values. The results are compared with finite basis set calculations of the corresponding quantities which are carried out analytically using coupled perturbed Hartree-Fock theory. For the N2 molecule, the Hartree-Fock polarizability is found to be 14.9512 au within the finite basis set approximation and 14.945 au within the finite difference approach. For the CO molecule, the corresponding results are 14.4668 au and 14.4668 au, whilst for the BF molecule the values are 16.6450 au and 16.6450 au, respectively. The Hartree-Fock hyperpolarizability of the CO molecule is found to be 31.4081 au and 31.411 au within the finite basis set and finite difference approximations, respectively. The corresponding hyperpolarizability values for the BF molecule are 63.9687 au and 63.969 au, respectively. This paper is dedicated to Victor R Saunders, on his official retirement from Daresbury Laboratory.
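The finite field procedure described here, differentiating dipole moment expectation values with respect to an applied field, can be illustrated with central differences. The quadratic model dipole below is synthetic (the numbers merely echo the magnitudes in the abstract), and sign and ordering conventions for β vary between sources:

```python
def finite_field_alpha_beta(mu, f):
    """Central-difference estimates of the polarizability alpha = d(mu)/dF
    and the hyperpolarizability beta = d2(mu)/dF2, from dipole moments
    evaluated at field strengths -f, 0, +f (the finite field approach,
    with the convention mu(F) = mu0 + alpha*F + (1/2)*beta*F**2 + ...)."""
    alpha = (mu(+f) - mu(-f)) / (2.0 * f)
    beta = (mu(+f) - 2.0 * mu(0.0) + mu(-f)) / f**2
    return alpha, beta

# Synthetic dipole curve with known alpha = 14.95 au, beta = 31.4 au:
mu_model = lambda F: 0.1 + 14.95 * F + 0.5 * 31.4 * F**2
alpha, beta = finite_field_alpha_beta(mu_model, f=1e-3)
```

For a truly quadratic model the central differences are exact; for real calculations the field strength must balance truncation error (too large f) against precision loss in the energies or dipoles (too small f), which is exactly why near machine-precision Hartree-Fock values matter in the study above.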
Egetemeir, Johanna; Stenneken, Prisca; Koehler, Saskia; Fallgatter, Andreas J.; Herrmann, Martin J.
2011-01-01
Many every-day life situations require two or more individuals to execute actions together. Assessing brain activation during naturalistic tasks to uncover relevant processes underlying such real-life joint action situations has remained a methodological challenge. In the present study, we introduce a novel joint action paradigm that enables the assessment of brain activation during real-life joint action tasks using functional near-infrared spectroscopy (fNIRS). We monitored brain activation of participants who coordinated complex actions with a partner sitting opposite them. Participants performed table setting tasks, either alone (solo action) or in cooperation with a partner (joint action), or they observed the partner performing the task (action observation). Comparing joint action and solo action revealed stronger activation (higher [oxy-Hb]-concentration) during joint action in a number of areas. Among these were areas in the inferior parietal lobule (IPL) that additionally showed an overlap of activation during action observation and solo action. Areas with such a close link between action observation and action execution have been associated with action simulation processes. The magnitude of activation in these IPL areas also varied according to joint action type and its respective demand on action simulation. The results validate fNIRS as an imaging technique for exploring the functional correlates of interindividual action coordination in real-life settings and suggest that coordinating actions in real-life situations requires simulating the actions of the partner. PMID:21927603
Sherrill, C David; Takatani, Tait; Hohenstein, Edward G
2009-09-24
Large, correlation-consistent basis sets have been used to very closely approximate the coupled-cluster singles, doubles, and perturbative triples [CCSD(T)] complete basis set potential energy curves of several prototype nonbonded complexes, the sandwich, T-shaped, and parallel-displaced benzene dimers, the methane-benzene complex, the H2S-benzene complex, and the methane dimer. These benchmark potential energy curves are used to assess the performance of several methods for nonbonded interactions, including various spin-component-scaled second-order perturbation theory (SCS-MP2) methods, the spin-component-scaled coupled-cluster singles and doubles method (SCS-CCSD), density functional theory empirically corrected for dispersion (DFT-D), and the meta-generalized-gradient approximation functionals M05-2X and M06-2X. These approaches generally provide good results for the test set, with the SCS methods being somewhat more robust. M05-2X underbinds for the test cases considered, while the performances of DFT-D and M06-2X are similar. Density fitting, dual basis, and local correlation approximations all introduce only small errors in the interaction energies but can speed up the computations significantly, particularly when used in combination.
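Spin-component scaling as used in SCS-MP2 simply reweights the opposite-spin and same-spin pair correlation energies. A sketch with Grimme's original coefficients as defaults (the SCS variants benchmarked above differ only in the choice of these two scale factors):

```python
def scs_mp2_correlation(e_os, e_ss, c_os=6/5, c_ss=1/3):
    """Spin-component-scaled MP2 correlation energy.

    e_os, e_ss: opposite-spin and same-spin MP2 pair correlation
    energies (hartree). Defaults are Grimme's original SCS-MP2
    coefficients; SCS variants swap in different c_os, c_ss values.
    """
    return c_os * e_os + c_ss * e_ss

# Example with made-up pair energies (hartree):
e_scs = scs_mp2_correlation(-0.30, -0.10)
print(e_scs)
```

Because only the two scale factors change between variants, re-evaluating a whole family of SCS methods from one MP2 calculation is essentially free, which is part of their appeal for benchmark studies like this one.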
Hendriksen, Ilse C. E.; White, Lisa J.; Veenemans, Jacobien; Mtove, George; Woodrow, Charles; Amos, Ben; Saiwaew, Somporn; Gesase, Samwel; Nadjm, Behzad; Silamut, Kamolrat; Joseph, Sarah; Chotivanich, Kesinee; Day, Nicholas P. J.; von Seidlein, Lorenz; Verhoef, Hans; Reyburn, Hugh; White, Nicholas J.; Dondorp, Arjen M.
2013-01-01
Background. In malaria-endemic settings, asymptomatic parasitemia complicates the diagnosis of malaria. Histidine-rich protein 2 (HRP2) is produced by Plasmodium falciparum, and its plasma concentration reflects the total body parasite burden. We aimed to define the malaria-attributable fraction of severe febrile illness, using the distributions of plasma P. falciparum HRP2 (PfHRP2) concentrations from parasitemic children with different clinical presentations. Methods. Plasma samples were collected from and peripheral blood slides prepared for 1435 children aged 6−60 months in communities and a nearby hospital in northeastern Tanzania. The study population included children with severe or uncomplicated malaria, asymptomatic carriers, and healthy control subjects who had negative results of rapid diagnostic tests. The distributions of plasma PfHRP2 concentrations among the different groups were used to model severe malaria-attributable disease. Results. The plasma PfHRP2 concentration showed a close correlation with the severity of infection. PfHRP2 concentrations of >1000 ng/mL denoted a malaria-attributable fraction of severe disease of 99% (95% credible interval [CI], 96%–100%), with a sensitivity of 74% (95% CI, 72%–77%), whereas a concentration of <200 ng/mL denoted severe febrile illness of an alternative diagnosis in >10% (95% CI, 3%–27%) of patients. Bacteremia was more common among patients in the lowest and highest PfHRP2 concentration quintiles. Conclusions. The plasma PfHRP2 concentration defines malaria-attributable disease and distinguishes severe malaria from coincidental parasitemia in African children in a moderate-to-high transmission setting. PMID:23136222
Carver, Charles S.; Scheier, Michael F.; Segerstrom, Suzanne C.
2010-01-01
Optimism is an individual difference variable that reflects the extent to which people hold generalized favorable expectancies for their future. Higher levels of optimism have been related prospectively to better subjective well-being in times of adversity or difficulty (i.e., controlling for previous well-being). Consistent with such findings, optimism has been linked to higher levels of engagement coping and lower levels of avoidance, or disengagement, coping. There is evidence that optimism is associated with taking proactive steps to protect one's health, whereas pessimism is associated with health-damaging behaviors. Consistent with such findings, optimism is also related to indicators of better physical health. The energetic, task-focused approach that optimists take to goals also relates to benefits in the socioeconomic world. Some evidence suggests that optimism relates to more persistence in educational efforts and to higher later income. Optimists also appear to fare better than pessimists in relationships. Although there are instances in which optimism fails to convey an advantage, and instances in which it may convey a disadvantage, those instances are relatively rare. In sum, the behavioral patterns of optimists appear to provide models of living for others to learn from. PMID:20170998
NASA Astrophysics Data System (ADS)
Yang, Yue; Weaver, Michael N.; Merz, Kenneth M.
2009-08-01
Computational chemists have long demonstrated great interest in finding ways to reliably and accurately predict the molecular properties of transition-metal-containing complexes. This study is a continuation of our validation efforts for density functional theory (DFT) methods applied to transition-metal-containing systems (Riley, K. E.; Merz, K. M., Jr. J. Phys. Chem. 2007, 111, 6044-6053). In our previous work we examined DFT using all-electron basis sets, but approaches incorporating effective core potentials (ECPs) are effective in reducing computational expense. With this in mind, we expanded our efforts to evaluate the performance of an ECP-based basis set with the same set of density functionals. Indeed, employing an ECP basis such as LANL2DZ (Los Alamos National Laboratory 2 double ζ) for transition metals, while using all-electron basis sets for all non-transition-metal atoms, has become more and more popular in computations on transition-metal-containing systems. In this study, we assess the performance of 12 different DFT functionals, from the GGA (generalized gradient approximation), hybrid-GGA, meta-GGA, and hybrid-meta-GGA classes, along with the 6-31+G** + LANL2DZ (on the transition metal) mixed basis set in predicting two important molecular properties, heats of formation and ionization potentials, for 94 and 58 systems, respectively, containing first-row (3d) transition metals from Ti to Zn. An interesting note is that the inclusion of the exact exchange term in density functional methods generally increases the accuracy of ionization potential prediction for the hybrid-GGA methods but decreases the reliability of determining the heats of formation of transition-metal-containing complexes for all hybrid density functional methods. The hybrid-GGA functional B3LYP gives the best performance in predicting the ionization potentials, while the
Yao, Weiguang; Farr, Jonathan B
2015-01-01
Individual QA for IMRT/VMAT plans is required by protocols. Sometimes plans cannot pass the institute's QA criteria. For the Eclipse treatment planning system (TPS) with rounded leaf-end multileaf collimator (MLC), one practical way to improve the agreement of planned and delivered doses is to tune the value of dosimetric leaf gap (DLG) in the TPS from the measured DLG. We propose that this step may be necessary due to the complexity of the MLC system, including dosimetry of small fields and the tongue-and-groove (T&G) effects, and report our use of test fields to obtain linac-specific optimal DLGs in TPSs. More than 20 original patient plans were reoptimized with the linac-specific optimal DLG value. We examined the distribution of gaps and T&G extensions in typical patient plans and the effect of using the optimal DLG on the distribution. The QA pass rate of patient plans using the optimal DLG was investigated. The dose-volume histograms (DVHs) of targets and organs at risk were checked. We tested three MLC systems (Varian millennium 120 MLC, high-definition 120 MLC, and Siemens 160 MLC) installed in four Varian linear accelerators (linacs) (TrueBEAM STx, Trilogy, Clinac 2300 iX, and Clinac 21 EX) and 1 Siemens linac (Artiste). With an optimal DLG, the individual QA for all those patient plans passed the institute's criteria (95% in DTA test or gamma test with 3%/3 mm/10%), even though most of these plans had failed to pass QA when using original DLGs optimized from typical patient plans or from the optimization process (automodeler) of Pinnacle TPS. Using either our optimal DLG or one optimized from typical patient plans or from the Pinnacle optimization process yielded similar DVHs. PMID:26218999
Singh, Poonam; Dhiman, Ramesh C
2016-01-01
In India, malaria transmission is prevalent across diverse geologies and ecologies. Temperature is one of the key determinants of malaria transmission, resulting in lower endemicity in some areas than in others. Using a degree-day model, we estimated the maximum and minimum possible number of days needed to complete a malarial sporogonic cycle (SC), in addition to the possible number of SCs, for Plasmodium vivax and Plasmodium falciparum under two different ecological settings with either low or high endemicity for malaria at different elevations. In Raikhalkhatta (in the Himalayan foothills), SCs were modeled as not occurring from November to February, whereas in Gandhonia village (forested hills), all but one month were suitable for malarial SCs. A minimum of 6 days and a maximum of 46 days were required for completion of one SC. Forested hilly areas were more suitable for malaria parasite development in terms of SCs (25 versus 21 for P. falciparum and 32 versus 27 for P. vivax). Degree-days also provided a climatic explanation for the current transmission of malaria at different elevations. The calculation of degree-days and possible SCs has applications in the regional analysis of transmission dynamics and the management of malaria in view of climate change.
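A degree-day calculation of this kind accumulates the daily temperature excess over a developmental threshold until the requirement for one sporogonic cycle is met. The Detinova-style constants in the example (111 degree-days above 16 °C for P. falciparum) are commonly cited values assumed here for illustration, not taken from this paper:

```python
def days_to_complete_cycle(daily_temps, t_min, dd_required):
    """Count days until accumulated degree-days above the developmental
    threshold t_min reach dd_required (one sporogonic cycle).
    Returns None if the cycle cannot complete within the series."""
    accumulated = 0.0
    for day, temp in enumerate(daily_temps, start=1):
        accumulated += max(0.0, temp - t_min)   # degree-days contributed today
        if accumulated >= dd_required:
            return day
    return None

# Assumed constants for P. falciparum: 111 degree-days above 16 C.
warm = [26.0] * 60    # constant 26 C -> 10 degree-days per day
print(days_to_complete_cycle(warm, t_min=16.0, dd_required=111.0))  # -> 12
```

Running the same calculation over each month's temperature series at a site yields both the cycle length (the 6-46 day range above) and, by repetition, the number of cycles possible per year.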
Gao, Wei; Feng, Huajie; Xuan, Xiaopeng; Chen, Liuping
2012-10-01
An assessment study is presented of energy decomposition analysis (EDA) in combination with DFT including the revised dispersion correction (DFT-D3) and Slater-type orbital (STO) basis sets. Little is known about the performance of EDA + DFT-D3 with STOs. In this assessment, the approach was applied to calculate noncovalent interaction energies and their corresponding components. Complexes in the S22 set were used to evaluate the performance of EDA in conjunction with four representative GGA functionals of DFT-D3 (BP86-D3, BLYP-D3, PBE-D3 and SSB-D3) and three STO basis sets ranging in complexity from DZP and TZ2P to QZ4P. The results showed that the EDA + BLYP-D3/TZ2P approach performs better not only in quantitatively calculating noncovalent interaction energies but also in qualitatively analyzing the corresponding energy components. This approach (EDA + BLYP-D3/TZ2P) was therefore applied to two representative large-system complexes, porphine dimers and fullerene aggregates, to gain better insight into their binding characteristics.
Orlando, Roberto; Lacivita, Valentina; Bast, Radovan; Ruud, Kenneth
2010-06-28
The computational scheme for the evaluation of the second-order electric susceptibility tensor in periodic systems, recently implemented in the CRYSTAL code within the coupled perturbed Hartree-Fock (HF) scheme, has been extended to local-density, gradient-corrected, and hybrid density functionals (coupled-perturbed Kohn-Sham) and applied to a set of cubic and hexagonal semiconductors. The method is based on the use of local basis sets and analytical calculation of derivatives. The high-frequency dielectric tensor (epsilon(infinity)) and second-harmonic generation susceptibility (d) have been calculated with hybrid functionals (PBE0 and B3LYP) and the HF approximation. Results are compared with the values of epsilon(infinity) and d obtained from previous plane-wave local density approximation or generalized gradient approximation calculations and from experiment. The agreement is in general good, although comparison with experiment is affected by a certain degree of uncertainty implicit in the experimental techniques.
Saltychev, Mikhail; Tarvonen-Schröder, Sinikka; Eskola, Merja; Laimi, Katri
2013-06-01
To evaluate the adequacy of abbreviated versions of the International Classification of Functioning, Disability and Health (ICF) (the WHO ICF Checklist and the ICF Comprehensive Core Set for Stroke) with respect to the specific clinical needs of a stroke rehabilitation unit before their implementation at a practical level. Common descriptions of functional limitations were identified from the patient records of 10 consecutive subacute stroke patients referred to an inpatient multiprofessional rehabilitation unit of a university hospital. These descriptions were then converted into ICF categories, and the list was compared with the WHO ICF Checklist and the ICF Comprehensive and Brief Core Sets for Stroke developed by the ICF Research Branch. From the study population (50% women), 71 different second-level ICF categories were identified, averaging 36.4 categories/patient (SD 5.8, range 28-46). Except for one category, all of the categories identified were also found in the ICF Comprehensive Core Set for Stroke. Of the categories identified, 49 (69%) were found in the WHO ICF Checklist. All except one category included in the ICF Brief Core Set for Stroke were also in our list. The Comprehensive Core Set for Stroke was found to be a good potential starting point for the practical implementation of the ICF in a stroke rehabilitation unit.
Lewis, Robert Michael (College of William and Mary, Williamsburg, VA); Torczon, Virginia Joanne (College of William and Mary, Williamsburg, VA); Kolda, Tamara Gibson
2006-08-01
We consider the solution of nonlinear programs in the case where derivatives of the objective function and nonlinear constraints are unavailable. To solve such problems, we propose an adaptation of a method due to Conn, Gould, Sartenaer, and Toint that proceeds by approximately minimizing a succession of linearly constrained augmented Lagrangians. Our modification is to use a derivative-free generating set direct search algorithm to solve the linearly constrained subproblems. The stopping criterion proposed by Conn, Gould, Sartenaer and Toint for the approximate solution of the subproblems requires explicit knowledge of derivatives. Such information is presumed absent in the generating set search method we employ. Instead, we show that stationarity results for linearly constrained generating set search methods provide a derivative-free stopping criterion, based on a step-length control parameter, that is sufficient to preserve the convergence properties of the original augmented Lagrangian algorithm.
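A generating set search of the kind used for the subproblems can be sketched as compass search over the coordinate directions, with the step-length control parameter doubling as the derivative-free stopping test that the abstract describes. This is a minimal unconstrained illustration, not the authors' exact linearly constrained algorithm:

```python
def generating_set_search(f, x0, step=1.0, tol=1e-6, max_iter=10_000):
    """Derivative-free compass search on R^n using the coordinate
    directions +/- e_i as the generating set. The search contracts the
    step length only when no direction improves f; a small step length
    then certifies approximate stationarity without any derivatives."""
    x = list(x0)
    it = 0
    while step > tol and it < max_iter:
        it += 1
        improved = False
        for i in range(len(x)):
            for sign in (+1.0, -1.0):
                trial = list(x)
                trial[i] += sign * step
                if f(trial) < f(x):     # accept any improving poll point
                    x = trial
                    improved = True
        if not improved:
            step *= 0.5                 # contract; this drives the stopping test
    return x, step

# Minimize a smooth quadratic; the search converges to (1, -2).
xmin, final_step = generating_set_search(
    lambda v: (v[0] - 1.0)**2 + (v[1] + 2.0)**2, [0.0, 0.0])
```

The returned `final_step` is precisely the step-length control parameter: in the augmented Lagrangian scheme above, each subproblem is declared approximately solved once this parameter falls below a tolerance, replacing the derivative-based test of Conn, Gould, Sartenaer, and Toint.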
Duerr, Fabian; Benítez, Pablo; Miñano, Juan C; Meuret, Youri; Thienpont, Hugo
2012-02-27
In this work, a new two-dimensional optics design method is proposed that enables the coupling of three ray sets with two lens surfaces. The method is especially important for optical systems designed for wide field of view and with clearly separated optical surfaces. Fermat's principle is used to deduce a set of functional differential equations fully describing the entire optical system. The presented general analytic solution makes it possible to calculate the lens profiles. Ray tracing results for calculated 15th order Taylor polynomials describing the lens profiles demonstrate excellent imaging performance and the versatility of this new analytic design method. PMID:22418364
Allsman, R.; Barrett, K.; Busby, L.; Chiu, Y.; Crotinger, J.; Dubois, B.; Dubois, P.F.; Langdon, B.; Motteler, Z.C.; Takemoto, J.; Taylor, S.; Willmann, P.; Wilson, S. )
1993-08-01
BASIS9.4 is a system for developing interactive computer programs in Fortran, with some support for C and C++ as well. Using BASIS9.4 you can create a program that has a sophisticated programming language as its user interface, so that the user can set, calculate with, and plot all the major variables in the program. The program author writes only the scientific part of the program; BASIS9.4 supplies an environment in which to exercise that scientific programming, including an interactive language, an interpreter, graphics, terminal logs, error recovery, macros, saving and retrieving variables, formatted I/O, and online documentation.
NASA Astrophysics Data System (ADS)
Dattani, Nikesh S.; Sharma, Sandeep; Alavi, Ali
2016-06-01
Being the simplest uncharged homonuclear dimer after H₂ that has a stable ground state, Li₂ is one of the most important benchmark systems for theory and experiment. In 1930, Delbruck used Li₂ to test his theory of homopolar binding, and it was used again and again as a prototype to test what have now become some of the most ubiquitous concepts in molecular physics (LCAO, SCF, MO, just to name a few). Experimentally, Roscoe and Schuster studied alkali dimers as far back as 1874. At the dawn of quantum mechanics, the emerging types of spectroscopic analyses we still use today were tested on Li₂ in the labs of Wurm (1928), Harvey (1929), Lewis (1931), and many others, independently. Li₂ was at the centre of the development of PFOODR in the 80s and PAS in the 90s; and lithium Bose-Einstein condensates were announced only one month after the Nobel Prize winning BEC announcement in 1995. Even now in the 2010s, numerous experimental and theoretical studies on Li have tested QED up to the 7th power of the fine structure constant. Li₂ has also been of interest to sub-atomic physicists, as it was spectroscopic measurements on ⁷Li₂ that determined the spin of ⁷Li to be 3/2 in 1931; and Li₂ was proposed in 2014 as a candidate for the first "halo nucleonic molecule". The lowest triplet state a(1³Σᵤ⁺) is an excellent benchmark system for all newly emerging ab initio techniques because it has only 6 electrons, its potential is only 334 cm⁻¹ deep, it avoids harsh complications from spin-orbit coupling, and it is the deepest potential for which all predicted vibrational energy levels have been observed with 0.0001 cm⁻¹ precision. However, the current best ab initio potentials do not even yield all vibrational energy spacings correct to within 1 cm⁻¹. This could be because the calculation was only done with a cc-pV5Z basis set, or because the QCISD(T,full) method that the authors used only considered triple excitations, while a full CI calculation should include up to hexuple
NASA Astrophysics Data System (ADS)
Merthe, Daniel J.; Yashchuk, Valeriy V.; Goldberg, Kenneth A.; Kunz, Martin; Tamura, Nobumichi; McKinney, Wayne R.; Artemiev, Nikolay A.; Celestre, Richard S.; Morrison, Gregory Y.; Anderson, Erik; Smith, Brian V.; Domning, Edward E.; Rekawa, Senajith B.; Padmore, Howard A.
2012-09-01
We demonstrate a comprehensive and broadly applicable methodology for the optimal in situ configuration of bendable soft x-ray Kirkpatrick-Baez mirrors. The mirrors used for this application are preset at the ALS Optical Metrology Laboratory prior to beamline installation. The in situ methodology consists of a new technique for simultaneously setting the height and pitch angle of each mirror. The benders of both mirrors were then optimally tuned in order to minimize ray aberrations to a level below the diffraction-limited beam waist size of 200 nm (horizontal) × 100 nm (vertical). After applying this methodology, we measured a beam waist size of 290 nm (horizontal) × 130 nm (vertical) with 1 nm light using the Foucault knife-edge test. We also discuss the utility of using a grating-based lateral shearing interferometer with quantitative wavefront feedback for further improvement of bendable optics.
NASA Astrophysics Data System (ADS)
Merthe, Daniel J.; Yashchuk, Valeriy V.; Goldberg, Kenneth A.; Kunz, Martin; Tamura, Nobumichi; McKinney, Wayne R.; Artemiev, Nikolay A.; Celestre, Richard S.; Morrison, Gregory Y.; Anderson, Erik H.; Smith, Brian V.; Domning, Edward E.; Rekawa, Senajith B.; Padmore, Howard A.
2013-03-01
We demonstrate a comprehensive and broadly applicable methodology for the optimal in situ configuration of bendable soft x-ray Kirkpatrick-Baez mirrors. The mirrors used for this application are preset at the Advanced Light Source Optical Metrology Laboratory prior to beamline installation. The in situ methodology consists of a new technique for simultaneously setting the height and pitch angle of each mirror. The benders of both mirrors were then optimally tuned in order to minimize ray aberrations to a level below the diffraction-limited beam waist size of 200 nm (horizontal)×100 nm (vertical). After applying this methodology, we measured a beam waist size of 290 nm (horizontal)×130 nm (vertical) with 1 nm light using the Foucault knife-edge test. We also discuss the utility of using a grating-based lateral shearing interferometer with quantitative wavefront feedback for further improvement of bendable optics.
NASA Astrophysics Data System (ADS)
Mendoza, Carlos S.; Safdar, Nabile; Myers, Emmarie; Kittisarapong, Tanakorn; Rogers, Gary F.; Linguraru, Marius George
2013-02-01
Craniosynostosis (premature fusion of skull sutures) is a severe condition present in one of every 2000 newborns. Metopic craniosynostosis, accounting for 20-27% of cases, is diagnosed qualitatively in terms of skull shape abnormality, a subjective call of the surgeon. In this paper we introduce a new quantitative diagnostic feature for metopic craniosynostosis derived optimally from shape analysis of CT scans of the skull. We built a robust shape analysis pipeline that is capable of obtaining local shape differences in comparison to normal anatomy. Spatial normalization using 7-degree-of-freedom registration of the base of the skull is followed by a novel bone labeling strategy based on graph-cuts according to labeling priors. The statistical shape model built from 94 normal subjects allows matching a patient's anatomy to its most similar normal subject. Subsequently, the computation of local malformations from a normal subject allows characterization of the points of maximum malformation on each of the frontal bones adjacent to the metopic suture, and on the suture itself. Our results show that the malformations at these locations vary significantly (p<0.001) between abnormal/normal subjects and that an accurate diagnosis can be achieved using linear regression from these automatic measurements with an area under the curve for the receiver operating characteristic of 0.97.
Li, Y. Q.; Ma, F. C.; Sun, M. T.
2013-10-21
A full three-dimensional global potential energy surface is reported for the first time for the title system, which is important for photodissociation processes. It is obtained using double many-body expansion theory and an extensive set of accurate ab initio energies extrapolated to the complete basis set limit. The surface can be recommended for dynamics studies of the N(²D) + H₂ reaction, for a reliable theoretical treatment of the photodissociation dynamics, and as a building block for constructing the double many-body expansion potential energy surface of larger nitrogen/hydrogen-containing systems. In turn, a preliminary theoretical study of the reaction N(²D) + H₂(X¹Σg⁺)(ν = 0, j = 0) → NH(a¹Δ) + H(²S) has been carried out with the quasi-classical trajectory method on the new potential energy surface. Integral cross sections and thermal rate constants have been calculated, providing perhaps the most reliable estimates of the integral cross sections and rate constants known thus far for this reaction.
Time-Dependent Selection of an Optimal Set of Sources to Define a Stable Celestial Reference Frame
NASA Technical Reports Server (NTRS)
Le Bail, Karine; Gordon, David
2010-01-01
Temporal statistical position stability is required for VLBI sources to define a stable Celestial Reference Frame (CRF) and has been studied in many recent papers. This study analyzes the sources from the latest realization of the International Celestial Reference Frame (ICRF2) with the Allan variance, in addition to taking into account the apparent linear motions of the sources. Focusing on the 295 defining sources shows that they are a good compromise among different criteria, such as statistical stability and sky distribution, while retaining a sufficient number of sources, despite the fact that the most stable sources of the entire ICRF2 lie mostly in the Northern Hemisphere. Nevertheless, the selection of a stable set is not unique: studying different solutions (GSF005a and AUG24 from GSFC and OPA from the Paris Observatory) over different time periods (1989.5 to 2009.5 and 1999.5 to 2009.5) leads to selections that can differ in up to 20% of the sources. Improvements in observing, recording, and network geometry are some of the causes, with the CRF showing better stability over the last decade than over the last twenty years. But this may also be explained by the assumption of stationarity, which is not necessarily valid for some sources.
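The Allan variance used above to rank source position stability has a simple discrete form: for an averaging time τ it is half the mean squared difference between successive bin averages of the coordinate time series. A minimal sketch of that estimator (the function name and the white-noise demonstration are illustrative, not from the paper):

```python
import numpy as np

def allan_variance(x, tau, dt=1.0):
    """Non-overlapping Allan variance of a time series x for an
    averaging time tau (in the same units as the sample spacing dt)."""
    m = int(tau / dt)                      # samples per averaging bin
    n_bins = len(x) // m
    if n_bins < 2:
        raise ValueError("series too short for this tau")
    # bin means, then half the mean squared successive difference
    means = x[:n_bins * m].reshape(n_bins, m).mean(axis=1)
    return 0.5 * np.mean(np.diff(means) ** 2)

# For white position noise the Allan variance falls roughly as 1/tau,
# which is the signature of a statistically stable source.
rng = np.random.default_rng(0)
x = rng.normal(size=4096)
print(allan_variance(x, 4), allan_variance(x, 64))
```

A source whose Allan variance plateaus or rises at long τ instead would indicate drift or non-stationarity of the kind the abstract cautions about.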
Perreault-Micale, Cynthia; Davie, Jocelyn; Breton, Benjamin; Hallam, Stephanie; Greger, Valerie
2015-01-01
Carrier screening for certain diseases is recommended by major medical and Ashkenazi Jewish (AJ) societies. Most carrier screening panels test only for common, ethnic-specific variants. However, with formerly isolated ethnic groups becoming increasingly intermixed, this approach is becoming inadequate. Our objective was to develop a rigorous process to curate all variants, for relevant genes, into a database and then apply stringent clinical validity classification criteria to each in order to retain only those with clear evidence for pathogenicity. The resulting variant set, in conjunction with next-generation DNA sequencing (NGS), then affords the capability for an ethnically diverse, comprehensive, highly specific carrier-screening assay. The clinical utility of our approach was demonstrated by screening a pan-ethnic population of 22,864 individuals for Bloom syndrome carrier status using a BLM variant panel comprised of 50 pathogenic variants. In addition to carriers of the common AJ founder variant, we identified 57 carriers of other pathogenic BLM variants. All variants reported had previously been curated and their clinical validity documented, or were of a type that met our stringent, preassigned validity criteria. Thus, it was possible to confidently report an increased number of Bloom’s syndrome carriers compared to traditional, ethnicity-based screening, while not reducing the specificity of the screening due to reporting variants of unknown clinical significance. PMID:26247052
Santitissadeekorn, Naratip; Froyland, Gary; Monahan, Adam
2010-11-01
The "edge" of the Antarctic polar vortex is known to behave as a barrier to the meridional (poleward) transport of ozone during the austral winter. This chemical isolation of the polar vortex from the middle and low latitudes produces an ozone minimum in the vortex region, intensifying the ozone hole relative to that which would be produced by photochemical processes alone. Observational determination of the vortex edge remains an active field of research. In this paper, we obtain objective estimates of the structure of the polar vortex by introducing a technique based on transfer operators that aims to find regions with minimal external transport. Applying this technique to European Centre for Medium-Range Weather Forecasts (ECMWF) ERA-40 three-dimensional velocity data, we produce an improved three-dimensional estimate of the vortex location in the upper stratosphere where the vortex is most pronounced. This computational approach has wide potential application in detecting and analyzing mixing structures in a variety of atmospheric, oceanographic, and general fluid dynamical settings. PMID:21230580
NASA Astrophysics Data System (ADS)
Filatyev, A. S.; Yanova, O. V.
2013-12-01
A problem of through (end-to-end) optimization of fail-safe branched trajectories of launchers is considered, in view of aerodynamic load constraints and restrictions on the ground impact areas of separated parts (SP). Fail-safety here means the ability of a recoverable vehicle (RV) to return from any point of the ascent trajectory to designated landing points without exceeding allowable g-loads. The purpose is thus to determine the optimal launcher control subject to constraints on all trajectory branches: the main branch, corresponding to the active injection leg, and the side branches, corresponding to the SP fall trajectories and to the imaginary RV emergency trajectories, which form a continuum. The problem solution is based on the Pontryagin maximum principle (PMP).
Edmonds, Andrew; Feinstein, Lydia; Okitolonda, Vitus; Thompson, Deidre; Kawende, Bienvenu; Behets, Frieda
2016-01-01
Background The consequences of decentralizing prevention of mother-to-child HIV transmission and HIV-exposed infant services to antenatal care (ANC)/labor and delivery (L&D) sites from dedicated HIV care and treatment (C&T) centers remain unknown, particularly in low-prevalence settings. Methods In a cohort of mother–infant pairs, we compared delivery of routine services at ANC/L&D and C&T facilities in Kinshasa, Democratic Republic of Congo from 2010–2013, using methods accounting for competing risks (e.g., death). Women could opt to receive interventions at 90 decentralized ANC/L&D sites, or 2 affiliated C&T centers. Additionally, we assessed decentralization’s population-level impacts by comparing proportions of women and infants receiving interventions before (2009–2010) and after (2011–2013) decentralization. Results Among newly HIV-diagnosed women (N = 1482), the 14-week cumulative incidence of receiving the package of CD4 testing and zidovudine or antiretroviral therapy was less at ANC/L&D [66%; 95% confidence interval (CI): 63% to 69%] than at C&T (88%; 95% CI: 83% to 92%) sites (subdistribution hazard ratio, 0.62; 95% CI: 0.55 to 0.69). Delivery of cotrimoxazole and DNA polymerase chain reaction testing to HIV-exposed infants (N = 1182) was inferior at ANC/L&D sites (subdistribution hazard ratio, 0.84; 95% CI: 0.76 to 0.92); the 10-month cumulative incidence of the package at ANC/L&D sites was 89% (95% CI: 82% to 93%) versus 97% (95% CI: 93% to 99%) at C&T centers. Receipt of the pregnancy package improved post decentralization from 20% (of 1518 women) to 64% (of 1405), and receipt of the infant package from 16% to 31%. Conclusions Services were delivered less efficiently at ANC/L&D sites than C&T centers. Although access improved with decentralization, its potential cannot be realized without sufficient and sustained support. PMID:26262776
2011-01-01
Introduction The analysis of flow and pressure waveforms generated by ventilators can be useful in the optimization of patient-ventilator interactions, notably in chronic obstructive pulmonary disease (COPD) patients. To date, however, a real clinical benefit of this approach has not been proven. Methods The aim of the present randomized, multi-centric, controlled study was to compare optimized ventilation, driven by the analysis of flow and pressure waveforms, to standard ventilation (same physician, same initial ventilator setting, same time spent at the bedside while the ventilator screen was obscured with numerical data always available). The primary aim was the rate of pH normalization at two hours, while secondary aims were changes in PaCO2, respiratory rate and the patient's tolerance to ventilation (all parameters evaluated at baseline, 30, 120, 360 minutes and 24 hours after the beginning of ventilation). Seventy patients (35 for each group) with acute exacerbation of COPD were enrolled. Results Optimized ventilation led to a more rapid normalization of pH at two hours (51 vs. 26% of patients), to a significant improvement of the patient's tolerance to ventilation at two hours, and to a higher decrease of PaCO2 at two and six hours. Optimized ventilation induced physicians to use higher levels of external positive end-expiratory pressure, more sensitive inspiratory triggers and a faster speed of pressurization. Conclusions The analysis of the waveforms generated by ventilators has a significant positive effect on physiological and patient-centered outcomes during acute exacerbation of COPD. The acquisition of specific skills in this field should be encouraged. Trial registration ClinicalTrials.gov NCT01291303. PMID:22115190
Karki, Kishor; Hugo, Geoffrey D; Ford, John C; Olsen, Kathryn M; Saraiya, Siddharth; Groves, Robert; Weiss, Elisabeth
2015-10-21
The purpose of this study was to determine optimal sets of b-values in diffusion-weighted MRI (DW-MRI) for obtaining monoexponential apparent diffusion coefficient (ADC) close to perfusion-insensitive intravoxel incoherent motion (IVIM) model ADC (ADCIVIM) in non-small cell lung cancer. Ten subjects had 40 DW-MRI scans before and during radiotherapy in a 1.5 T MRI scanner. Respiratory triggering was applied to the echo-planar DW-MRI with TR ≈ 4500 ms, TE = 74 ms, eight b-values of 0-1000 μs μm(-2), pixel size = 1.98 × 1.98 mm(2), slice thickness = 6 mm, interslice gap = 1.2 mm, 7 axial slices and total acquisition time ≈ 6 min. One or more DW-MRI scans together covered the whole tumour volume. Monoexponential model ADC values using various b-value sets were compared to reference-standard ADCIVIM values using all eight b-values. Intra-scan coefficient of variation (CV) of active tumour volumes was computed to compare the relative noise in ADC maps. ADC values for one pre-treatment DW-MRI scan of each of the 10 subjects were computed using b-value pairs from DW-MRI images synthesized for b-values of 0-2000 μs μm(-2) from the estimated IVIM parametric maps and corrupted by various Rician noise levels. The square root of mean of squared error percentage (RMSE) of the ADC value relative to the corresponding ADCIVIM for the tumour volume of the scan was computed. Monoexponential ADC values for the b-value sets of 250 and 1000; 250, 500 and 1000; 250, 650 and 1000; 250, 800 and 1000; and 250-1000 μs μm(-2) were not significantly different from ADCIVIM values (p > 0.05, paired t-test). Mean error in ADC values for these sets relative to ADCIVIM was within 3.5%. Intra-scan CVs for these sets were comparable to that for ADCIVIM. The monoexponential ADC values for the other sets (0-1000; 50-1000; 100-1000; 500-1000; and 250 and 800 μs μm(-2)) were significantly different from the ADCIVIM values. From Rician noise
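The monoexponential model compared throughout this study is S(b) = S0·exp(−b·ADC), so an ADC estimate for any b-value set reduces to a linear fit of log-signal against b. A small sketch of that fit (the function name and the noise-free voxel values are illustrative, not from the study):

```python
import numpy as np

def monoexponential_adc(b_values, signals):
    """Least-squares monoexponential fit S(b) = S0 * exp(-b * ADC).
    Linearised as ln S = ln S0 - b * ADC and solved with a degree-1
    polynomial fit; returns ADC in 1/(units of b)."""
    b = np.asarray(b_values, dtype=float)
    s = np.asarray(signals, dtype=float)
    slope, _intercept = np.polyfit(b, np.log(s), 1)
    return -slope

# Synthetic noise-free voxel: true ADC = 1.2e-3 (b in s/mm^2).
b_set = [250, 500, 1000]
signals = 100.0 * np.exp(-np.asarray(b_set) * 1.2e-3)
print(monoexponential_adc(b_set, signals))  # recovers ≈ 1.2e-3
```

With noisy or perfusion-contaminated data, the choice of which b-values enter this fit is exactly what shifts the monoexponential ADC away from the IVIM reference, as the abstract quantifies.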
Karki, Kishor; Hugo, Geoffrey D.; Ford, John C.; Olsen, Kathryn M.; Saraiya, Siddharth; Groves, Robert; Weiss, Elisabeth
2015-01-01
The purpose of this study was to determine optimal sets of b-values in diffusion-weighted MRI (DW-MRI) for obtaining monoexponential apparent diffusion coefficient (ADC) close to perfusion-insensitive intravoxel incoherent motion (IVIM) model ADC (ADCIVIM) in non-small cell lung cancer. Ten subjects had 40 DW-MRI scans before and during radiotherapy in a 1.5 T MRI scanner. Respiratory triggering was applied to the echo-planar DW-MRI with TR ≈ 4500 ms, TE = 74 ms, eight b-values of 0–1000 µs/µm², pixel size = 1.98 × 1.98 mm², slice thickness = 6 mm, interslice gap = 1.2 mm, 7 axial slices and total acquisition time ≈ 6 min. One or more DW-MRI scans together covered the whole tumour volume. Monoexponential model ADC values using various b-value sets were compared to reference-standard ADCIVIM values using all eight b-values. Intra-scan coefficient of variation (CV) of active tumour volumes was computed to compare the relative noise in ADC maps. ADC values for one pre-treatment DW-MRI scan of each of the 10 subjects were computed using b-value pairs from DW-MRI images synthesized for b-values of 0–2000 µs/µm² from the estimated IVIM parametric maps and corrupted by various Rician noise levels. The square root of mean of squared error percentage (RMSE) of the ADC value relative to the corresponding ADCIVIM for the tumour volume of the scan was computed. Monoexponential ADC values for the b-value sets of 250 and 1000; 250, 500 and 1000; 250, 650 and 1000; 250, 800 and 1000; and 250–1000 µs/µm² were not significantly different from ADCIVIM values (p > 0.05, paired t-test). Mean error in ADC values for these sets relative to ADCIVIM was within 3.5%. Intra-scan CVs for these sets were comparable to that for ADCIVIM. The monoexponential ADC values for the other sets (0–1000; 50–1000; 100–1000; 500–1000; and 250 and 800 µs/µm²) were significantly different from the ADCIVIM values. From Rician noise simulation using b-value pairs, there was a wide range of
Brndiar, Ján; Štich, Ivan
2012-07-10
Interaction energies of small model van der Waals fragments of group VA (P, As, Sb) and group VIA (S, Se, Te) elements are calculated using the complete basis set CCSD(T) method and compared to density functional results with approximate treatment of the dispersion interaction using vdW-DF and DFT-D types of theory. These simple systems show a surprising diversity of electronic properties, ranging from more "metallic" to more "insulator"-like, a property which needs to be captured by the approximate methods. While none of the standard approximate DFT theories provides an entirely satisfactory description of all the systems, we identify the most reliable approaches of each type. In addition, we show that the results can be further tuned to chemical accuracy. In vdW-DF theory, guided by physical insights and the availability of quasi-exact CCSD(T) results, we supply the missing parts of correlation by matching an appropriate hybrid/semilocal exchange-correlation functional to describe short-/medium-range correlations accurately. In the DFT-D type of theory, we reparametrize the empirical dispersion term. Since for such an accurate treatment benchmark calculations are needed, which are typically feasible only for a finite cluster, we argue that the cluster-based model of the exchange-correlation hole is transferable also to extended systems with vdW dispersion interactions.
Roper, Ian P E; Besley, Nicholas A
2016-03-21
The simulation of X-ray emission spectra of transition metal complexes with time-dependent density functional theory (TDDFT) is investigated. X-ray emission spectra can be computed within TDDFT in conjunction with the Tamm-Dancoff approximation by using a reference determinant with a vacancy in the relevant core orbital, and these calculations can be performed using the frozen orbital approximation or with the relaxation of the orbitals of the intermediate core-ionised state included. Both standard exchange-correlation functionals and functionals specifically designed for X-ray emission spectroscopy are studied, and it is shown that the computed spectral band profiles are sensitive to the exchange-correlation functional used. The computed intensities of the spectral bands can be rationalised by considering the metal p orbital character of the valence molecular orbitals. To compute X-ray emission spectra with the correct energy scale allowing a direct comparison with experiment requires the relaxation of the core-ionised state to be included and the use of specifically designed functionals with increased amounts of Hartree-Fock exchange in conjunction with high quality basis sets. A range-corrected functional with increased Hartree-Fock exchange in the short range provides transition energies close to experiment and spectral band profiles that have a similar accuracy to those from standard functionals.
Perczel, András; Farkas, Ödön; Jákli, Imre; Topol, Igor A; Csizmadia, Imre G
2003-07-15
At the dawn of the new millennium, new concepts are required for a more profound understanding of protein structures. Together with NMR- and X-ray-based 3D-structure determinations, in silico methods are now widely accepted. Homology-based modeling studies, molecular dynamics methods, and quantum mechanical approaches are more commonly used. Despite the steady and exponential increase in computational power, high-level ab initio methods will not be in common use for studying the structure and dynamics of large peptides and proteins in the near future. We present here a novel approach in which low- and medium-level ab initio energy results are scaled, thus extrapolating to a higher level of information. This scaling is of special significance because we have previously observed that molecular properties such as energies, chemical shielding data, etc., determined at a higher theoretical level correlate better with experimental data than those originating from lower-level theoretical treatments. The Ramachandran surface of an alanine dipeptide, now determined at six different levels of theory [RHF and B3LYP with the 3-21G, 6-31+G(d), and 6-311++G(d,p) basis sets], serves as a suitable test. Minima, first-order critical points, and partially optimized structures determined at different levels of theory (SCF, DFT) were completed with high-level energy calculations such as MP2, MP4D, and CCSD(T). For the first time, three different CCSD(T) sets of energies were determined for all stable B3LYP/6-311++G(d,p) minima of an alanine dipeptide. From the simplest ab initio data (e.g., RHF/3-21G) to the most complex results [CCSD(T)/6-311+G(d,p)//B3LYP/6-311++G(d,p)], all data sets were compared, analyzed in a comprehensive manner, and evaluated by means of statistics.
NASA Astrophysics Data System (ADS)
Karki, Kishor; Hugo, Geoffrey D.; Ford, John C.; Olsen, Kathryn M.; Saraiya, Siddharth; Groves, Robert; Weiss, Elisabeth
2015-10-01
The purpose of this study was to determine optimal sets of b-values in diffusion-weighted MRI (DW-MRI) for obtaining monoexponential apparent diffusion coefficient (ADC) close to perfusion-insensitive intravoxel incoherent motion (IVIM) model ADC (ADCIVIM) in non-small cell lung cancer. Ten subjects had 40 DW-MRI scans before and during radiotherapy in a 1.5 T MRI scanner. Respiratory triggering was applied to the echo-planar DW-MRI with TR ≈ 4500 ms, TE = 74 ms, eight b-values of 0-1000 μs μm⁻², pixel size = 1.98 × 1.98 mm², slice thickness = 6 mm, interslice gap = 1.2 mm, 7 axial slices and total acquisition time ≈ 6 min. One or more DW-MRI scans together covered the whole tumour volume. Monoexponential model ADC values using various b-value sets were compared to reference-standard ADCIVIM values using all eight b-values. Intra-scan coefficient of variation (CV) of active tumour volumes was computed to compare the relative noise in ADC maps. ADC values for one pre-treatment DW-MRI scan of each of the 10 subjects were computed using b-value pairs from DW-MRI images synthesized for b-values of 0-2000 μs μm⁻² from the estimated IVIM parametric maps and corrupted by various Rician noise levels. The square root of mean of squared error percentage (RMSE) of the ADC value relative to the corresponding ADCIVIM for the tumour volume of the scan was computed. Monoexponential ADC values for the b-value sets of 250 and 1000; 250, 500 and 1000; 250, 650 and 1000; 250, 800 and 1000; and 250-1000 μs μm⁻² were not significantly different from ADCIVIM values (p > 0.05, paired t-test). Mean error in ADC values for these sets relative to ADCIVIM was within 3.5%. Intra-scan CVs for these sets were comparable to that for ADCIVIM. The monoexponential ADC values for the other sets (0-1000; 50-1000; 100-1000; 500-1000; and 250 and 800 μs μm⁻²) were significantly different from the ADCIVIM values. From Rician noise simulation
NASA Astrophysics Data System (ADS)
Garner, Gregory; Reed, Patrick; Keller, Klaus
2015-04-01
Integrated assessment models (IAMs) are often used to inform the design of climate risk management strategies. Previous IAM studies have broken important new ground on analyzing the effects of parametric uncertainties, but they are often silent on the implications of uncertainties regarding the problem formulation. Here we use the Dynamic Integrated model of Climate and the Economy (DICE) to analyze the effects of uncertainty surrounding the definition of the objective(s). The standard DICE model adopts a single objective to maximize a weighted sum of utilities of per-capita consumption. Decision makers, however, are often concerned with a broader range of values and preferences that may be poorly captured by this a priori definition of utility. We reformulate the problem by introducing three additional objectives that represent values such as (i) reliably limiting global average warming to two degrees Celsius and minimizing (ii) the costs of abatement and (iii) the climate change damages. We use advanced multi-objective optimization methods to derive a set of Pareto-optimal solutions over which decision makers can trade-off and assess performance criteria a posteriori. We illustrate the potential for myopia in the traditional problem formulation and discuss the capability of this multiobjective formulation to provide decision support.
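The a posteriori decision support described here hinges on one computational step: extracting the non-dominated (Pareto-optimal) subset from a set of objective vectors so that decision makers can trade off among them afterwards. A minimal sketch of that filtering step, on toy cost/damage values rather than DICE output:

```python
import numpy as np

def pareto_front(points):
    """Return the non-dominated subset of a set of objective vectors,
    assuming every objective is to be minimised. A point is dominated
    if some other point is no worse in all objectives and strictly
    better in at least one."""
    pts = np.asarray(points, dtype=float)
    keep = []
    for i, p in enumerate(pts):
        dominated = any(
            np.all(q <= p) and np.any(q < p)
            for j, q in enumerate(pts) if j != i
        )
        if not dominated:
            keep.append(i)
    return pts[keep]

# Toy trade-off between abatement cost and climate damages:
# [3, 4] is dominated by [2, 3] and drops out of the front.
pts = [[1.0, 5.0], [2.0, 3.0], [3.0, 4.0], [4.0, 1.0]]
print(pareto_front(pts))
```

Production multi-objective solvers (e.g. evolutionary algorithms) generate and prune candidates far more efficiently, but the dominance test they apply is exactly this one.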
Merhof, Dorit; Markiewicz, Pawel J; Platsch, Günther; Declerck, Jerome; Weih, Markus; Kornhuber, Johannes; Kuwert, Torsten; Matthews, Julian C; Herholz, Karl
2011-01-01
Multivariate image analysis has shown potential for classification between Alzheimer's disease (AD) patients and healthy controls with high diagnostic performance. As image analysis of positron emission tomography (PET) and single photon emission computed tomography (SPECT) data critically depends on appropriate data preprocessing, the focus of this work is to investigate the impact of data preprocessing on the outcome of the analysis, and to identify an optimal data preprocessing method. In this work, technetium-99m ethyl cysteinate dimer ((99m)Tc-ECD) SPECT data sets of 28 AD patients and 28 asymptomatic controls were used for the analysis. For a series of different data preprocessing methods, which includes methods for spatial normalization, smoothing, and intensity normalization, multivariate image analysis based on principal component analysis (PCA) and Fisher discriminant analysis (FDA) was applied. Bootstrap resampling was used to investigate the robustness of the analysis and the classification accuracy, depending on the data preprocessing method. Depending on the combination of preprocessing methods, significant differences regarding the classification accuracy were observed. For (99m)Tc-ECD SPECT data, the optimal data preprocessing method in terms of robustness and classification accuracy is based on affine registration, smoothing with a Gaussian of 12 mm full width at half maximum, and intensity normalization based on the 25% brightest voxels within the whole-brain region.
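The PCA-plus-Fisher-discriminant pipeline used above condenses to two linear-algebra steps: project the (preprocessed) voxel data onto the leading principal components, then take the direction w = Sw⁻¹(m1 − m0) that maximises between-class relative to within-class scatter. A self-contained sketch on synthetic two-class data (all names and numbers are illustrative, not from the study):

```python
import numpy as np

def pca_reduce(X, k):
    """Project centred data onto the k leading principal components."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

def fisher_direction(X0, X1):
    """Two-class Fisher discriminant direction w = Sw^-1 (m1 - m0),
    where Sw is the pooled within-class scatter matrix."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = np.cov(X0.T) * (len(X0) - 1) + np.cov(X1.T) * (len(X1) - 1)
    return np.linalg.solve(Sw, m1 - m0)

# Synthetic "controls" (A) and "patients" (B) in 5 features.
rng = np.random.default_rng(1)
A = rng.normal(0.0, 1.0, (40, 5))
B = rng.normal(1.5, 1.0, (40, 5))
Z = pca_reduce(np.vstack([A, B]), 2)   # PCA step
w = fisher_direction(Z[:40], Z[40:])   # FDA step
scores = Z @ w                         # 1-D discriminant scores
print(scores[:40].mean(), scores[40:].mean())
```

Thresholding the 1-D scores gives the classifier whose accuracy the study then assesses under bootstrap resampling for each preprocessing combination.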
SETS. Set Equation Transformation System
Worrell, R.B.
1992-01-13
SETS is used for symbolic manipulation of Boolean equations, particularly the reduction of equations by the application of Boolean identities. It is a flexible and efficient tool for performing probabilistic risk analysis (PRA), vital area analysis, and common cause analysis. The equation manipulation capabilities of SETS can also be used to analyze noncoherent fault trees and determine prime implicants of Boolean functions, to verify circuit design implementation, to determine minimum cost fire protection requirements for nuclear reactor plants, to obtain solutions to combinatorial optimization problems with Boolean constraints, and to determine the susceptibility of a facility to unauthorized access through nullification of sensors in its protection system.
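The core reduction SETS performs, applying Boolean identities such as absorption (X + X·Y = X) until only the minimal cut sets of a fault-tree equation remain, can be sketched in a few lines. This is an illustrative re-implementation of the absorption identity alone, not the SETS code:

```python
def absorb(terms):
    """Apply the Boolean absorption identity X + X*Y = X to a
    sum-of-products expression: each term is a set of literals, and
    any term that is a superset of another term is dropped, leaving
    the minimal cut sets."""
    terms = [frozenset(t) for t in terms]
    kept = [
        t for t in terms
        if not any(u < t for u in terms)  # a strict subset absorbs t
    ]
    # de-duplicate while preserving order of first appearance
    seen, out = set(), []
    for t in kept:
        if t not in seen:
            seen.add(t)
            out.append(sorted(t))
    return out

# A + A*B + B*C  ->  A + B*C
print(absorb([{"A"}, {"A", "B"}, {"B", "C"}]))  # [['A'], ['B', 'C']]
```

Real PRA engines combine this with idempotence, complementation, and prime-implicant algorithms for the noncoherent case, but absorption is the workhorse identity for coherent fault trees.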
NASA Astrophysics Data System (ADS)
Fiorucci, I.; Muscari, G.; de Zafra, R. L.
2011-07-01
The Ground-Based Millimeter-wave Spectrometer (GBMS) was designed and built at the State University of New York at Stony Brook in the early 1990s and since then has carried out many measurement campaigns of stratospheric O3, HNO3, CO and N2O at polar and mid-latitudes. Its HNO3 data set shed light on HNO3 annual cycles over the Antarctic continent and contributed to the validation of both generations of the satellite-based JPL Microwave Limb Sounder (MLS). Following the increasing need for long-term data sets of stratospheric constituents, we resolved to establish a long-term GBMS observation site at the Arctic station of Thule (76.5° N, 68.8° W), Greenland, beginning in January 2009, in order to track the long- and short-term interactions between the changing climate and the seasonal processes tied to the ozone depletion phenomenon. Furthermore, we updated the retrieval algorithm, adapting the Optimal Estimation (OE) method to GBMS spectral data in order to conform to the standard of the Network for the Detection of Atmospheric Composition Change (NDACC) microwave group, and to provide our retrievals with a set of averaging kernels that allow more straightforward comparisons with other data sets. The new OE algorithm was applied to GBMS HNO3 data sets from 1993 South Pole observations to date, in order to produce HNO3 version 2 (v2) profiles. A sample of results obtained at Antarctic latitudes in fall and winter and at mid-latitudes is shown here. In most conditions, v2 inversions show a sensitivity (i.e., sum of column elements of the averaging kernel matrix) of 100 ± 20 % from 20 to 45 km altitude, with somewhat worse (better) sensitivity in the Antarctic winter lower (upper) stratosphere. The 1σ uncertainty on HNO3 v2 mixing ratio vertical profiles depends on altitude and is estimated at ~15 % or 0.3 ppbv, whichever is larger. Comparisons of v2 with former (v1) GBMS HNO3 vertical profiles, obtained employing the constrained matrix inversion method, show that
Karki, K; Hugo, G; Ford, J; Saraiya, S; Weiss, E; Olsen, K; Groves, R
2014-06-15
Purpose: Diffusion-weighted MRI (DW-MRI) is increasingly being investigated for radiotherapy planning and response assessment. Selection of a limited number of b-values in DW-MRI is important to keep geometrical variations low and imaging time short. We investigated various b-value sets to determine an optimal set for obtaining monoexponential apparent diffusion coefficient (ADC) close to perfusion-insensitive intravoxel incoherent motion (IVIM) model ADC (ADC_IVIM) in non-small cell lung cancer. Methods: Seven patients had 27 DW-MRI scans before and during radiotherapy in a 1.5 T scanner. Respiratory triggering was applied to the echo-planar DW-MRI with TR ≈ 4500 ms, TE = 74 ms, pixel size = 1.98 × 1.98 mm², slice thickness = 4-6 mm and 7 axial slices. Diffusion gradients were applied to all three axes producing trace-weighted images with eight b-values of 0-1000 μs/μm². Monoexponential model ADC values using various b-value sets were compared to ADC_IVIM using all b-values. To compare the relative noise in ADC maps, intra-scan coefficient of variation (CV) of active tumor volumes was computed. Results: ADC_IVIM, perfusion coefficient and perfusion fraction for tumor volumes were in the range of 880-1622 μm²/s, 8119-33834 μm²/s and 0.104-0.349, respectively. ADC values using sets of 250, 800 and 1000; 250, 650 and 1000; and 250-1000 μs/μm² only were not significantly different from ADC_IVIM (p > 0.05, paired t-test). Errors in ADC values for 0-1000; 50-1000; 100-1000; 250-1000; 500-1000; and three b-value sets (250, 500 and 1000; 250, 650 and 1000; and 250, 800 and 1000 μs/μm²) were 15.0, 9.4, 5.6, 1.4, 11.7, 3.7, 2.0 and 0.2% relative to the reference-standard ADC_IVIM, respectively. Mean intra-scan CV was 20.2, 20.9, 21.9, 24.9, 32.6, 25.8, 25.4 and 24.8%, respectively, whereas that for ADC_IVIM was 23.3%. Conclusion: ADC values of two 3 b-value sets
Odette, G. Robert; Cunningham, Nicholas J.; Wu, Yuan; Etienne, Auriane; Stergar, Erich; Yamamoto, Takuya
2012-02-21
The broad objective of this NEUP was to further develop a class of 12-15Cr ferritic alloys that are dispersion strengthened and made radiation tolerant by an ultrahigh density of Y-Ti-O nanofeatures (NFs) in the size range of less than 5 nm. We call these potentially transformable materials nanostructured ferritic alloys (NFAs). NFAs are typically processed by ball milling pre-alloyed rapidly solidified powders and yttria (Y2O3) powders. Proper milling effectively dissolves the Ti, Y and O solutes that precipitate as NFs during hot consolidation. The tasks in the present study included examining alternative processing paths, characterizing and optimizing the NFs and investigating solid state joining. Alternative processing paths involved rapid solidification by gas atomization of Fe, 14% Cr, 3% W, and 0.4% Ti powders that are also pre-alloyed with 0.2% Y (14YWT), where the compositions are in wt.%. The focus is on exploring the possibility of minimizing, or even eliminating, the milling time, as well as producing alloys with more homogeneous distributions of NFs and a more uniform, fine grain size. Three atomization environments were explored: Ar, Ar plus O (Ar/O) and He. The characterization of powders and alloys occurred through each processing step: powder production by gas atomization; powder milling; and powder annealing or hot consolidation by hot isostatic pressing (HIPing) or hot extrusion. The characterization studies of the materials described here include various combinations of: a) bulk chemistry; b) electron probe microanalysis (EPMA); c) atom probe tomography (APT); d) small angle neutron scattering (SANS); e) various types of scanning and transmission electron microscopy (SEM and TEM); and f) microhardness testing. The bulk chemistry measurements show that preliminary batches of gas-atomized powders could be produced within specified composition ranges. However, EPMA and TEM showed that the Y is heterogeneously distributed and phase separated, but
Miliordos, Evangelos; Xantheas, Sotiris S
2015-06-21
We report MP2 and Coupled Cluster Singles, Doubles, and perturbative Triples [CCSD(T)] binding energies with basis sets up to pentuple zeta quality for the (H2O)m=2-6,8 water clusters. Our best CCSD(T)/Complete Basis Set (CBS) estimates are -4.99 ± 0.04 kcal/mol (dimer), -15.8 ± 0.1 kcal/mol (trimer), -27.4 ± 0.1 kcal/mol (tetramer), -35.9 ± 0.3 kcal/mol (pentamer), -46.2 ± 0.3 kcal/mol (prism hexamer), -45.9 ± 0.3 kcal/mol (cage hexamer), -45.4 ± 0.3 kcal/mol (book hexamer), -44.3 ± 0.3 kcal/mol (ring hexamer), -73.0 ± 0.5 kcal/mol (D2d octamer), and -72.9 ± 0.5 kcal/mol (S4 octamer). We have found that the percentage of both the uncorrected (De) and basis set superposition error-corrected (De (CP)) binding energies recovered with respect to the CBS limit falls into a narrow range on either side of the CBS limit for each basis set for all clusters. In addition, this range decreases upon increasing the basis set. Relatively accurate estimates (within <0.5%) of the CBS limits can be obtained when using the "23, 13" (for the AVDZ set) or the "12, 12" (for the AVTZ, AVQZ, and AV5Z sets) mixing ratio between De and De (CP). These mixing ratios are determined via a least-mean-squares approach from a dataset that encompasses clusters of various sizes. Based on those findings, we propose an accurate and efficient computational protocol that can presently be used to estimate accurate binding energies of water clusters containing up to 30 molecules (for CCSD(T)) and up to 100 molecules (for MP2).
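The mixing-ratio estimate described above amounts to a weighted average of the uncorrected and counterpoise-corrected binding energies. A minimal sketch, assuming "12, 12" denotes weights of 1/2 and 1/2 and "23, 13" weights of 2/3 and 1/3 (an interpretation of the notation, not spelled out here); the numerical values are hypothetical, not taken from the paper:

```python
def cbs_estimate(de, de_cp, w_de=0.5, w_cp=0.5):
    """CBS-limit estimate as a weighted mix of the uncorrected (De)
    and counterpoise-corrected (De(CP)) binding energies."""
    return w_de * de + w_cp * de_cp

# Hypothetical AVTZ-quality dimer energies in kcal/mol (not from the paper):
print(cbs_estimate(-5.2, -4.8))                # "12, 12" mix -> -5.0
print(cbs_estimate(-5.2, -4.8, 2 / 3, 1 / 3))  # "23, 13" mix
```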
Nori-Shargh, Davood; Mousavi, Seiedeh Negar; Kayi, Hakan
2014-05-01
Complete basis set CBS-4, hybrid-density functional theory (hybrid-DFT: B3LYP/6-311+G**) based methods and natural bond orbital (NBO) interpretations have been used to examine the contributions of the hyperconjugative, electrostatic, and steric effects on the conformational behaviors of trans-2,3-dihalo-1,4-diselenane [halo = F (1), Cl (2), Br (3)] and trans-2,5-dihalo-1,4-diselenane [halo = F (4), Cl (5), Br (6)]. Both levels of theory showed that the axial conformation stability, compared to its corresponding equatorial conformation, decreases from compounds 1 → 3 and 4 → 6. Based on the results obtained from the NBO analysis, there are significant anomeric effects for compounds 1-6. The anomeric effect associated with the electron delocalization is in favor of the axial conformation and increases from compounds 1 → 3 and 4 → 6. On the other hand, dipole moment differences between the axial and equatorial conformations [Δ(μ(eq)-μ(ax))] decrease from compounds 1 → 3. Although the Δ(μ(eq)-μ(ax)) parameter decreases from compound 1 to compound 3, the dipole moment values of the axial conformations are smaller than those of their corresponding equatorial conformations. Therefore, the anomeric effect associated with the electron delocalizations (for halogen-C-Se segments) and the electrostatic model associated with the dipole-dipole interactions fail to account for the increase in equatorial conformation stability on going from compound 1 to compound 3. Since there is no dipole moment for the axial and equatorial conformations of compounds 4-6, the conformational preferences in compounds 1-6 are in general dictated by the steric hindrance factor associated with the 1,3-syn-axial repulsions. Importantly, the CBS-4 results show that the entropy difference (∆S) between the equatorial and axial conformations increases from compounds 1 → 3 and 4 → 6. This fact can be explained by the anomeric effect associated
BASIS9.4. The Basis Code Development System
Allsman, R.; Barrett, K.; Busby, L.; Chiu, Y.; Crotinger, J.; Dubois, B.; Dubois, P.F.; Langdon, B.; Motteler, Z.C.; Takemoto, J.; Taylor, S.; Willmann, P.; Wilson, S.
1993-08-01
BASIS9.4 is a system for developing interactive computer programs in Fortran, with some support for C and C++ as well. Using BASIS9.4 you can create a program that has a sophisticated programming language as its user interface so that the user can set, calculate with, and plot all the major variables in the program. The program author writes only the scientific part of the program; BASIS9.4 supplies an environment in which to exercise that scientific programming which includes an interactive language, an interpreter, graphics, terminal logs, error recovery, macros, saving and retrieving variables, formatted I/O, and online documentation.
NASA Astrophysics Data System (ADS)
Luangpaiboon, P.
2009-10-01
Many enterprises face extreme pressures on costs, quality, sales and service. Moreover, technology has become intertwined with these demands, so most manufacturers and assembly lines adopt it and inevitably end up with more complicated processes. At this stage, product and service improvement must be differentiated from competitors in a sustainable way, so simulated process optimisation is an alternative way to solve large and complex problems. Metaheuristics are sequential processes that perform exploration and exploitation in the solution space, aiming to find near-optimal solutions efficiently, with natural intelligence as a source of inspiration. One of the most well-known metaheuristics is Ant Colony Optimisation (ACO). This paper is conducted to give an aid in the complications of using ACO in terms of its parameters: the numbers of iterations, ants and moves. Proper levels of these parameters are analysed on eight noisy continuous non-linear response surfaces. Considering the solution space in a specified region, some surfaces contain a global optimum and multiple local optima, and some have a curved ridge. ACO parameters are determined through hybridisations of the Modified Simplex and Simulated Annealing methods on the path of Steepest Ascent (SAM). SAM was introduced to recommend preferable levels of the ACO parameters via statistically significant regression analysis and Taguchi's signal-to-noise ratio. Other performance measures include minimax and mean squared error. A series of computational experiments using each algorithm was conducted. Experimental results were analysed in terms of mean, design points and best-so-far solutions. It was found that results obtained from a hybridisation with the stochastic procedures of the Simulated Annealing method were better than those using the Modified Simplex algorithm. However, the average execution time of experimental runs and number of design points using hybridisations were
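An archive-based continuous ACO, the kind of algorithm whose ants/moves/iterations parameters the study tunes, can be sketched as follows. This is an ACO-R-flavoured stand-in under assumed mechanics, not the paper's implementation, and the test surface is hypothetical:

```python
import numpy as np

def aco_continuous(f, bounds, ants=20, moves=30, iterations=100, seed=0):
    """Archive-based ACO sketch for continuous surfaces (minimization):
    each move, an ant samples near a randomly chosen archive solution
    with a step size that shrinks over iterations; better samples
    replace the current worst archive member."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    archive = rng.uniform(lo, hi, size=(ants, len(lo)))
    scores = np.array([f(x) for x in archive])
    for it in range(iterations):
        sigma = (hi - lo) * 0.5 * (1.0 - it / iterations) + 1e-9
        for _ in range(moves):
            guide = archive[rng.integers(ants)]
            trial = np.clip(guide + rng.normal(0.0, sigma), lo, hi)
            s = f(trial)
            worst = int(np.argmax(scores))
            if s < scores[worst]:
                archive[worst], scores[worst] = trial, s
    best = int(np.argmin(scores))
    return archive[best], float(scores[best])

# Hypothetical smooth test surface with its optimum at (1, -2).
x_best, f_best = aco_continuous(lambda v: (v[0] - 1) ** 2 + (v[1] + 2) ** 2,
                                bounds=[(-5.0, 5.0), (-5.0, 5.0)])
print(f_best < 0.5)
```

Increasing the numbers of ants, moves, or iterations trades computation time for solution quality, which is exactly the tuning problem the abstract addresses.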
NASA Astrophysics Data System (ADS)
Woolf, Lawrence
2016-03-01
A wide variety of reports have been issued recently concerning the skills, knowledge, and attitudes needed by employees to be successful. This talk will review findings from reports from the major science and engineering disciplines, from surveys of employers, and from interviews with recent undergraduate physics graduates. Also to be discussed is the correlation between these findings and the detailed J-TUPP recommendations for the skills and knowledge needed by the next generation of undergraduate physics degree holders to be prepared for a diverse set of careers.
The role of orbital products in the optimized effective potential method
NASA Astrophysics Data System (ADS)
Kollmar, Christian; Filatov, Michael
2008-02-01
The orbital products of occupied and virtual orbitals are employed as an expansion basis for the charge density generating the local potential in the optimized effective potential method, thus avoiding the use of auxiliary basis sets. The high computational cost arising from the quadratic increase of the dimension of this product basis with system size can be greatly reduced by elimination of the linearly dependent products according to a procedure suggested by Beebe and Linderberg [Int. J. Quantum Chem. 12, 683 (1977)]. Numerical results from this approach show very good agreement with those obtained from balancing the auxiliary basis for the expansion of the local potential with the orbital basis set.
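The Beebe-Linderberg elimination of linearly dependent products amounts to a pivoted (incomplete) Cholesky decomposition of the product-basis overlap matrix, keeping only pivots above a threshold. A minimal NumPy sketch, assuming a precomputed overlap matrix S:

```python
import numpy as np

def select_products(S, tol=1e-6):
    """Pivoted incomplete Cholesky on the product overlap matrix S:
    keep pivots whose residual diagonal exceeds tol, discarding
    numerically linearly dependent orbital products."""
    n = S.shape[0]
    d = S.diagonal().astype(float).copy()   # residual diagonal
    L = np.zeros((n, n))
    kept = []
    for k in range(n):
        p = int(np.argmax(d))
        if d[p] < tol:
            break
        L[:, k] = (S[:, p] - L[:, :k] @ L[p, :k]) / np.sqrt(d[p])
        d -= L[:, k] ** 2
        kept.append(p)
    return kept

# Toy "product basis" where the third vector duplicates the first,
# so only two independent products survive.
v = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 0.0]])
S = v @ v.T
print(select_products(S))  # -> [0, 1]
```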
A Bayesian A-optimal and model robust design criterion.
Zhou, Xiaojie; Joseph, Lawrence; Wolfson, David B; Bélisle, Patrick
2003-12-01
Suppose that the true model underlying a set of data is one of a finite set of candidate models, and that parameter estimation for this model is of primary interest. With this goal, optimal design must depend on a loss function across all possible models. A common method that accounts for model uncertainty is to average the loss over all models; this is the basis of what is known as Läuter's criterion. We generalize Läuter's criterion and show that it can be placed in a Bayesian decision theoretic framework, by extending the definition of Bayesian A-optimality. We use this generalized A-optimality to find optimal design points in an environmental safety setting. In estimating the smallest detectable trace limit in a water contamination problem, we obtain optimal designs that are quite different from those suggested by standard A-optimality.
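A model-averaged A-optimality criterion of the kind generalized here can be sketched as the prior-weighted sum of trace((XᵀX)⁻¹) over candidate models. The polynomial models, candidate designs, and equal prior weights below are illustrative assumptions, not the paper's water-contamination setting:

```python
import numpy as np

def model_averaged_a_criterion(x, models, weights):
    """Prior-weighted average of the A-optimality loss tr((X'X)^-1)
    across candidate polynomial models; smaller is better.
    `models` lists, per model, the powers of x entering the regression."""
    total = 0.0
    for powers, w in zip(models, weights):
        X = np.column_stack([x ** p for p in powers])
        total += w * np.trace(np.linalg.inv(X.T @ X))
    return float(total)

# Two candidate models (straight line, quadratic) with equal prior
# weight, and two candidate 4-point designs on [0, 1] -- all assumptions.
models = [(0, 1), (0, 1, 2)]
d1 = np.array([0.0, 0.25, 0.75, 1.0])
d2 = np.array([0.0, 0.5, 0.5, 1.0])
for d in (d1, d2):
    print(round(model_averaged_a_criterion(d, models, [0.5, 0.5]), 3))
```

The design with the smaller averaged criterion is preferred under this loss, whatever the true model turns out to be.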
Tractable Pareto Optimization of Temporal Preferences
NASA Technical Reports Server (NTRS)
Morris, Robert; Morris, Paul; Khatib, Lina; Venable, Brent
2003-01-01
This paper focuses on temporal constraint problems where the objective is to optimize a set of local preferences for when events occur. In previous work, a subclass of these problems has been formalized as a generalization of Temporal CSPs, and a tractable strategy for optimization has been proposed, where global optimality is defined as maximizing the minimum of the component preference values. This criterion for optimality, which we call 'Weakest Link Optimization' (WLO), is known to have limited practical usefulness because solutions are compared only on the basis of their worst value; thus, there is no requirement to improve the other values. To address this limitation, we introduce a new algorithm that re-applies WLO iteratively in a way that leads to improvement of all the values. We show the value of this strategy by proving that, with suitable preference functions, the resulting solutions are Pareto Optimal.
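Iteratively re-applying WLO so that, once the worst preference value is maximized, the next-worst is improved (and so on) behaves like a leximin selection over preference vectors. A toy stand-in operating on explicit solution vectors, not the paper's constraint-based algorithm:

```python
def leximin_optimal(solutions):
    """Keep the solutions whose ascending-sorted preference vector is
    lexicographically maximal: the worst value is maximized first,
    then the next-worst, and so on."""
    best_key = max(tuple(sorted(s)) for s in solutions)
    return [s for s in solutions if tuple(sorted(s)) == best_key]

prefs = [
    (0.4, 0.9, 0.5),  # weakest link 0.4
    (0.5, 0.6, 0.5),  # weakest 0.5, next-weakest 0.5
    (0.5, 0.8, 0.6),  # weakest 0.5, next-weakest 0.6 -> wins
]
print(leximin_optimal(prefs))  # -> [(0.5, 0.8, 0.6)]
```

Plain WLO would consider the second and third solutions tied (both have worst value 0.5); the iterated criterion breaks the tie in favor of the one whose remaining values are also better.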
Detection of Gaseous Plumes using Basis Vectors
Chilton, Lawrence; Walsh, Stephen
2009-05-01
Detecting and identifying weak gaseous plumes using thermal imaging data is complicated by many factors. There are several methods currently being used to detect plumes. They can be grouped into two categories: those that use a chemical spectral library and those that don’t. The approaches that use chemical libraries include least squares methods and physics-based approaches. They are "optimal" only if the plume chemical is actually in the search set but risk missing chemicals not in the library. The methods that don’t use a chemical spectral library are based on a statistical or data analytical transformation applied to the data. These include principal components, independent components, entropy, Fourier transform, and others. These methods do not explicitly take advantage of the physics of the signal formulation process and therefore don’t exploit all available information in the data. This paper presents initial results of employing basis vectors as a tool for plume detection. It describes the standard generalized least squares approach using gas spectra, presents the detection approach using basis vectors, and compares detection images resulting from applying both methods to synthetic hyperspectral images.
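The generalized-least-squares detection step can be sketched as a matched filter that whitens each pixel spectrum by the background covariance before estimating the gas-signature abundance. A textbook sketch with synthetic data; the paper's formulation additionally handles full spectral libraries and physics-based terms:

```python
import numpy as np

def gls_scores(pixels, target, cov):
    """GLS / matched-filter abundance estimate of one gas signature per
    pixel: alpha_hat = (b' C^-1 x) / (b' C^-1 b)."""
    cov_inv = np.linalg.inv(cov)
    return (pixels @ cov_inv @ target) / (target @ cov_inv @ target)

rng = np.random.default_rng(0)
bands = 20
target = rng.normal(size=bands)               # synthetic gas signature
background = rng.normal(size=(500, bands))    # synthetic clutter spectra
cov = np.cov(background, rowvar=False) + 1e-3 * np.eye(bands)
plume_pixel = background[0] + 3.0 * target    # inject a strong plume
scores = gls_scores(np.vstack([background, plume_pixel]), target, cov)
print(bool(scores[-1] > scores[:-1].max()))   # injected pixel stands out
```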
Dickerson, Justin B; Smith, Matthew Lee; Dowdy, Diane M; McKinley, Ashley; Ahn, Sangnam; Ory, Marcia G
2011-01-01
This study examines the intention of advanced practice nurses (APNs) to utilize health optimization programs (HOPs) for addressing clients' chronic disease in various work settings (i.e., nursing homes or other care settings). A paper-based survey was administered to 270 APNs at a continuing education conference to determine their intentions to refer patients to HOPs for chronic disease management. APNs working in nursing homes were 0.23 times as likely to utilize HOPs for management of their patients' chronic disease compared with their counterparts working in other care settings (odds ratio = 0.23, confidence interval = 0.06-0.80, P = .021). APNs who had previously used a HOP for management of their patients' chronic disease were 5.2 times as likely to do so again relative to those who had not previously used a HOP for management of their patients' chronic disease (odds ratio = 5.17, confidence interval = 1.78-14.99, P = .002). Educational and organizational interventions are recommended to further disseminate HOPs for chronic disease in nursing home settings as part of an overall health optimization strategy. PMID:22055641
Gracis, Stefano
2003-04-01
In fabricating a prosthetic rehabilitation, whether it consists of just a single crown or a complete-mouth reconstruction, one of the main aims of the clinician is to simplify the procedures and reduce the time necessary to integrate it into the mouth of the patient. This article completes the description of the rationale behind the selection of semiadjustable articulators and of a way to transfer to the laboratory technician valuable information that, in the case of extensive rehabilitations, will make occlusal optimization more error-free.
Amador-Angulo, Leticia; Mendoza, Olivia; Castro, Juan R; Rodríguez-Díaz, Antonio; Melin, Patricia; Castillo, Oscar
2016-09-09
A hybrid approach composed of different types of fuzzy systems, such as the Type-1 Fuzzy Logic System (T1FLS), the Interval Type-2 Fuzzy Logic System (IT2FLS) and the Generalized Type-2 Fuzzy Logic System (GT2FLS), for the dynamic adaptation of the alpha and beta parameters of a Bee Colony Optimization (BCO) algorithm is presented. The objective of the work is to focus on the BCO technique to find the optimal distribution of the membership functions in the design of fuzzy controllers. We use BCO specifically for tuning the membership functions of the fuzzy controller for trajectory stability in an autonomous mobile robot. We add two types of perturbations in the model for the Generalized Type-2 Fuzzy Logic System to better analyze its behavior under uncertainty, and this shows better results when compared to the original BCO. We implemented various performance indices: ITAE, IAE, ISE, ITSE, RMSE and MSE to measure the performance of the controller. The experimental results show better performance using GT2FLS than IT2FLS and T1FLS in the dynamic adaptation of the parameters for the BCO algorithm.
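The error-signal performance indices listed (ITAE, IAE, ISE, ITSE, RMSE, MSE) have standard definitions that can be computed directly from a sampled tracking error; the decaying-error signal below is a hypothetical stand-in for a controller run:

```python
import numpy as np

def _trapz(y, x):
    """Trapezoidal rule (avoids the NumPy 2.0 trapz/trapezoid rename)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def control_indices(t, e):
    """Standard error-based performance indices from a sampled error e(t)."""
    return {
        "IAE":  _trapz(np.abs(e), t),        # integral of |e|
        "ISE":  _trapz(e ** 2, t),           # integral of e^2
        "ITAE": _trapz(t * np.abs(e), t),    # time-weighted |e|
        "ITSE": _trapz(t * e ** 2, t),       # time-weighted e^2
        "MSE":  float(np.mean(e ** 2)),
        "RMSE": float(np.sqrt(np.mean(e ** 2))),
    }

# Hypothetical decaying tracking error as a stand-in for a controller run.
t = np.linspace(0.0, 5.0, 501)
e = np.exp(-t)
idx = control_indices(t, e)
print(round(idx["IAE"], 3))  # close to 1 - exp(-5), about 0.993
```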
Mubayi, V.
1995-05-01
The consequences of severe accidents at nuclear power plants can be limited by various protective actions, including emergency responses and long-term measures, to reduce exposures of affected populations. Each of these protective actions involves costs to society. The costs of the long-term protective actions depend on the criterion adopted for the allowable level of long-term exposure. This criterion, called the "long-term interdiction limit," is expressed in terms of the projected dose to an individual over a certain time period from the long-term exposure pathways. The two measures of offsite consequences, latent cancers and costs, are inversely related and the choice of an interdiction limit is, in effect, a trade-off between these two measures. By monetizing the health effects (through ascribing a monetary value to life lost), the costs of the two consequence measures vary with the interdiction limit: the health-effect costs increase as the limit is relaxed while the protective-action costs decrease. The minimum of the total cost curve can be used to calculate an optimal long-term interdiction limit. The calculation of such an optimal limit is presented for each of five US nuclear power plants that were analyzed for severe accident risk in the NUREG-1150 program by the Nuclear Regulatory Commission.
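The trade-off described, rising monetized health-effect cost versus falling protective-action cost as the interdiction limit is relaxed, can be sketched by minimizing the total cost over a grid of candidate limits. The cost curves and units below are illustrative assumptions, not NUREG-1150 results:

```python
import numpy as np

def optimal_interdiction_limit(limits, health_cost, protection_cost):
    """Return the limit minimizing total societal cost, and that cost."""
    total = health_cost + protection_cost
    i = int(np.argmin(total))
    return float(limits[i]), float(total[i])

# Illustrative monotone cost curves over hypothetical dose limits:
limits = np.linspace(0.5, 10.0, 96)   # candidate interdiction limits
health = 2.0 * limits                 # health cost rises as the limit relaxes
protection = 30.0 / limits            # protective-action cost falls
best, cost = optimal_interdiction_limit(limits, health, protection)
print(round(best, 2))  # near the analytic minimum sqrt(30/2), about 3.87
```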
Amador-Angulo, Leticia; Mendoza, Olivia; Castro, Juan R; Rodríguez-Díaz, Antonio; Melin, Patricia; Castillo, Oscar
2016-01-01
A hybrid approach composed of different types of fuzzy systems, such as the Type-1 Fuzzy Logic System (T1FLS), the Interval Type-2 Fuzzy Logic System (IT2FLS) and the Generalized Type-2 Fuzzy Logic System (GT2FLS), for the dynamic adaptation of the alpha and beta parameters of a Bee Colony Optimization (BCO) algorithm is presented. The objective of the work is to focus on the BCO technique to find the optimal distribution of the membership functions in the design of fuzzy controllers. We use BCO specifically for tuning the membership functions of the fuzzy controller for trajectory stability in an autonomous mobile robot. We add two types of perturbations in the model for the Generalized Type-2 Fuzzy Logic System to better analyze its behavior under uncertainty, and this shows better results when compared to the original BCO. We implemented various performance indices: ITAE, IAE, ISE, ITSE, RMSE and MSE to measure the performance of the controller. The experimental results show better performance using GT2FLS than IT2FLS and T1FLS in the dynamic adaptation of the parameters for the BCO algorithm. PMID:27618062
Carver, Charles S; Scheier, Michael F
2014-06-01
Optimism is a cognitive construct (expectancies regarding future outcomes) that also relates to motivation: optimistic people exert effort, whereas pessimistic people disengage from effort. Study of optimism began largely in health contexts, finding positive associations between optimism and markers of better psychological and physical health. Physical health effects likely occur through differences in both health-promoting behaviors and physiological concomitants of coping. Recently, the scientific study of optimism has extended to the realm of social relations: new evidence indicates that optimists have better social connections, partly because they work harder at them. In this review, we examine the myriad ways this trait can benefit an individual, and our current understanding of the biological basis of optimism.
Simmons, M.L.; Wasserman, H.J.
1990-01-01
RISC System/6000 computers are workstations with a reduced instruction set processor recently developed by IBM. This report details the performance of the 6000-series computers as measured using a set of portable, standard-Fortran, computationally-intensive benchmark codes that represent the scientific workload at the Los Alamos National Laboratory. On all but three of our benchmark codes, the 40-ns RISC System was able to perform as well as a single Convex C-240 processor, a vector processor that also has a 40-ns clock cycle, and on these same codes, it performed as well as the FPS-500, a vector processor with a 30-ns clock cycle. 17 refs., 2 figs., 6 tabs.
The Basis Code Development System
1994-03-15
BASIS9.4 is a system for developing interactive computer programs in Fortran, with some support for C and C++ as well. Using BASIS9.4 you can create a program that has a sophisticated programming language as its user interface so that the user can set, calculate with, and plot all the major variables in the program. The program author writes only the scientific part of the program; BASIS9.4 supplies an environment in which to exercise that scientific programming which includes an interactive language, an interpreter, graphics, terminal logs, error recovery, macros, saving and retrieving variables, formatted I/O, and online documentation.
NASA Astrophysics Data System (ADS)
Myers, S. C.; Johannesson, G.; Simmons, N. A.
2011-04-01
We extend the Bayesloc seismic multiple-event location algorithm for application to global arrival time data sets. Bayesloc is a formulation of the joint probability distribution spanning multiple-event location parameters, including hypocenters, travel time corrections, pick precision, and phase labels. Stochastic priors may be used to constrain any of the Bayesloc parameters. Markov Chain Monte Carlo sampling is used to draw samples from the joint probability distribution, and the posterior samples are summarized to infer conventional location parameters such as the hypocenter. The first application of the broad area Bayesloc algorithm is to a data set consisting of all well-recorded events in the Middle East and the most well-recorded events with 5° spatial sampling globally. This sampling strategy is designed to provide the ray coverage needed to determine lithospheric-scale P wave velocity structure in the Middle East using the complementary ray geometry provided by regional (subhorizontal) and teleseismic (subvertical) raypaths and to determine a consistent, albeit lower-resolution, image of global mantle structure. The data set consists of 5401 events and 878,535 P, Pn, pP, sP, and PcP arrivals recorded at 4606 stations. Relocated epicenters are an average of 16 km from bulletin locations. The data set included events that are known to an accuracy of 1 km (a.k.a. GT1) based on nonseismic information. The average distance between GT1 epicenters and our relocated epicenters is 5.6 km. For arrivals labeled P, Pn, and PcP, ˜92%, ˜90%, and 96% are properly labeled with probability >0.9, respectively. Phase labels are found to be erroneous at rates of 0.6%, 0.2%, 1.6%, and 2.5% for P, Pn, PcP, and depth phases (pP and sP), respectively. Labels found to be incorrect, but not erroneous, were reassigned to another phase label. P and Pn residual standard deviation with respect to ak135 travel times are dramatically reduced from 3.45 s to 1.01 s. The
Joseph W. Nielsen; Akira Tokurio; Robert Hiromoto; Jivan Khatry
2014-06-01
Traditional Probabilistic Risk Assessment (PRA) methods have been developed and are quite effective in evaluating risk associated with complex systems, but they lack the capability to evaluate complex dynamic systems, in which the time and energy scales associated with a transient may vary as a function of the transition time to a different physical state. Dynamic PRA (DPRA) methods provide a more rigorous analysis of complex dynamic systems but, while complete, suffer from combinatorial explosion. In order to address the combinatorial complexity arising from the number of possible state configurations and the discretization of transition times, a characteristic scaling metric (LENDIT: length, energy, number, distribution, information and time) is proposed as a means to describe systems uniformly and thus to express the relational constraints expected in the dynamics of complex (coupled) systems. When LENDIT is used to characterize four sets describing reactor operations (normal and off-normal) – 'state, system, resource and response' (S2R2) – LENDIT and S2R2 in combination have the potential to 'branch and bound' the state space investigated by DPRA. In this paper we introduce the concept of LENDIT scales and S2R2 sets applied to a branch-and-bound algorithm and apply the methods to a station blackout (SBO) transient.
Wildgruber, Moritz; Müller-Wille, René; Goessmann, Holger; Uller, Wibke; Wohlgemuth, Walter A.
2016-01-01
Objective The aim of the study was to calculate the effective dose during fluoroscopy-guided pediatric interventional procedures of the liver in a phantom model before and after adjustment of preset parameters. Methods Organ doses were measured in three anthropomorphic Rando-Alderson phantoms representing children of various ages and body weights (newborn 3.5 kg, toddler 10 kg, child 19 kg). Collimation was performed focusing on the upper abdomen, representing mock interventional radiology procedures such as percutaneous transhepatic cholangiography and drainage placement (PTCD). Fluoroscopy and digital subtraction angiography (DSA) acquisitions were performed in a posterior-anterior geometry using a state-of-the-art flat-panel detector. Effective dose was directly measured from multiple incorporated thermoluminescent dosimeters (TLDs) using two different parameter settings. Results Effective dose values for each pediatric phantom were below 0.1 mSv per minute of fluoroscopy, and below 1 mSv for a 1-minute DSA acquisition with a frame rate of 2 f/s. Lowering the values for the detector entrance dose enabled a reduction of the applied effective dose by 12 to 27% for fluoroscopy and 22 to 63% for DSA acquisitions. Similarly, organ doses of radiosensitive organs could be reduced by over 50%, especially when close to the primary x-ray beam. Conclusion Modification of preset parameter settings made it possible to decrease the effective dose for pediatric interventional procedures, as determined by effective dose calculations using dedicated pediatric Rando-Alderson phantoms. PMID:27556584
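Effective dose is the tissue-weighted sum of organ equivalent doses, E = Σ_T w_T H_T, which is how phantom TLD readings of this kind are combined. A minimal sketch using an illustrative subset of ICRP-103 tissue weighting factors and hypothetical organ doses (a full effective-dose calculation sums over all weighted tissues):

```python
def effective_dose(organ_doses_msv, tissue_weights):
    """Effective dose E = sum over tissues of w_T * H_T (mSv)."""
    return sum(tissue_weights[organ] * h for organ, h in organ_doses_msv.items())

# Illustrative subset of ICRP-103 tissue weighting factors.
weights = {"liver": 0.04, "stomach": 0.12, "lungs": 0.12}
doses = {"liver": 0.8, "stomach": 0.5, "lungs": 0.2}  # hypothetical mSv readings
print(round(effective_dose(doses, weights), 4))  # -> 0.116
```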
Tamhane, M.; Gautney, B.; Shiu, C.; Segaren, N.; Jeannis, L.; Eustache, C.; Simeon-Fadois, Y.; Chen, Y. H.; De, D.; Irivinti, S.; Tamma, P.; Thompson, C. B.; Khamadi, S.; Siberry, G.K.; Persaud, D.
2011-01-01
Background The need for nucleic-acid testing (NAT) to diagnose HIV infection in children under age 18 months is a barrier to HIV testing of exposed children in resource-constrained settings. The ultrasensitive HIV p24 antigen (Up24) assay is cheaper and easier to perform and is sensitive (84–98%) and specific (98–100%). The cut-point optical density (OD) selected for discriminating between positive and negative samples may need assessment due to regional differences in mother-to-child HIV-transmission rates. Objectives We used receiver operating characteristic (ROC) curves and logistic regression analyses to assess the effect of various cut-points on the diagnostic performance of Up24 for HIV-infection status among HIV-exposed children. Positive and negative predictive values at different rates of disease prevalence were also estimated. Study design A study of Up24 testing on dried blood spot (DBS) samples collected from 278 HIV-exposed Haitian children, 3–24 months of age, in whom HIV-infection status was determined by NAT on the same DBS card. Results The sensitivity and specificity of Up24 varied by the cut-point OD value selected. At a cut-point OD of 8-fold the standard deviation of the negative control (NCSD), sensitivity and specificity of Up24 were maximized [87.8% (95% CI, 83.9–91.6) and 92% (95% CI, 88.8–95.2), respectively]. In lower-prevalence settings (5%), positive and negative predictive values of Up24 were maximal (75.9% and 98.8%, respectively) at a cut-point OD that was 15-fold the NCSD. Conclusions In low-prevalence settings, a high degree of specificity can be achieved with Up24 testing of HIV-exposed children when a higher cut-point OD is used; a feature that may facilitate more frequent use of Up24 antigen testing for HIV-exposed children. PMID:21330193
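The predictive values quoted follow from sensitivity, specificity, and prevalence by Bayes' rule. A sketch evaluating the 8-fold-NCSD cut-point's figures from the abstract at 5% prevalence; its lower PPV is what motivates the higher cut-point in low-prevalence settings:

```python
def predictive_values(sensitivity, specificity, prevalence):
    """Positive and negative predictive values via Bayes' rule."""
    p = prevalence
    ppv = sensitivity * p / (sensitivity * p + (1 - specificity) * (1 - p))
    npv = specificity * (1 - p) / (specificity * (1 - p) + (1 - sensitivity) * p)
    return ppv, npv

# Sensitivity 87.8% and specificity 92% (the 8-fold-NCSD cut-point in
# the abstract), evaluated at a 5% prevalence.
ppv, npv = predictive_values(0.878, 0.92, 0.05)
print(round(ppv, 3), round(npv, 3))  # -> 0.366 0.993
```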
R.J. Garrett
2002-01-14
As part of the internal Integrated Safety Management Assessment verification process, it was determined that there was a lack of documentation that summarizes the safety basis of the current Yucca Mountain Project (YMP) site characterization activities. It was noted that a safety basis would make it possible to establish a technically justifiable graded approach to the implementation of the requirements identified in the Standards/Requirements Identification Document. The Standards/Requirements Identification Documents commit a facility to compliance with specific requirements and, together with the hazard baseline documentation, provide a technical basis for ensuring that the public and workers are protected. This Safety Basis Report has been developed to establish and document the safety basis of the current site characterization activities, establish and document the hazard baseline, and provide the technical basis for identifying structures, systems, and components (SSCs) that perform functions necessary to protect the public, the worker, and the environment from hazards unique to the YMP site characterization activities. This technical basis for identifying SSCs serves as a grading process for the implementation of programs such as Conduct of Operations (DOE Order 5480.19) and the Suspect/Counterfeit Items Program. In addition, this report provides a consolidated summary of the hazards analyses processes developed to support the design, construction, and operation of the YMP site characterization facilities and, therefore, provides a tool for evaluating the safety impacts of changes to the design and operation of the YMP site characterization activities.
ERIC Educational Resources Information Center
Giorgis, Cyndi; Johnson, Nancy J.
2002-01-01
Presents annotations of approximately 30 titles grouped in text sets. Defines a text set as five to ten books on a particular topic or theme. Discusses books on the following topics: living creatures; pirates; physical appearance; natural disasters; and the Irish potato famine. (SG)
Authorization basis requirements comparison report
Brantley, W.M.
1997-08-18
The TWRS Authorization Basis (AB) consists of a set of documents identified by TWRS management with the concurrence of DOE-RL. Upon implementation of the TWRS Basis for Interim Operation (BIO) and Technical Safety Requirements (TSRs), the AB list will be revised to include the BIO and TSRs. Some documents that currently form part of the AB will be removed from the list. This SD identifies each requirement from those documents, and recommends a disposition for each to ensure that necessary requirements are retained when the AB is revised to incorporate the BIO and TSRs. This SD also identifies documents that will remain part of the AB after the BIO and TSRs are implemented. This document does not change the AB, but provides guidance for the preparation of change documentation.
[Basis of radiation protection].
Roth, J; Schweizer, P; Gückel, C
1996-06-29
After an introduction, three selected contributions to the 10th Course on Radiation Protection held at the University Hospital of Basel are presented. The principles of radiation protection and new Swiss legislation are discussed as the basis for radiological protection. Ways are proposed of reducing radiation exposure while optimizing the X-ray picture with a minimum dose to patient and personnel. Radiation effects from low doses. From the beginning, life on this planet has been exposed to ionizing radiation from natural sources. For about one century, additional irradiation has reached us from man-made sources as well. In Switzerland the overall annual radiation exposure from ambient and man-made sources amounts to about 4 mSv. The terrestrial and cosmic radiation and natural radionuclides in the body cause about 1.17 mSv (29%). As much as 1.6 mSv (40%) results from exposure to radon and its progeny, primarily inside homes. Medical applications contribute approximately 1 mSv (26%) to the annual radiation exposure and releases from atomic weapons, nuclear facilities and miscellaneous industrial operations yield less than 0.12 mSv (< 5%) to the annual dose. Observations of detrimental radiation effects from intermediate to high doses are challenged by observations of biopositive adaptive responses and hormesis following low dose exposure. The important question, whether cellular adaptive response or hormesis could cause beneficial effects to the human organism that would outweigh the detrimental effects attributed to low radiation doses, remains to be resolved. Whether radiation exerts a detrimental, inhibitory, modifying or even beneficial effect is likely to result from identical molecular lesions but to depend upon their quantity, localization and time scale of initiation, as well as the specific responsiveness of the cellular systems involved. For matters of radiation protection the bionegative radiation effects are classified as deterministic effects or
Performance Basis for Airborne Separation
NASA Technical Reports Server (NTRS)
Wing, David J.
2008-01-01
Emerging applications of Airborne Separation Assistance System (ASAS) technologies make possible new and powerful methods in Air Traffic Management (ATM) that may significantly improve the system-level performance of operations in the future ATM system. These applications typically involve the aircraft managing certain components of its Four Dimensional (4D) trajectory within the degrees of freedom defined by a set of operational constraints negotiated with the Air Navigation Service Provider. It is hypothesized that reliable individual performance by many aircraft will translate into higher total system-level performance. To actually realize this improvement, the new capabilities must be attracted to high demand and complexity regions where high ATM performance is critical. Operational approval for use in such environments will require participating aircraft to be certified to rigorous and appropriate performance standards. Currently, no formal basis exists for defining these standards. This paper provides a context for defining the performance basis for 4D-ASAS operations. The trajectory constraints to be met by the aircraft are defined, categorized, and assessed for performance requirements. A proposed extension of the existing Required Navigation Performance (RNP) construct into a dynamic standard (Dynamic RNP) is outlined. Sample data is presented from an ongoing high-fidelity batch simulation series that is characterizing the performance of an advanced 4D-ASAS application. Data of this type will contribute to the evaluation and validation of the proposed performance basis.
Neuromechanical Basis of Kinesiology.
ERIC Educational Resources Information Center
Enoka, Roger M.
This textbook provides a scientific basis for the study of human motion. The eight chapters are organized under three major sections. Part One--The Force-Motion Relationship--contains chapters on (1) motion; (2) force; (3) types of movement analysis. In Part Two--The Simple Joint System--chapters concern (4) simple joint system components; (5)…
Optimal piecewise locally linear modeling
NASA Astrophysics Data System (ADS)
Harris, Chris J.; Hong, Xia; Feng, M.
1999-03-01
Associative memory networks such as Radial Basis Function, Neurofuzzy and Fuzzy Logic networks used for modelling nonlinear processes suffer from the curse of dimensionality (COD): as the input dimension increases, the parameterization, computation cost, training data requirements, etc. increase exponentially. Here a new algorithm is introduced for the construction of optimal piecewise locally linear models over a Delaunay partition of the input space, which overcomes the COD and generates locally linear models directly amenable to linear control and estimation algorithms. The training of the model is configured as a new mixture-of-experts network with a new fast decision rule derived using convex set theory. A very fast simulated reannealing (VFSR) algorithm is utilized to search for a globally optimal Delaunay input space partition. A benchmark nonlinear time series is used to demonstrate the new approach.
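The core idea of a Delaunay-partitioned piecewise locally linear model can be sketched in a few lines. This is a toy illustration with a fixed, hand-chosen vertex set (the paper optimizes the partition with VFSR, which is omitted here); all data and vertices below are assumptions:

```python
import numpy as np
from scipy.spatial import Delaunay

# Toy piecewise locally linear model: triangulate the input space over a
# fixed set of partition vertices, then fit an independent local linear
# model y ~ a1*x1 + a2*x2 + b on the samples falling in each simplex.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(400, 2))   # training inputs (assumed)
y = np.sin(3 * X[:, 0]) * X[:, 1]           # nonlinear target (assumed)

vertices = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1], [0, 0]], float)
tri = Delaunay(vertices)                    # Delaunay input-space partition
simplex = tri.find_simplex(X)               # simplex index of each sample

models = {}
for s in range(tri.nsimplex):
    mask = simplex == s
    A = np.hstack([X[mask], np.ones((mask.sum(), 1))])   # [x1, x2, 1]
    coef, *_ = np.linalg.lstsq(A, y[mask], rcond=None)   # local linear fit
    models[s] = coef

def predict(x):
    s = int(tri.find_simplex(x[None, :])[0])             # locate the simplex
    a1, a2, b = models[s]
    return a1 * x[0] + a2 * x[1] + b

print(predict(np.array([0.2, 0.3])))
```

Each local model is linear, so standard linear control and estimation machinery applies within each simplex, which is the property the abstract emphasizes.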
A mixed basis density functional approach for one-dimensional systems with B-splines
NASA Astrophysics Data System (ADS)
Ren, Chung-Yuan; Chang, Yia-Chung; Hsue, Chen-Shiung
2016-05-01
A mixed basis approach based on density functional theory is extended to one-dimensional (1D) systems. The basis functions here are taken to be localized B-splines for the two finite non-periodic dimensions and plane waves for the third, periodic direction. This approach significantly reduces the number of basis functions and is therefore computationally efficient for the diagonalization of the Kohn-Sham Hamiltonian. For 1D systems, B-spline polynomials are particularly useful and efficient for the two-dimensional spatial integrations involved in the calculations because of their absolute localization. Moreover, B-splines are not tied to atomic positions when the geometry is optimized, making geometry optimization easy to implement. With such a basis set we can directly calculate the total energy of the isolated system instead of using the conventional supercell model with artificial vacuum regions among the replicas along the two non-periodic directions. The spurious Coulomb interaction between a charged defect and its repeated images in the supercell approach for charged systems can also be avoided. A rigorous formalism for the long-range Coulomb potential of both neutral and charged 1D systems under the mixed basis scheme is derived. To test the present method, we apply it to the infinite carbon-dimer chain, a graphene nanoribbon, a carbon nanotube and the positively charged carbon-dimer chain. The resulting electronic structures are presented and discussed in detail.
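The "absolute localization" the abstract relies on can be made concrete: a degree-k B-spline is nonzero only on k+1 adjacent knot intervals, so basis functions far apart have exactly zero overlap and the resulting matrices are banded. A minimal sketch on an assumed uniform 1D knot grid (not the paper's actual mixed plane-wave/B-spline setup):

```python
import numpy as np
from scipy.interpolate import BSpline

k = 3                                    # cubic B-splines
knots = np.arange(0.0, 11.0)             # assumed uniform knots on [0, 10]
n = len(knots) - k - 1                   # 7 basis functions

x = np.linspace(0.0, 10.0, 2001)
B = np.zeros((n, x.size))
for i in range(n):
    bi = BSpline.basis_element(knots[i:i + k + 2], extrapolate=False)
    B[i] = np.nan_to_num(bi(x))          # zero outside the local support

# Overlap matrix S_ij ~ integral of B_i * B_j (crude rectangle rule):
# banded with bandwidth k, since supports are disjoint for |i - j| > k.
S = (B @ B.T) * (x[1] - x[0])
print(np.count_nonzero(S[0]))            # -> 4, i.e. only k+1 nonzero entries
```

The exact zeros (not merely small values) are what makes the two-dimensional spatial integrations cheap in such a basis.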
An optimized proportional-derivative controller for the human upper extremity with gravity.
Jagodnik, Kathleen M; Blana, Dimitra; van den Bogert, Antonie J; Kirsch, Robert F
2015-10-15
When Functional Electrical Stimulation (FES) is used to restore movement in subjects with spinal cord injury (SCI), muscle stimulation patterns should be selected to generate accurate and efficient movements. Ideally, the controller for such a neuroprosthesis will have the simplest architecture possible, to facilitate translation into a clinical setting. In this study, we used the simulated annealing algorithm to optimize two proportional-derivative (PD) feedback controller gain sets for a 3-dimensional arm model that includes musculoskeletal dynamics and has 5 degrees of freedom and 22 muscles, performing goal-oriented reaching movements. Controller gains were optimized by minimizing a weighted sum of position errors, orientation errors, and muscle activations. After optimization, the two gain sets, along with three benchmark gain sets not optimized for our system, were evaluated for accuracy and efficiency on a large set of dynamic reaching movements for which the controllers had not been optimized, to test their ability to generalize. Robustness in the presence of weakened muscles was also tested. The two optimized gain sets were found to have very similar performance to each other on all metrics, and to exhibit significantly better accuracy compared with the three standard gain sets. All gain sets investigated used physiologically acceptable amounts of muscular activation. It was concluded that optimization can yield significant improvements in controller performance while still maintaining muscular efficiency, and that optimization should be considered as a strategy for future neuroprosthesis controller design.
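The optimization loop described above can be illustrated on a much smaller plant. This is a minimal sketch (a single-joint, gravity-loaded arm, not the paper's 5-DOF, 22-muscle model); all plant parameters, cost weights, and the cooling schedule are assumptions:

```python
import math
import random

def simulate(kp, kd, target=1.0, dt=0.005, steps=600):
    """Cost of a PD-controlled reach on a 1-DOF pendulum arm with gravity."""
    m, l, g, b = 1.0, 0.3, 9.81, 0.05        # assumed mass, length, gravity, damping
    th = om = cost = 0.0
    for _ in range(steps):
        u = kp * (target - th) - kd * om      # PD torque command
        alpha = (u - b * om - m * g * l * math.sin(th)) / (m * l * l)
        om += alpha * dt                      # explicit Euler integration
        th += om * dt
        cost += ((target - th) ** 2 + 1e-4 * u * u) * dt  # error + effort
    return cost

def anneal(iters=400, seed=1):
    """Tune [Kp, Kd] by simulated annealing with Metropolis acceptance."""
    rng = random.Random(seed)
    gains = [20.0, 1.0]                       # assumed initial [Kp, Kd]
    cost = simulate(*gains)
    best_gains, best_cost = gains[:], cost
    for i in range(iters):
        T = max(1e-3, 1.0 - i / iters)        # linear cooling schedule
        cand = [max(0.0, g + rng.gauss(0.0, 5.0 * T)) for g in gains]
        c = simulate(*cand)
        if c < cost or rng.random() < math.exp((cost - c) / T):
            gains, cost = cand, c             # accept (always if better)
            if c < best_cost:
                best_gains, best_cost = gains[:], c
    return best_gains, best_cost

gains, cost = anneal()
print(gains, cost)
```

The same structure scales to the paper's setting by swapping in the musculoskeletal simulation for `simulate` and the weighted position/orientation/activation cost for the integrand.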
NASA Astrophysics Data System (ADS)
Visser, Eric P.; Disselhorst, Jonathan A.; van Lier, Monique G. J. T. B.; Laverman, Peter; de Jong, Gabie M.; Oyen, Wim J. G.; Boerman, Otto C.
2011-02-01
The image reconstruction algorithms provided with the Siemens Inveon small-animal PET scanner are filtered backprojection (FBP), 3-dimensional reprojection (3DRP), ordered subset expectation maximization in 2 or 3 dimensions (OSEM2D/3D) and maximum a posteriori (MAP) reconstruction. This study aimed at optimizing the reconstruction parameter settings with regard to image quality (IQ) as defined by the NEMA NU 4-2008 standards. The NEMA NU 4-2008 image quality phantom was used to determine image noise, expressed as percentage standard deviation in the uniform phantom region (%STD_unif), activity recovery coefficients for the FDG-filled rods (RC_rod), and spill-over ratios for the non-radioactive water- and air-filled phantom compartments (SOR_wat and SOR_air). Although not required by NEMA NU 4, we also determined a contrast-to-noise ratio for each rod (CNR_rod), expressing the trade-off between activity recovery and image noise. For FBP and 3DRP the cut-off frequency of the applied filters, and for OSEM2D and OSEM3D the number of iterations, was varied. For MAP, the "smoothing parameter" β and the type of uniformity constraint (variance or resolution) were varied. Results of these analyses were demonstrated in images of an FDG-injected rat showing tumours in the liver, and of a mouse injected with an 18F-labeled peptide, showing a small subcutaneous tumour and the cortex structure of the kidneys. Optimum IQ in terms of CNR_rod for the small-diameter rods was obtained using MAP with uniform variance and β=0.4. This setting led to RC_rod(1 mm)=0.21, RC_rod(2 mm)=0.57, %STD_unif=1.38, SOR_wat=0.0011, and SOR_air=0.00086. However, the highest activity recovery for the smallest rods with still very small %STD_unif was obtained using β=0.075, for which these IQ parameters were 0.31, 0.74, 2.67, 0.0041, and 0.0030, respectively. The different settings of reconstruction parameters were clearly reflected in the rat and mouse images as the trade-off between the recovery of
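The figures of merit used in this study are simple ratios over region-of-interest (ROI) statistics. A hedged sketch on synthetic ROI voxel values (not real Inveon data; the CNR here follows the paper-style recovery/noise trade-off, which NEMA NU 4 itself does not require):

```python
import numpy as np

rng = np.random.default_rng(2)
uniform_roi = rng.normal(100.0, 2.0, 5000)    # assumed uniform-region voxels
rod_max_mean = 74.0                           # assumed mean of max-pixel rod profile
cold_water_roi = rng.normal(0.4, 0.1, 800)    # assumed cold water-insert voxels

mean_unif = uniform_roi.mean()
pct_std_unif = 100.0 * uniform_roi.std(ddof=1) / mean_unif  # image noise %STD_unif
rc_rod = rod_max_mean / mean_unif                           # recovery coefficient RC_rod
sor_wat = cold_water_roi.mean() / mean_unif                 # spill-over ratio SOR_wat
cnr_rod = rc_rod / (pct_std_unif / 100.0)                   # contrast-to-noise CNR_rod

print(f"%STD_unif={pct_std_unif:.2f} RC_rod={rc_rod:.2f} "
      f"SOR_wat={sor_wat:.4f} CNR_rod={cnr_rod:.1f}")
```

Sweeping a reconstruction parameter (e.g. MAP's β) and recomputing these metrics per setting reproduces the kind of trade-off curve the study optimizes over.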
Basis for selecting optimum antibiotic regimens for secondary peritonitis.
Maseda, Emilio; Gimenez, Maria-Jose; Gilsanz, Fernando; Aguilar, Lorenzo
2016-01-01
Adequate management of severely ill patients with secondary peritonitis requires supportive therapy of organ dysfunction, source control of infection and antimicrobial therapy. Since secondary peritonitis is polymicrobial, appropriate empiric therapy requires combination therapy in order to achieve the needed coverage for both common and more unusual organisms. This article reviews etiological agents, resistance mechanisms and their prevalence, how and when to cover them, and guidelines for treatment in the literature. Local surveillance data are the basis for the selection of compounds in antibiotic regimens, which should be further adapted to the increasing number of patients with risk factors for resistance (clinical setting, comorbidities, previous antibiotic treatments, previous colonization, severity…). Inadequate antimicrobial regimens are strongly associated with unfavorable outcomes. Awareness of resistance epidemiology and of the clinical consequences of inadequate therapy against resistant bacteria is crucial for clinicians treating secondary peritonitis, with a delicate balance between optimization of empirical therapy (improving outcomes) and antimicrobial overuse (increasing resistance emergence).
NASA Astrophysics Data System (ADS)
Ulenikov, O. N.; Gromova, O. V.; Bekhtereva, E. S.; Berezkin, K. B.; Kashirina, N. V.; Tan, T. L.; Sydow, C.; Maul, C.; Bauerecker, S.
2016-09-01
The highly accurate (experimental accuracy in line positions ~(1-3)×10⁻⁴ cm⁻¹) FTIR ro-vibrational spectra of CH2=CD2 in the region of 600-1300 cm⁻¹, where the fundamental bands ν10, ν7, ν4, ν8, ν3, and ν6 are located, were recorded and analyzed with a Hamiltonian model that takes into account resonance interactions between all six studied bands. About 12 200 ro-vibrational transitions belonging to these bands (considerably more than in the preceding studies of the bands ν10, ν7, ν8, ν3 and ν6; transitions belonging to the ν4 band were assigned